

AI Safety: AI Standards

Part 1: Standards on AI and functional safety

1. General AI standards

ISO 22989 - Information technology — Artificial intelligence — Artificial intelligence concepts and terminology

  • Provides standardized concepts and terminology to help AI technology be better understood and used by a broader set of stakeholders.
  • Contents include: Chapter 5, AI concepts; Chapter 6, AI system life cycle; Chapter 7, AI system functional overview; Chapter 8, AI ecosystem; Chapter 9, fields of AI; Chapter 10, applications of AI systems.

ISO/IEC JTC1 TR 5469 on “Artificial intelligence — Functional safety and AI systems”

  • Currently at the draft technical report stage.
  • The purpose of this document is to enable the developer of safety-related systems to appropriately apply AI technologies as part of safety functions by fostering awareness of the properties, functional safety risk factors, available functional safety methods and potential constraints of AI technologies.
  • This document also provides information on the challenges and solution concepts related to the functional safety of AI systems.

2. Automotive/Road vehicles

  • ISO 26262-6, Road vehicles — Functional safety — Part 6: Product development at the software level. Covers the software development process for avoiding systematic failures.
  • ISO 21448:2022, Road vehicles — Safety of the intended functionality – Annex D.2–“Implications for Machine Learning”
  • ISO/TR 4804:2020, Road vehicles — Safety and cybersecurity for automated driving systems — Design, verification and validation. Annex B, "Using deep neural networks to implement safety-related elements for automated driving systems", aims to give an overview of the challenges of achieving and assuring the safety of DNNs in automated driving, to propose solutions that address these challenges, and to briefly survey the current state of the art. The TR converts the Safety First white paper into an early first edition that addresses this field worldwide through an ISO standardization activity.
  • ISO/CD TS 5083, Road vehicles — Safety for automated driving systems — Design, verification and validation. ISO TS 5083 evolves from ISO TR 4804 and pursues the enhancements and extensions necessary to broaden and deepen its scope.
  • ISO/CD PAS 8800, Road Vehicles — Safety and artificial intelligence The purpose of this document is to provide industry-specific guidance on the use of AI and, in particular, ML-based systems involved in safety-related functions of road vehicles.

Relationship between AI related standards

[Figure: relationship between the AI-related standards]

Part 2: ISO 8800 Road Vehicles - Safety and artificial intelligence

ISO 8800 current status

Still under development and not yet officially published; official release is expected in 2024. The version discussed here is an industry-expert review draft, so much of the content is work in progress and subject to completion.

ISO 8800 structure & overview

Clause 1: Scope
Clause 2: Normative references
Clause 3: Terms and definitions
Clause 4: Abbreviations
Clause 5: Relation to other safety and AI related standards
Clause 6: AI within the context of road vehicle systems safety engineering. Defines a reference architecture for AI systems in vehicles, a causal model of hazardous behaviour, and the relation to ISO 26262 and ISO 21448.
Clause 7: Safety lifecycle for AI systems
Clause 8: Derivation of safety requirements on AI systems
Clause 9: Selection of AI-Measures and design-related considerations
Clause 10: Data-related considerations
Clause 11: Verification and validation of the AI system
Clause 12: Risk evaluation and safety analysis
Clause 13: Assurance arguments for AI systems
Clause 14: Measures during operation and continuous assurance
Clause 15: Confidence in use of AI-related software tools
Annex A: Example assurance argumentation structure for an AI-based vehicle function (Work in Progress)
Annex B: ISO 26262-6:2018 Gap Analysis for ML
Annex C: Detailed considerations on safety-related properties (Work in Progress)

AI technologies, especially ML, are applied more and more widely in the automotive industry, but guidance and specifications for AI and ML are still missing, e.g. detailed specifications, design guidance, and V&V approaches.

The purpose of this standard is to provide such guidance for applying AI in the automotive industry, not limited to ML.

Clause 5: Relation to other safety and AI related standards

  1. This document provides specific guidance for providing an assurance argument that a tolerable level of residual risk has been achieved for the use of AI in safety-related vehicle functions.
  2. Relation to vehicle safety standards. The document addresses the same two aspects:
  • functional-safety-related risks, with reference to ISO 26262, by providing interpretations of the relevant clauses (Clause 6.4);
  • performance limitations, with reference to ISO 21448, by proposing a causal model from which safety requirements and risk-mitigation measures are derived (Clause 6.5);
  • the document mainly focuses on guidance for refining TSRs into safety-related requirements on the AI system, covering both the design and the operation phases.
  3. Relation to ISO/IEC TR 5469:
  • ISO TR 5469 provides general guidance on the use of AI technology as part of safety functions, without being limited to a specific industry;
  • ISO TR 5469 provides classification schemes to determine the safety requirements of AI/ML functions, covering usage levels and AI technology classes;
  • ISO TR 5469 defines three classes (Class I, Class II, Class III); ISO 8800 mainly provides specific guidance for Class II, with the corresponding usage levels A, B1, B2, C1, C2;
  • General classification scheme for the applicability of AI in safety-related E/E/PE systems (from ISO TR 5469):

[Figures: general classification scheme from ISO TR 5469]

  4. AI Technology Class
  • Class I is assigned if AI technology can be developed and reviewed using existing functional safety International Standards.
  • Class II is assigned if AI technology cannot be fully developed and reviewed using existing functional safety International Standards, but it is still possible to identify the desired properties and the means to achieve them by a set of methods and techniques.
  • Class III is assigned if AI technology cannot be developed and reviewed using existing functional safety International Standards and it is also not possible to identify a set of properties with related methods and techniques to achieve them.
  5. AI Application and Usage Level
  • Usage Level A1 is assigned when the AI technology is used in a functional safety-relevant E/E/PE system and where automated decision-making of the system function using AI technology is possible;
  • Usage Level A2 is assigned when the AI technology is used in a safety-relevant E/E/PE system and where no automated decision-making of the system function using AI technology is possible (e.g. AI technology is used for diagnostic functionality within the E/E/PE system);
  • Usage Level B1 is assigned when the AI technology is used only during the development of the safety-relevant E/E/PE system (e.g. an offline support tool) and where automated decision-making of the function developed using AI technology is possible;
  • Usage Level B2 is assigned when the AI technology is used only during the development of the safety-relevant E/E/PE system (e.g. an offline support tool) and where no automated decision-making of the function is possible;
  • Usage Level C is assigned when the AI technology is not part of a functional safety function in the E/E/PE system, but can have an indirect impact on the function.
  • Usage Level D is assigned if the AI technology is not part of a safety function in the E/E/PE system and has no impact on the safety function due to sufficient segregation and behaviour control.

Clause 6: AI within the context of road vehicle systems safety engineering

Objectives:

  • Define a reference architecture for AI-based functions
  • Define a causal model of hazardous behaviour of AI systems
  • Describe the relation to functional safety (FuSa, ISO 26262) and SOTIF (ISO 21448)

6.2 Reference architecture for AI-based functions

The reference architecture of an AI system is shown in the figure: the AI system takes inputs from sources, performs its task according to control signals, and delivers outputs to consumers. The AI system roughly consists of three parts: AI pre-processing, the AI model, and AI post-processing.
[Figure: reference architecture of an AI system]
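As a rough illustration of this three-stage decomposition, here is a minimal Python sketch; all names are illustrative and not taken from the standard:

```python
# Minimal sketch of the reference architecture: AI pre-processing,
# AI model, AI post-processing. Names are illustrative, not from ISO 8800.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class AISystem:
    preprocess: Callable[[Any], Any]   # e.g. decoding, normalisation, cropping
    model: Callable[[Any], Any]        # the trained AI/ML model itself
    postprocess: Callable[[Any], Any]  # e.g. thresholding, debouncing

    def run(self, raw_input: Any) -> Any:
        """Consume input from a source, deliver output to the consumer.
        Control signals are omitted for brevity."""
        features = self.preprocess(raw_input)
        prediction = self.model(features)
        return self.postprocess(prediction)
```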
Regarding the AI model, a layered architecture can further explain the different technology levels and the processing flow:

  • For higher-level elements, safety is ensured by evaluating the relevant properties;
  • For lower-level elements, safety is ensured by applying the requirements of ISO 26262-6.

[Figures: layered view of the AI model]
See also ISO TR 5469 Clause 7 on the elements involved in an AI model:
[Figures: AI model elements from ISO TR 5469 Clause 7]

6.3 A causal model of hazardous behaviour of AI systems

  1. The classification of safety-related errors and their root causes is shown in the figure. [Figure: classification of safety-related errors and root causes]
  2. Causes of AI system failures and the chain of effects up to the overall vehicle behaviour. [Figure: cause-and-effect chain of AI system failures]
  3. A worked example of locating errors of the AI system: the unintended hazardous behaviour at vehicle level is an unexpected emergency braking; the AI-system-level cause is an incorrect classification by the trained ML model that persists beyond the debounce time, so that a non-existent object is output and confirmed. Failures of other elements that are not part of the AI system can be analysed as conventional safety components and are outside the scope of this standard. [Figure: example error localization]
  4. Safety-related properties are specific properties of the trained AI model or the AI system that support the safety assurance argumentation. The main idea is to apply a systematic analysis to the different properties of the AI system in order to determine their faults and the corresponding risk-treatment measures. [Figures: safety-related properties]

Based on the influencing-factor classes identified in the figures above, each property is analysed to determine its failure modes and effects, and the corresponding prevention and mitigation measures are designed.

6.4 AI and functional safety

Provides guidance on how ML-related topics relate to ISO 26262 and ISO 21448.

  1. During architectural design, requirements on the ML component are derived: system requirements are decomposed into refined component requirements, including sufficiently specific functional requirements. For ML it is usually hard to write sufficiently specific functional requirements, but training and evaluation specifications can serve as training and evaluation requirements, including requirements on the properties of the training and evaluation data. In addition, in line with SOTIF, quantitative requirements such as performance KPIs are introduced to address performance insufficiencies. Training and evaluation requirements and performance KPIs are therefore safety-related requirements.
  2. Training and evaluation requirements and performance-KPI measurement are subject to the systematic faults typical of functional safety:
  • systematic faults in training and evaluation, e.g. in selecting datasets or configuring hyperparameters;
  • systematic faults in KPI measurement, e.g. in selecting and implementing the KPI computation process.
  3. Proposed additional ML/NN considerations for ISO 26262-6:
  • A complex ML algorithm is largely determined by the training dataset, the ML model architecture and the training process, and the training result is difficult to understand analytically. The safety of the functionality allocated to the ML algorithm therefore has to be evaluated by performing appropriate tests and analysing ML-specific failure modes.
  • The ML model training result is checked against a validation set; if the criteria are not met, the hyperparameters are refined and the model retrained (see the sketch after this list).
  • Once training is complete, the trained system is evaluated against a test dataset; this is part of the V&V activities.
  • When the test criteria are met, the trained parameters of the ML model are considered acceptable; if misbehaviour or performance insufficiencies are observed during further testing, the whole process is repeated taking the new information into account.
  • The trustworthiness of ML software can be increased by analysing its explainability.
  • Requirements: an analysis is needed to demonstrate that all features of the ML model have a causal relationship with the output of the ML component. The following aspects should be analysed to evaluate the adequacy of the ML/NN training procedure [Salay and Czarnecki (2018)]: a) control of the differences between the operational and training environments (see also requirement 9.4.6); b) handling of distributional shift; c) representation of safety in the loss function; and d) adequacy of regularization.
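To make the validate/retrain/test workflow in the items above concrete, here is a hedged Python sketch; train_model, evaluate and refine are hypothetical helpers and the acceptance criteria are placeholders:

```python
# Illustrative outline of the train / validate / refine / test loop described
# above. train_model, evaluate and refine are hypothetical helpers.
def develop_ml_model(train_set, validation_set, test_set,
                     hyperparams, val_criterion, test_criterion,
                     max_rounds=10):
    for _ in range(max_rounds):
        model = train_model(train_set, hyperparams)
        if evaluate(model, validation_set) >= val_criterion:
            break                                # validation passed
        hyperparams = refine(hyperparams)        # refine and retrain
    else:
        raise RuntimeError("validation criterion not met")
    # Independent test dataset, as part of the V&V activities.
    if evaluate(model, test_set) < test_criterion:
        raise RuntimeError("test criterion not met; repeat the process")
    return model                                 # trained parameters accepted
```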

Clause 7: Safety lifecycle for AI systems

Objective

  1. Define the safety lifecycle of AI systems and the related activities, so that evidence can be provided that residual errors of the AI system do not lead to unreasonable risk.
  2. Describe a data-driven, iterative approach to developing, evaluating and continuously assuring AI-based functions.
  3. Describe how the understanding of performance and potential insufficiencies gained during development and operation is used to refine requirements and design decisions, while systematically collecting evidence for the safety case.

Reference AI safety lifecycle

[Figure: reference AI safety lifecycle]

Interaction with system level safety activities

The relation between the AI safety lifecycle activities and ISO 26262 and ISO 21448 is shown in the figure below; activities 9 and 11 have to be considered separately for the AI system.
[Figure: interaction between the AI safety lifecycle and system-level safety activities]

Derivation of safety requirements on the AI system - Clause 8

The safety lifecycle activities of the AI system are formally triggered once TSRs are allocated to the AI system. These activities cover performance KPIs, design requirements, data collection requirements, training requirements, and requirements on evaluating the performance of the AI system.

  1. Triggers for requirement iteration: an AI system that satisfies the allocated safety requirements and safety-related properties cannot feasibly be developed; suitable training and test data cannot be found; or the safety requirements and safety-related properties cannot be supported by sufficiently convincing evidence.
  2. Higher-level safety requirements have to be refined into technical safety requirements on the AI system; the refined requirements must be directly measurable and influenceable, e.g. by defining performance evaluation metrics for the AI system.
  3. The completeness and validity of the safety requirements have to be demonstrated, including system-level risk acceptance criteria.
  4. Safety requirements on the AI system are iterated continuously, based on observations during development and later operation.
  5. The derived AI safety requirements must be consistent with the system specification and the proposed design.
  6. Safety artefacts: the specification of safety requirements on the AI system; evidence of the completeness and validity of the safety requirements.

Selection of AI measures and design-related considerations - Clause 9, data specification and collection - Clause 10, and safety analysis - Clause 12

Bidirectional traceability to the system and its requirements has to be established.

Evaluation of the safety assurance argument - Clause 13

Operations: deployment, monitoring, continuous assurance - Clause 14


AI system development should include the following sub-phases:

  1. Selection of an appropriate AI technology and design of the AI system.
  2. Data specification and data collection.
  3. Development/training and test of the AI model.
  4. Design of architectural measures for reducing residual failures.
  5. Measurement of the safety-related performance of the ML system.
  6. Evaluation of the causes and impact of insufficiencies in the ML system.

Safety artefacts

  1. Plan for the (continuous development) of the AI system.
  2. AI system design including justification for design decisions and selection of AI technologies used and traceability to safety requirements on the AI system.
  3. Specification of datasets and traceability to safety requirements on the AI system.
  4. Datasets used for training and test.
  5. A verified AI system consisting of a combination of SW and HW, for integration at system level.
  6. Verification and validation strategy including traceability to the safety requirements on the AI system.
  7. Results of verification and validation on all work products of the AI system development.
  8. Results of safety analysis.

Clause 8: Derivation of safety requirements on AI systems

Objective

Specify the safety requirements on AI systems.
Elaborate and refine/enhance the safety requirements.
Specify precisely the limitations (insufficiencies) of the AI system over its operational domain.

Workflow

The workflow for deriving safety requirements is shown in the figure below:

  • Higher-level safety requirements consist mainly of the technical safety requirements (TSRs) decomposed from FuSa and SOTIF; for SOTIF, the technical AI requirements are mostly expressed as the occurrence of an error pattern within the ODD staying below a given threshold.
  • Once triggering conditions / root causes have been identified, new technical safety requirements can be decomposed to prevent the AI system from producing the corresponding errors; where errors cannot be avoided entirely, requirements that restrict performance under certain triggering conditions can be introduced instead.
  • During the actual operation phase, the engineered AI system needs to support field monitoring so that previously unidentified triggering conditions can be discovered. [Figure: workflow for deriving safety requirements]
  1. SOTIF-related AI safety requirements derived in this way shall (1) be directly traceable to higher-level requirements, assumptions or critical scenarios, and (2) address the potential influencing factors / root causes of the functional insufficiency / triggering condition.
  2. For the derived SOTIF-related AI safety requirements, appropriate rationale and evidence for their sufficiency has to be provided.
  3. For conventional random hardware faults and systematic software faults, safety can be argued with reference to ISO 26262-6.
  4. Requirements that the AI system cannot fully satisfy with respect to SOTIF have to be identified and documented in the specification.

Specific considerations for supervised machine learning

  1. Directly measurable targets derived from higher-level safety requirements:
  • FuSa-related requirements follow the usual development process, with TSRs decomposed and allocated to the AI system;
  • SOTIF-related requirements are mostly expressed in the form: 'the occurrence of a particular error pattern shall be lower than a given threshold'.
  • For an AI function within a specific ODD, the safety specification needs to include a corresponding acceptance criterion, similar to a PFD (probability of failure on demand). In practice, however, the probability of error cannot be computed directly; the document proposes an approach, shown in the figure below: estimate a relative failure probability from a sufficiently large data sample, and complement it with AI process requirements that control the remaining uncertainty (see the sketch after this list). [Figure: estimating the probability of error]
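The standard's exact calculation is not reproduced here, but one common statistical tool for such an acceptance bound is the one-sided Clopper-Pearson upper confidence limit, sketched below under the (strong) assumption of i.i.d. sampling from the ODD:

```python
# Clopper-Pearson upper confidence bound on an error probability estimated
# from finite test samples. An illustration, not a method mandated by ISO 8800.
from scipy.stats import beta

def failure_rate_upper_bound(failures: int, samples: int,
                             confidence: float = 0.95) -> float:
    """Upper bound on the true per-demand error probability."""
    if failures >= samples:
        return 1.0
    return float(beta.ppf(confidence, failures + 1, samples - failures))

# e.g. 2 observed failures in 10,000 i.i.d. samples from the ODD:
print(failure_rate_upper_bound(2, 10_000))  # about 6.3e-4 at 95 % confidence
```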

Therefore, for ML-related components, risk can be reduced throughout the engineering process by managing the sources of uncertainty. These uncertainty sources are referred to here as potential influencing factors / root causes and fall into four classes, as shown in the table below.
[Table: influencing factor classes]
Example requirements for the different influencing factor classes:
[Tables: example requirements per influencing factor class]

  2. Utilizing safety-related properties to restrict the occurrence of AI output insufficiencies. Starting from particular scenarios, or from the safety-related properties, the root causes are examined one by one to derive corresponding safety requirements. This is essentially a bottom-up approach and cannot guarantee that the analysis is exhaustive. [Figure]
  3. Metrics, measurements and threshold design. Quantifiable metrics are provided as safety-related evidence. The metrics need to take into account:
  • metrics and measurement methods for specified properties of AI technologies;
  • measurement and evaluation of the AI technologies with respect to these properties;
  • identification of the types of application to which the above are applied, relating the metrics to the properties;
  • the impact of the chosen metrics and measurement methods on the safety evaluation;
  • whether the design approach taken by the AI system impacts the chosen metrics/measurement methods;
  • the datasets that might be needed to measure each property;
  • the attributes of those datasets and the manner in which they need to be characterised.
Overall, deciding the acceptable threshold for the chosen metrics requires justification.
The justification can come from past product experience, commonly agreed industry consensus, system analysis, expert judgement, or experiments.
The decision may also involve multiple factors whose trade-offs should be taken into consideration.
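As a toy illustration of tying a metric to a property and checking it against a justified threshold, the sketch below computes per-class recall; the classes and threshold values are hypothetical:

```python
# Hypothetical per-class recall KPI checked against per-class thresholds.
import numpy as np

def per_class_recall(y_true, y_pred, n_classes):
    return [float(np.mean(y_pred[y_true == c] == c)) if np.any(y_true == c)
            else float("nan") for c in range(n_classes)]

thresholds = {0: 0.99, 1: 0.95}   # e.g. class 0 = vulnerable road user
recalls = per_class_recall(np.array([0, 0, 1, 1]),
                           np.array([0, 1, 1, 1]), n_classes=2)
violations = {c: r for c, r in enumerate(recalls) if r < thresholds[c]}
print(violations)  # {0: 0.5} -> threshold for class 0 not met
```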

  4. Discussion and recommendations: practical advice on deriving the safety requirements.

Some practical recommendations for the derivation of requirements are summarized below.

  • Because ML-based models lack universal applicability, the specification needs a clearly defined ODD to delimit their operating range;
  • Statistically derived results are often based on idealized assumptions and should be treated as a lower-bound reference;
  • Security-induced safety risks of the AI system need to be considered;
  • For testing and verification purposes, an interface specification of the ML-based component should be provided to ease understanding and testing;
  • Quality requirements from conventional rule-based software development aim at reducing errors, but such rules may contribute little to resolving the uncertainty inherent in AI systems.

Clause 9: Selection of AI-Measures and design-related considerations

Objectives:

  1. Select suitable AI technologies/methods and provide the rationale for the selection.
  2. Determine suitable architectural measures and development measures to ensure that the AI system can fulfil the allocated safety requirements, and to optimize its safety-related properties.

Basic requirements

  1. Architectural measures shall ensure that failures of the AI component do not violate the safety requirements.
  2. Architectural measures shall ensure that the effects of AI component failures can be detected and mitigated (e.g. by a back-up in case of failure).
  3. At the architectural level, the AI system should support monitoring of changes in the input data (distribution shift) relative to the data used during development.
  4. The uncertainty of the AI should be evaluated during the design phase to establish confidence for deployment (see the sketch after this list).
  5. Development measures should mitigate incorrect or unsafe outputs of the AI component.
  6. Safety analyses are needed to evaluate whether the safety requirements are achieved.
  7. Development measures should improve the explainability of the AI component, in order to establish trustworthiness and support validation.
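For the design-time uncertainty evaluation in item 4, one widely used technique (an assumption here, not something the standard mandates) is Monte Carlo dropout, sketched in PyTorch:

```python
# Monte Carlo dropout: keep dropout active at inference time and use the
# spread of repeated stochastic forward passes as an uncertainty estimate.
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, passes: int = 30):
    model.train()                 # keeps dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    return preds.mean(dim=0), preds.std(dim=0)   # prediction, uncertainty
```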

Measures to improve AI trustworthiness

[Figure: measures to improve AI trustworthiness]

Types of AI measures
  1. Development measures: suitable development processes and activities for the AI system and AI components that ensure the safety requirements are fulfilled and strengthen AI trustworthiness.
  2. Architectural measures: concrete technical solutions in the AI system and AI components that ensure the safety requirements are fulfilled and strengthen AI trustworthiness. (Architectural measures have a tangible effect on the AI system and AI components as a whole.)
  3. Overlap: in practice the two kinds of measures overlap, e.g. detection of out-of-distribution (OOD) input data, or uncertainty/confidence estimation to identify distribution shifts (see the sketch below).
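The overlap example in item 3 can be made concrete with a simple softmax-entropy gate that flags potentially out-of-distribution inputs; the threshold is an assumption that would have to be calibrated on in-distribution data:

```python
# Softmax-entropy gate for flagging potentially out-of-distribution inputs.
import numpy as np

def is_out_of_distribution(logits: np.ndarray, entropy_threshold: float) -> bool:
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = float(-np.sum(p * np.log(p + 1e-12)))
    return entropy > entropy_threshold  # high entropy -> low confidence
```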
AI trustworthiness characteristics (see ISO 22989 Clause 5.15 and ISO 25010 Clause 4.1)

Trustworthiness of AI systems refers to characteristics which help relevant stakeholders understand whether the AI system meets their expectations.(ISO 22989)

  1. AI Robustness: For AI systems, robustness can describe their ability to maintain their level of performance, as intended by their developers, under any circumstances. Robustness can encompass other attributes such as resilience and reliability. (ISO 22989)
  2. AI Generalization: the ability to perform well on previously unseen data drawn from the same distribution.
  3. AI Reliability: Reliability is the ability of a system or an entity within that system to perform its required functions under stated conditions for a specific period of time. (ISO 22989)
  4. AI Resilience: Resilience is the ability of the system to recover operational condition quickly following an incident.(ISO 22989)
  5. AI Controllability: Controllability is the property of an AI system that an external agent can intervene in its functioning.(ISO 22989)
  6. AI Explainability: Explainability is the property of an AI system that means that the important factors influencing a decision can be expressed in a way that humans can understand. Explainability can also be a useful means of validating the AI system,(ISO 22989)
  7. AI Predictability: Predictability is the property of an AI system that enables reliable assumptions by stakeholders about the output.(ISO 22989)
  8. AI Transparency: the ability to communicate information about the AI system / AI component to stakeholders, such as details about the data used, the processing, and the level of automation used to make related decisions. (ISO 22989)
  9. AI Maintainability (not safety-related): the ability to modify the AI system / AI component effectively and efficiently.
  10. AI Reusability (not safety-related): the ability to reuse the AI component during development, after deployment, and in different systems/projects.

Selection of AI trustworthiness characteristics: trade-offs between the performance of the AI element and its trustworthiness characteristics are expected, so it has to be decided which characteristics are most important and which are secondary. An example of the relevance between safety-related properties and AI measures is still missing (work in progress in the standard); the main point is to define, for each AI characteristic, the corresponding AI measures.

Examples of development measures for AI System (Work in Progress)

Transparent and analyzable AI architecture
  1. The aim is to show that systematic faults in the development-process activities that influence the safe output of the AI and the AI deliverables have been avoided through safety analysis. For example: hyperparameter settings have undergone peer review; safety analysis methods such as FMEA and HAZOP are used to analyse whether the relevant deviations can lead to a violation of the safety goals.
  2. Identification of SW units in the SW architecture. In ISO 26262, SW units, their interfaces, and the SW architecture are central to the software development process and to V&V. Benefits: clearly defined software units and interfaces allow the impact of changes to each unit to be assessed and resolved early; modular testing reduces the complexity of V&V and eases fault localization; modularity enables incremental testing and streamlines the test process.
  3. In an NN the definition of a SW unit is not clear-cut, so these advantages are hard to inherit. Current problems with SW unit identification: although the NN architecture describes the design process (e.g. via computational graphs and NN layers), individual neurons and layers are not independently testable; given the size and complexity of the neurons and layers, no unit partitioning is feasible; unlike conventional software, the functionality implemented by an NN cannot be clearly mapped onto sub-networks with crisp boundaries, and the allocation of functionality changes once the NN is retrained.
  4. Recommended guidance: systems that implement complex functionality with NNs are encouraged to be decomposed into independent elements to ease testing and analysis. Key points for defining a SW unit: clearly defined interfaces; clearly defined functionality; the ability to carry out V&V activities, such as unit tests, at the unit interface level.
  5. AI model modification (Work in Progress). Insufficient functionality/performance or changes in the usage environment can leave the AI model unable to perform a given task, so the AI model has to be modified. Criteria for retraining (partial or full), i.e. for judging that the AI system can no longer perform its task safely: discrepancies between the ODD the AI system was designed for and the actual operating area, e.g. distributional shift; discrepancies between the capability achieved during development and the capability actually required; performance insufficiencies within the ODD; changes in the requirements on the AI application within the ODD; changes in regulations and standards within the ODD.
  6. Out-of-distribution (OOD) input data (Work in Progress). OOD errors are classification errors of the ML model caused by OOD samples, e.g. unknown road signs or rare objects and scenarios that were not represented in the training dataset, so the model was never trained on them.
  7. Robust learning (Work in Progress)
  8. Attention / saliency maps (Work in Progress)
  9. Augmentation of data (Work in Progress). Recommended structural coverage metrics for AI components at the software unit level (see the sketch after this list): neuron coverage; sign-sign coverage; sign-value coverage; value-value coverage; value-sign coverage. Also: (test) data augmentation based on dataset properties; data augmentation to mitigate sensor error.
  10. Optimization of hyperparameters
  11. Verifying feature selection. The selection and collection of the training dataset determines the capabilities the AI system can acquire; data should be selected with the capabilities the AI system actually needs in mind.
  12. Monitoring multiple scores. After the model has been trained, scores can be used to evaluate the learning accuracy.
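Of the structural coverage metrics listed in item 9, neuron coverage is the simplest; a minimal sketch, following the definition popularised by DeepXplore (Pei et al.) and assuming the post-activation values have already been recorded, is:

```python
# Neuron coverage: fraction of neurons activated above a threshold by at
# least one input of the test suite.
import numpy as np

def neuron_coverage(activations: np.ndarray, threshold: float = 0.0) -> float:
    """activations: shape (n_test_inputs, n_neurons), post-activation values."""
    covered = (activations > threshold).any(axis=0)
    return float(covered.mean())
```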

Examples of architectural measures for control and mitigation of AI-related risk

Architectural measures are provided to control and mitigate performance insufficiencies of AI-based functions.

  1. Measures for architectural redundancy (Work in Progress). Architectural redundancy is expected either to enhance performance, to add different features or functionality, or to ensure that a failure of the AI component is detected and mitigated. Independence is also expected, and to achieve independence, diversity is necessary.
  • Use of AI components and non-AI components (Work in Progress): non-AI components can be used to perform plausibility checks or verification of the output generated by the AI components, i.e. non-AI components provide the redundant functionality.
  1.1 Ensembles (Work in Progress). Ensemble methods combine the predictions obtained by multiple models to create a consolidated output (see the first sketch after this list). References: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9468225, https://arxiv.org/pdf/1905.04223.pdf
  1.2 Diversity using feature-specific and end-to-end AI components (Work in Progress). For tasks such as obstacle and lane detection, different approaches are combined: AI components on one side learn to identify specific obstacles and then plan a specific path/trajectory to avoid them; AI components on the other side learn the end-to-end task of identifying the drivable area without necessarily identifying the obstacles or lanes.
  1.3 N-version programming (Work in Progress)

  2. Measures to determine the need for re-training (Work in Progress). Determine whether the AI component is within the ODD and within its functional and performance boundaries.

  • Dynamic environment monitoring. Reference: https://arxiv.org/pdf/1905.04223.pdf, Table 7
  • Distribution shift monitoring (Work in Progress): detecting and quantifying distribution shifts of machine learning models between development and deployment (see the second sketch below). Types of distribution shifts (Work in Progress): [Figures: types of distribution shift]
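A minimal sketch of the ensemble measure in item 1.1 above: predictions of several independently trained models are combined into a consolidated output, and their disagreement can serve as an additional plausibility signal. The models are assumed to return class-probability vectors:

```python
# Ensemble of diverse models: averaged prediction plus a disagreement signal.
import numpy as np

def ensemble_predict(models, x):
    probs = np.stack([m(x) for m in models])       # (n_models, n_classes)
    consolidated = probs.mean(axis=0)              # combined output
    disagreement = float(probs.std(axis=0).max())  # crude diversity signal
    return int(consolidated.argmax()), disagreement
```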

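For the distribution shift monitoring bullet above, a simple per-feature drift detector can be built from the two-sample Kolmogorov-Smirnov test; the feature representation and the alpha level are illustrative assumptions:

```python
# Per-feature drift detection between development data and field data using
# the two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def shifted_features(dev: np.ndarray, field: np.ndarray, alpha: float = 0.01):
    """dev, field: shape (n_samples, n_features). Returns drifted feature indices."""
    return [i for i in range(dev.shape[1])
            if ks_2samp(dev[:, i], field[:, i]).pvalue < alpha]
```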
Considerations related to the target execution environment (Work in Progress)
Ensure that the performance of the AI function is not adversely affected by changes in the target execution environment or by modifications made during deployment.

Safety Analyses of target execution environment (Work in Progress)
Compatibility between the development and target environment (Work in Progress)
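One concrete form of such a compatibility check, sketched under the assumption that both environments can be exercised with the same test vectors, is to compare development and target inference numerically within justified tolerances:

```python
# Numerical-compatibility check between the development environment (e.g.
# FP32 training framework) and the target execution environment (e.g. a
# quantised embedded runtime). Tolerances are illustrative.
import numpy as np

def check_numerical_compatibility(dev_infer, target_infer, test_inputs,
                                  atol=1e-3, rtol=1e-2):
    for x in test_inputs:
        ref = np.asarray(dev_infer(x))
        out = np.asarray(target_infer(x))
        if not np.allclose(ref, out, atol=atol, rtol=rtol):
            return False   # deviation would need a dedicated safety analysis
    return True
```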


Reprinted from: https://blog.csdn.net/Aleeex_Zhao/article/details/141228810
Copyright belongs to the original author, Alex_Zhao_JLU.
