【中译】美国“人工智能法”:关于安全、可靠和值得信赖的人工智能开发和使用的行政命令 (四)
(上接第(三)部分第5.3-7条)
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
关于安全、可靠和值得信赖的人工智能开发和使用的行政命令(四)
2023年10月30日
Sec. 8. Protecting Consumers, Patients, Passengers, and Students.
(a) Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.
(b) To help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors:
(i) Within 90 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an HHS AI Task Force that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks — possibly including regulatory action, as appropriate — on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health), and identify appropriate guidance and resources to promote that deployment, including in the following areas:
(A) development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing — including quality measurement, performance improvement, program integrity, benefits administration, and patient experience — taking into account considerations such as appropriate human oversight of the application of AI-generated output;
(B) long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users;
(C) incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems;
(D) incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector;
(E) development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings in the health and human services sector;
(F) work to be done with State, local, Tribal, and territorial health and human services agencies to advance positive use cases and best practices for use of AI in local settings; and
(G) identification of uses of AI to promote workplace efficiency and satisfaction in the health and human services sector, including reducing administrative burdens.
(ii) Within 180 days of the date of this order, the Secretary of HHS shall direct HHS components, as the Secretary of HHS deems appropriate, to develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including, as appropriate, in the areas described in subsection (b)(i) of this section. This work shall include the development of AI assurance policy — to evaluate important aspects of the performance of AI-enabled healthcare tools — and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare-technology algorithmic system performance against real-world data.
(iii) Within 180 days of the date of this order, the Secretary of HHS shall, in consultation with relevant agencies as the Secretary of HHS deems appropriate, consider appropriate actions to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by health and human services providers that receive Federal financial assistance, as well as how those laws relate to AI. Such actions may include:
(A) convening and providing technical assistance to health and human services providers and payers about their obligations under Federal nondiscrimination and privacy laws as they relate to AI and the potential consequences of noncompliance; and
(B) issuing guidance, or taking other action as appropriate, in response to any complaints or other reports of noncompliance with Federal nondiscrimination and privacy laws as they relate to AI.
(iv) Within 365 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an AI safety program that, in partnership with voluntary federally listed Patient Safety Organizations:
(A) establishes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings as well as specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination, to patients, caregivers, or other parties;
(B) analyzes captured data and generated evidence to develop, wherever appropriate, recommendations, best practices, or other informal guidelines aimed at avoiding these harms; and
(C) disseminates those recommendations, best practices, or other informal guidance to appropriate stakeholders, including healthcare providers.
(v) Within 365 days of the date of this order, the Secretary of HHS shall develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes. The strategy shall, at a minimum:
(A) define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development;
(B) identify areas where future rulemaking, guidance, or additional statutory authority may be necessary to implement such a regulatory system;
(C) identify the existing budget, resources, personnel, and potential for new public/private partnerships necessary for such a regulatory system; and
(D) consider risks identified by the actions undertaken to implement section 4 of this order.
(c) To promote the safe and responsible development and use of AI in the transportation sector, in consultation with relevant agencies:
(i) Within 30 days of the date of this order, the Secretary of Transportation shall direct the Nontraditional and Emerging Transportation Technology (NETT) Council to assess the need for information, technical assistance, and guidance regarding the use of AI in transportation. The Secretary of Transportation shall further direct the NETT Council, as part of any such efforts, to:
(A) support existing and future initiatives to pilot transportation-related applications of AI, as they align with policy priorities articulated in the Department of Transportation’s (DOT) Innovation Principles, including, as appropriate, through technical assistance and connecting stakeholders;
(B) evaluate the outcomes of such pilot programs in order to assess when DOT, or other Federal or State agencies, have sufficient information to take regulatory actions, as appropriate, and recommend appropriate actions when that information is available; and
(C) establish a new DOT Cross-Modal Executive Working Group, which will consist of members from different divisions of DOT and coordinate applicable work among these divisions, to solicit and use relevant input from appropriate stakeholders.
(ii) Within 90 days of the date of this order, the Secretary of Transportation shall direct appropriate Federal Advisory Committees of the DOT to provide advice on the safe and responsible use of AI in transportation. The committees shall include the Advanced Aviation Advisory Committee, the Transforming Transportation Advisory Committee, and the Intelligent Transportation Systems Program Advisory Committee.
(iii) Within 180 days of the date of this order, the Secretary of Transportation shall direct the Advanced Research Projects Agency-Infrastructure (ARPA-I) to explore the transportation-related opportunities and challenges of AI — including regarding software-defined AI enhancements impacting autonomous mobility ecosystems. The Secretary of Transportation shall further encourage ARPA-I to prioritize the allocation of grants to those opportunities, as appropriate. The work tasked to ARPA-I shall include soliciting input on these topics through a public consultation process, such as an RFI.
(d) To help ensure the responsible development and deployment of AI in the education sector, the Secretary of Education shall, within 365 days of the date of this order, develop resources, policies, and guidance regarding AI. These resources shall address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate. They shall also include the development of an “AI toolkit” for education leaders implementing recommendations from the Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions, designing AI systems to enhance trust and safety and align with privacy-related laws and regulations in the educational context, and developing education-specific guardrails.
(e) The Federal Communications Commission is encouraged to consider actions related to how AI will affect communications networks and consumers, including by:
(i) examining the potential for AI to improve spectrum management, increase the efficiency of non-Federal spectrum usage, and expand opportunities for the sharing of non-Federal spectrum;
(ii) coordinating with the National Telecommunications and Information Administration to create opportunities for sharing spectrum between Federal and non-Federal spectrum operations;
(iii) providing support for efforts to improve network security, resiliency, and interoperability using next-generation technologies that incorporate AI, including self-healing networks, 6G, and Open RAN; and
(iv) encouraging, including through rulemaking, efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI and to deploy AI technologies that better serve consumers by blocking unwanted robocalls and robotexts.
第8条 保护消费者、患者、乘客和学生
(a)鼓励独立监管机构在其认为适当的情况下,考虑充分运用其全部职权,保护美国消费者免受欺诈、歧视和隐私威胁,应对人工智能使用可能产生的其他风险(包括对金融稳定的风险),并考虑制定规则,以及强调或澄清现行法规和指南在哪些方面适用于人工智能,包括澄清监管对象对其使用的任何第三方人工智能服务进行尽职调查和监控的责任,并强调或澄清与人工智能模型透明度以及监管对象解释其使用人工智能模型的能力相关的要求和期望。
(b) 为帮助确保人工智能在医疗卫生、公共卫生和公众服务领域安全、负责任地部署和使用:
(i)自本命令发布之日起90天内,卫生和公众服务部部长应当与国防部部长和退伍军人事务部部长协商,成立一个卫生和公众服务部人工智能工作组。该工作组应当在成立后365天内,就在卫生和公众服务领域(包括研究和发现、药物和设备安全、医疗卫生服务和融资以及公共卫生)负责任地部署和使用人工智能及人工智能赋能技术,制定一项包含政策和框架(可能视情况包括监管行动)的战略计划,并确定促进上述部署的适当指导和资源,包括以下领域:
(A)在医疗卫生服务和融资中开发、维护和使用预测性和生成式人工智能赋能技术,包括质量评估、绩效改进、项目诚信、福利管理和患者体验,同时兼顾对人工智能生成输出的应用进行适当人工监督等考虑因素;
(B)对卫生和公众服务领域人工智能赋能技术的长期安全性和现实世界性能进行监测,包括与临床相关或重大的修改以及在不同人群中的性能表现,并建立向监管机构、开发人员和用户传达产品更新的途径;
(C)在卫生和公众服务领域使用的人工智能赋能技术中纳入公平原则,在开发新模型时使用受影响人群的分类数据和具有代表性的人群数据集,监测现有模型的算法性能以防范歧视和偏见,并帮助识别和减轻当前系统中的歧视和偏见;
(D)将安全、隐私和信息安全标准纳入软件开发生命周期,以保护个人身份信息,包括应对卫生和公众服务领域中人工智能增强型网络安全威胁的措施;
(E)开发、维护和提供相关文档,帮助用户确定在卫生和公众服务领域的本地环境中人工智能的适当和安全用途;
(F)与州、地方、部落和地区卫生和公众服务机构合作,推动在当地环境中使用人工智能的积极用例和最佳实践;和
(G)确定利用人工智能提升卫生和公众服务领域工作效率和工作满意度的用途,包括减轻行政负担。
(ii)在本命令发布之日起180天内,卫生和公众服务部部长应当指示其认为适当的下属部门,与相关机构协商,制定一项战略,以确定卫生和公众服务领域的人工智能赋能技术是否保持适当的质量水平,包括在适当的情况下,涵盖本条第(b)款第(i)项所述领域。这项工作应当包括制定人工智能保证政策(用于评估人工智能医疗工具性能的重要方面),并确定相关基础设施需求,以便根据现实世界数据对人工智能医疗技术算法系统的性能进行上市前评估和上市后监督。
(iii)自本命令发布之日起180天内,卫生和公众服务部部长应当与其认为适当的相关机构协商,考虑采取适当行动,促进接受联邦财政资助的卫生和公众服务提供者及时了解和遵守联邦反歧视法律,并理解该等法律与人工智能的关系。此类行动可能包括:
(A)召集并向卫生和公众服务提供者及付款人提供技术支持,说明他们在联邦反歧视和隐私保护法律项下承担的人工智能相关义务以及违反该等义务的潜在后果;并
(B)发布指导意见或采取其他适当行动,以回应任何与人工智能有关的、违反联邦反歧视和隐私保护法律的投诉或其他举报。
(iv)自本命令发布之日起365天内,卫生和公众服务部部长应当与国防部部长和退伍军人事务部部长协商,设立一个人工智能安全项目,与自愿参与的、经联邦备案的患者安全组织合作,开展以下活动:
(A)为识别和捕捉部署在医疗环境中的人工智能造成的临床错误的方法制定一个通用框架,并就上述人工智能对患者、护理人员或其他方造成伤害(包括通过偏见或歧视)的相关事件制定中央跟踪库规范;
(B)分析获取的数据和生成的证据,在适当的情况下,为避免上述危害,提供建议、最佳实践经验或其他非正式指导;以及
(C)将上述建议、最佳实践经验或其他非正式指导传播给适当的利益相关者,包括医疗卫生服务提供者。
(v)自本命令发布之日起365天内,卫生和公众服务部部长应当制定一项战略,规范在药物开发过程中使用人工智能或人工智能工具的行为。该战略至少应当包括以下内容:
(A)界定在药物开发各阶段进行适当监管所需的目的、目标和总体原则;
(B)明确未来可能需要通过规则制定、指导或额外法定授权来实施此类监管制度的领域;
(C)确定此类监管体系所需的现有预算、资源和人员,以及建立新的公私伙伴关系的潜力;并
(D)考虑为执行本命令第4条所采取的行动中识别出的风险。
(c)为促进人工智能在交通运输领域安全、负责任的开发和使用,并与相关机构协商:
(i)自本命令发布之日起30天内,交通部部长应当指示非传统和新兴交通技术(NETT)委员会评估在交通运输领域使用人工智能所需的信息、技术支持和指导。作为该项工作的一部分,交通部部长应当进一步指示非传统和新兴交通技术委员会采取如下措施:
(A)在现有和未来的交通相关人工智能应用试点计划符合交通部(DOT)《创新原则》所阐明的政策优先事项的情况下,为其提供支持,包括酌情通过技术支持和对接利益相关者的方式;
(B) 评价上述试点项目的结果,以评估交通部或其他联邦或州政府机构何时有足够的信息采取适当的监管行动,并在信息可用时视情况提出适当的行动建议;以及
(C)建立一个新的交通部跨运输方式执行工作组,该工作组由交通部不同部门的成员组成,负责协调各部门之间的相关工作,以征求并利用适当利益相关者的相关意见。
(ii)自本命令发布之日起90天内,交通部部长应当安排其相关的联邦咨询委员会就人工智能在运输中的安全和负责任使用提供建议。上述委员会应当包括高级航空咨询委员会、转型交通咨询委员会和智能交通系统项目咨询委员会。
(iii)自本命令发布之日起180天内,交通部部长应当指示基础设施高级研究计划署(ARPA-I)探索与交通相关的人工智能机遇和挑战,包括影响自主移动生态系统的软件定义人工智能增强功能。交通部部长应当进一步鼓励ARPA-I酌情优先向上述机遇领域分配拨款。ARPA-I的任务应当包括通过公开征求意见程序(例如信息征询(RFI))征集对上述主题的意见。
(d)为帮助确保教育部门负责任地开发和部署人工智能,教育部部长应当在本命令发布之日起365天内,制定有关人工智能的资源、政策和指导。上述资源应当涉及人工智能在教育领域安全、负责任和非歧视性的使用,包括人工智能系统对弱势和服务不足社区的影响,并应酌情与利益相关者协商制定。其中还应包括为落实教育部《人工智能与教与学的未来》报告建议的教育领域领导者开发一个“人工智能工具包”,内容包括对人工智能决策进行适当的人工审查,将人工智能系统设计为能够增强信任和安全并符合教育领域的隐私相关法律法规,以及制定教育领域专用的防护措施。
(e)鼓励联邦通信委员会考虑采取与人工智能将如何影响通信网络和消费者有关的行动,包括:
(i)研究人工智能在改善频谱管理、提高非联邦频谱使用效率和扩大非联邦频谱共享机会方面的潜力;
(ii)与国家电信和信息管理局相协调,为联邦和非联邦频谱运营之间共享频谱创造机会;
(iii)支持使用包含人工智能的下一代技术来提高网络安全性、弹性和互操作性,包括自修复网络、6G和开放式无线接入网络(Open RAN);和
(iv)鼓励通过规则制定等方式,打击由人工智能促成或加剧的骚扰性自动语音电话和自动短信,并部署能够通过拦截骚扰性自动语音电话和自动短信更好地服务消费者的人工智能技术。
Sec. 9. Protecting Privacy.
(a) To mitigate privacy risks potentially exacerbated by AI — including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals — the Director of OMB shall:
(i) evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors, in appropriate agency inventory and reporting processes (other than when it is used for the purposes of national security);
(ii) evaluate, in consultation with the Federal Privacy Council and the Interagency Council on Statistical Policy, agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information (other than when it is used for the purposes of national security) to inform potential guidance to agencies on ways to mitigate privacy and confidentiality risks from agencies’ activities related to CAI;
(iii) within 180 days of the date of this order, in consultation with the Attorney General, the Assistant to the President for Economic Policy, and the Director of OSTP, issue an RFI to inform potential revisions to guidance to agencies on implementing the privacy provisions of the E-Government Act of 2002 (Public Law 107-347). The RFI shall seek feedback regarding how privacy impact assessments may be more effective at mitigating privacy risks, including those that are further exacerbated by AI; and
(iv) take such steps as are necessary and appropriate, consistent with applicable law, to support and advance the near-term actions and long-term strategy identified through the RFI process, including issuing new or updated guidance or RFIs or consulting other agencies or the Federal Privacy Council.
(b) Within 365 days of the date of this order, to better enable agencies to use PETs to safeguard Americans’ privacy from the potential threats exacerbated by AI, the Secretary of Commerce, acting through the Director of NIST, shall create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI. The guidelines shall, at a minimum, describe the significant factors that bear on differential-privacy safeguards and common risks to realizing differential privacy in practice.
(c) To advance research, development, and implementation related to PETs:
(i) Within 120 days of the date of this order, the Director of NSF, in collaboration with the Secretary of Energy, shall fund the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of PETs. The RCN shall serve to enable privacy researchers to share information, coordinate and collaborate in research, and develop standards for the privacy-research community.
(ii) Within 240 days of the date of this order, the Director of NSF shall engage with agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations. The Director of NSF shall, where feasible and appropriate, prioritize research — including efforts to translate research discoveries into practical applications — that encourage the adoption of leading-edge PETs solutions for agencies’ use, including through research engagement through the RCN described in subsection (c)(i) of this section.
(iii) The Director of NSF shall use the results of the United States-United Kingdom PETs Prize Challenge to inform the approaches taken, and opportunities identified, for PETs research and adoption.
第9条 保护隐私
(a)为减轻人工智能可能加剧的隐私风险(包括人工智能促进个人信息的收集或使用,或对个人进行推断),管理和预算办公室主任应当:
(i)评估并采取措施,在适当的机构清单和报告流程中(用于国家安全目的的除外)识别各机构采购的商业可用信息(CAI),特别是包含个人身份信息的商业可用信息,包括从数据经纪商采购的商业可用信息,以及通过供应商间接采购和处理的商业可用信息;
(ii)与联邦隐私委员会和机构间统计政策委员会协商,评估与收集、处理、维护、使用、共享、传播和处置包含个人身份信息的商业可用信息相关的机构标准和程序(用于国家安全目的的除外),为可能就如何减轻机构商业可用信息相关活动带来的隐私和保密风险向各机构提供的指导提供参考;
(iii)在本命令发布之日起180天内,与司法部部长、总统经济政策助理和科技政策办公室主任协商,发布公开征求意见,为可能修订关于各机构实施2002年《电子政府法》(公法107-347)隐私条款的指导意见提供参考。公开征求意见应当就隐私影响评估如何更有效地降低隐私风险(包括被人工智能进一步加剧的隐私风险)征求反馈;并
(iv)根据可适用的法律规定,采取必要且适当的措施,支持和推进通过公开征求意见流程确定的近期行动和长期战略,包括发布新的或更新的指导意见或公开征求意见,或者咨询其他机构或联邦隐私委员会。
(b)自本命令发布之日起365天内,为更好地帮助各机构使用隐私增强技术来保护美国人的隐私免受人工智能可能加剧的威胁,商务部部长应当通过国家标准与技术研究院院长为各机构制定指南,以评估差分隐私保证保护措施的有效性,包括用于人工智能的此类措施。指南至少应当阐明影响差分隐私保障的重要因素,以及在实践中实现差分隐私的常见风险(其基本机制的一个极简示例见第9条译文末尾)。
(c) 为了推进与隐私增强技术相关的研究、开发和应用:
(i)自本命令发布之日起120天内,美国国家科学基金会主席应当与能源部部长合作,资助设立一个研究协调网络(RCN),专门用于推进隐私研究,特别是隐私增强技术的开发、部署和扩展。研究协调网络应当使隐私研究人员能够共享信息,在研究中协调和合作,并为隐私研究行业制定标准。
(ii)自本命令发布之日起240天内,美国国家科学基金会主席应当与各机构合作,明确正在进行的工作以及将隐私增强技术纳入其运营的潜在机会。在可行且适当的情况下,美国国家科学基金会主席应当优先支持鼓励各机构采用尖端隐私增强技术解决方案的研究(包括将研究发现转化为实际应用的工作),包括通过本条第(c)款第(i)项所述的研究协调网络开展研究合作。
(iii)美国国家科学基金会主席应当利用“美国-英国隐私增强技术奖挑战赛”的结果,为隐私增强技术研究和采用所采取的方法以及所确定的机会提供参考。
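为便于理解上文(b)款所述“差分隐私保证保护措施”涉及的关键因素(如隐私预算 ε 与查询敏感度),下面给出一个基于拉普拉斯机制的极简 Python 示意。这只是一个假设性的说明示例,数据与参数均为虚构,并非本行政命令或 NIST 指南的内容:

import numpy as np

def laplace_count(data, predicate, epsilon):
    # 对计数查询添加拉普拉斯噪声:计数查询的敏感度为 1,
    # 噪声尺度取“敏感度/epsilon”,即满足 epsilon-差分隐私。
    true_count = sum(1 for x in data if predicate(x))
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# 虚构示例:统计数据集中年龄大于 60 的人数
ages = [34, 71, 65, 22, 58, 80, 45]
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: 含噪计数约为 {laplace_count(ages, lambda a: a > 60, eps):.1f}")

ε 越小,注入的噪声越大,隐私保护越强,但查询结果越不精确;实践中实现差分隐私的常见风险还包括多次查询累计消耗隐私预算,以及浮点实现缺陷可能削弱理论保证。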
Sec. 10. Advancing Federal Government Use of AI.
10.1. Providing Guidance for AI Management.
(a) To coordinate the use of AI across the Federal Government, within 60 days of the date of this order and on an ongoing basis as necessary, the Director of OMB shall convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations, other than the use of AI in national security systems. The Director of OSTP shall serve as Vice Chair for the interagency council. The interagency council’s membership shall include, at minimum, the heads of the agencies identified in 31 U.S.C. 901(b), the Director of National Intelligence, and other agencies as identified by the Chair. Until agencies designate their permanent Chief AI Officers consistent with the guidance described in subsection 10.1(b) of this section, they shall be represented on the interagency council by an appropriate official at the Assistant Secretary level or equivalent, as determined by the head of each agency.
(b) To provide guidance on Federal Government use of AI, within 150 days of the date of this order and updated periodically thereafter, the Director of OMB, in coordination with the Director of OSTP, and in consultation with the interagency council established in subsection 10.1(a) of this section, shall issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:
(i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI, and carrying out the responsibilities described in section 8(c) of Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), and section 4(b) of Executive Order 14091;
(ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;
(iii) for the agencies identified in 31 U.S.C. 901(b), the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;
(iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;
(v) specific Federal Government uses of AI that are presumed by default to impact rights or safety;
(vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes;
(vii) requirements that agencies identified in 31 U.S.C. 901(b) develop AI strategies and pursue high-impact AI use cases;
(viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:
(A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;
(B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;
(C) reasonable steps to watermark or otherwise label output from generative AI;
(D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;
(E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;
(F) documentation and oversight of procured AI;
(G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;
(H) provision of incentives for the continuous improvement of procured AI; and
(I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and
(ix) requirements for public reporting on compliance with this guidance.
(c) To track agencies’ AI progress, within 60 days of the issuance of the guidance established in subsection 10.1(b) of this section and updated periodically thereafter, the Director of OMB shall develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI. This method should draw on existing related efforts as appropriate and should address, as appropriate and consistent with applicable law, the practices, processes, and capabilities necessary for responsible AI adoption, training, and governance across, at a minimum, the areas of information technology infrastructure, data, workforce, leadership, and risk management.
(d) To assist agencies in implementing the guidance to be established in subsection 10.1(b) of this section:
(i) within 90 days of the issuance of the guidance, the Secretary of Commerce, acting through the Director of NIST, and in coordination with the Director of OMB and the Director of OSTP, shall develop guidelines, tools, and practices to support implementation of the minimum risk-management practices described in subsection 10.1(b)(iv) of this section; and
(ii) within 180 days of the issuance of the guidance, the Director of OMB shall develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance described in subsection 10.1(b) of this section and advance the other aims identified in section 7224(d)(1) of the Advancing American AI Act (Public Law 117-263, div. G, title LXXII, subtitle B).
(e) To improve transparency for agencies’ use of AI, the Director of OMB shall, on an annual basis, issue instructions to agencies for the collection, reporting, and publication of agency AI use cases, pursuant to section 7225(a) of the Advancing American AI Act. Through these instructions, the Director shall, as appropriate, expand agencies’ reporting on how they are managing risks from their AI use cases and update or replace the guidance originally established in section 5 of Executive Order 13960.
(f) To advance the responsible and secure use of generative AI in the Federal Government:
(i) As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI. Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights. To protect Federal Government information, agencies are also encouraged to employ risk-management practices, such as training their staff on proper use, protection, dissemination, and disposition of Federal information; negotiating appropriate terms of service with vendors; implementing measures designed to ensure compliance with record-keeping, cybersecurity, confidentiality, privacy, and data protection requirements; and deploying other measures to prevent misuse of Federal Government information in generative AI.
(ii) Within 90 days of the date of this order, the Administrator of General Services, in coordination with the Director of OMB, and in consultation with the Federal Secure Cloud Advisory Committee and other relevant agencies as the Administrator of General Services may deem appropriate, shall develop and issue a framework for prioritizing critical and emerging technologies offerings in the Federal Risk and Authorization Management Program authorization process, starting with generative AI offerings that have the primary purpose of providing large language model-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators. This framework shall apply for no less than 2 years from the date of its issuance. Agency Chief Information Officers, Chief Information Security Officers, and authorizing officials are also encouraged to prioritize generative AI and other critical and emerging technologies in granting authorities for agency operation of information technology systems and any other applicable release or oversight processes, using continuous authorizations and approvals wherever feasible.
(iii) Within 180 days of the date of this order, the Director of the Office of Personnel Management (OPM), in coordination with the Director of OMB, shall develop guidance on the use of generative AI for work by the Federal workforce.
(g) Within 30 days of the date of this order, to increase agency investment in AI, the Technology Modernization Board shall consider, as it deems appropriate and consistent with applicable law, prioritizing funding for AI projects for the Technology Modernization Fund for a period of at least 1 year. Agencies are encouraged to submit to the Technology Modernization Fund project funding proposals that include AI — and particularly generative AI — in service of mission delivery.
(h) Within 180 days of the date of this order, to facilitate agencies’ access to commercial AI capabilities, the Administrator of General Services, in coordination with the Director of OMB, and in collaboration with the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of the National Aeronautics and Space Administration, and the head of any other agency identified by the Administrator of General Services, shall take steps consistent with applicable law to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce. Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.
(i) The initial means, instructions, and guidance issued pursuant to subsections 10.1(a)-(h) of this section shall not apply to AI when it is used as a component of a national security system, which shall be addressed by the proposed National Security Memorandum described in subsection 4.8 of this order.
第10条 推动联邦政府使用人工智能
10.1 制定人工智能管理指南
(a)为协调整个联邦政府对人工智能的使用,管理和预算办公室主任应当在本命令发布之日起60天内,并在此后视需要持续地,召集并主持一个机构间委员会,以协调各机构在项目和运营中开发和使用人工智能(在国家安全系统中使用人工智能的情形除外)。科技政策办公室主任应当担任该机构间委员会的副主席。机构间委员会的成员应当至少包括《美国法典》第31卷第901条(b)款中规定的机构负责人、国家情报总监以及委员会主席确定的其他机构。在各机构根据本章第10.1条(b)款所述指导意见指定其常设首席人工智能官之前,应由各机构负责人确定的助理部长级或同等级别的适当官员代表该机构参加机构间委员会。
(b)为指导联邦政府使用人工智能,管理和预算办公室主任应当在本命令发布之日起150天内(此后定期更新),与科技政策办公室主任协调,并与本章第10.1条(a)款规定的机构间委员会协商,向各机构发布指南,以加强人工智能的有效和适当使用,推进人工智能创新,并管理联邦政府内的人工智能风险。管理和预算办公室主任的指南应当在适当且符合适用法律的范围内规定以下内容:
(i)要求在指南发布后60天内,在每个机构指定一名首席人工智能官,该首席人工智能官应当与其他负责官员协调,在其机构中承担主要责任,协调本机构对人工智能的使用,促进其机构的人工智能创新,管理其机构使用人工智能的风险,履行2020年12月3日第13960号行政命令(《促进在联邦政府中使用值得信赖的人工智能》)第8条(c)款和第14091号行政命令第4条(b)款规定的职责;
(ii)首席人工智能官的角色、职责、资历、职位和汇报机制;
(iii)对于《美国法典》第31卷第901条(b)款中确定的机构,自上述指南发布后60天内,在每个机构设立内部人工智能治理委员会或其他适当机制,通过整个机构的相关高级领导人协调和治理人工智能问题;
(iv)政府使用影响人民权利或安全的人工智能时所需的最低风险管理实践,包括在适当的情况下,源自科技政策办公室《人工智能权利法案蓝图》和国家标准与技术研究院《人工智能风险管理框架》的以下实践:开展公众咨询;评估数据质量;评估和减轻差别性影响和算法歧视;就人工智能的使用提供通知;持续监测和评估已部署的人工智能;以及对使用人工智能作出的不利决定给予人工考量和补救;
(v)被默认为影响权利或安全的、联邦政府对人工智能的具体使用;
(vi)向各机构提出建议,以减少负责任地使用人工智能的障碍,包括与信息技术基础设施、数据、劳动力、预算限制和网络安全流程有关的障碍;
(vii)《美国法典》第31卷第901条(b)款中规定的机构制定人工智能战略和追求高影响力人工智能用例的要求;
(viii)与商务部部长、国土安全部部长以及管理和预算办公室主任认为合适的其他机构的负责人协商,向各机构提出以下建议:
(A)对人工智能进行外部测试,包括针对生成式人工智能的人工智能红队测试,该等测试应与网络安全和基础设施安全局协调制定;
(B) 测试和保护生成式人工智能,防止歧视性、误导性、煽动性、不安全或欺骗性的内容,防止制作儿童性虐待素材,防止制作真人的非自愿亲密图像(包括可识别个人身体或身体部位的亲密数字内容);
(C)对生成式人工智能的输出添加水印或以其他方式标记的合理步骤;
(D)将本章第10.1条第(b)款第(iv)项规定的强制性最低风险管理实践应用于所采购的人工智能;
(E)对供应商就其人工智能产品的有效性和风险缓解所作声明进行独立评估;
(F)对所采购人工智能的文档记录和监督;
(G)在依赖承包商为人工智能开发和运营目的使用和丰富联邦政府数据时,最大限度地提高对各机构的价值;
(H)为持续改进所采购的人工智能提供激励;和
(I)根据本命令及人工智能相关其他参考资料所规定的原则,进行人工智能培训;以及
(ix)要求对遵守本指南的情况进行公开报告。
(c)为追踪各机构的人工智能进展,在本章第10.1条(b)款所规定指南发布后60天内(此后定期更新),管理和预算办公室主任应当制定一种方法,供各机构跟踪和评估其在项目和运营中采用人工智能、管理人工智能风险以及遵守联邦人工智能政策的能力。该方法应当视情况利用现有的相关工作成果,并应在适当且符合适用法律的情况下,至少在信息技术基础设施、数据、劳动力、领导力和风险管理等领域,涵盖负责任地采用人工智能以及开展培训和治理所需的实践、流程和能力。
(d)为协助各机构执行本章第10.1条(b)款中规定的指南:
(i)在指南发布后90天内,商务部部长应当通过国家标准与技术研究院院长,并与管理和预算办公室主任、科技政策办公室主任协调,制定指导方针、工具和实践,以支持落实本章第10.1条(b)款(iv)项所述的最低风险管理实践;和
(ii)在指南发布后180天内,管理和预算办公室主任应当制定一套初步方法,以确保各机构采购人工智能系统和服务的合同符合本章第10.1条(b)款所规定指南的要求,并推进《推进美国人工智能法案》(公法117-263,G部分,第LXXII编,B分编)第7224条(d)款(1)项中确定的其他目标的实现。
(e)为提高各机构使用人工智能的透明度,管理和预算办公室主任应当根据《推进美国人工智能法案》第7225条(a)款的规定,每年向各机构发布关于收集、报告和公布机构人工智能用例的指示。通过这些指示,管理和预算办公室主任应视情况扩大各机构关于其如何管理人工智能用例风险的报告范围,并更新或取代最初在第13960号行政命令第5条中确立的指南。
(f)为推动联邦政府负责任、安全地使用生成式人工智能:
(i)随着生成式人工智能产品在在线平台上广泛可用和普遍存在,不鼓励各机构对机构使用生成式人工智能施加宽泛的一般性禁令或封锁。各机构应当根据具体的风险评估,在必要时限制对特定生成式人工智能服务的访问;制定适当使用生成式人工智能的指导方针和限制;并在落实适当保障措施的前提下,为其人员和项目提供安全可靠的生成式人工智能能力,至少用于对美国人权利影响风险较低的实验和常规任务。为保护联邦政府信息,还鼓励各机构采取风险管理措施,例如培训其工作人员正确使用、保护、传播和处置联邦信息;与供应商协商约定适当的服务条款;落实旨在确保遵守记录保存、网络安全、保密、隐私和数据保护要求的措施;并部署其他措施,防止在生成式人工智能中滥用联邦政府信息。
(ii)自本命令发布之日起90天内,总务管理局局长应当与管理和预算办公室主任协调,并与联邦安全云咨询委员会和总务管理局局长认为适当的其他相关机构协商,制定并发布一个框架,用以在联邦风险和授权管理项目的授权过程中优先考虑关键和新兴技术产品,首先从主要用于提供基于大语言模型的聊天界面、代码生成和调试工具及相关应用程序编程接口以及基于提示的图像生成器的生成式人工智能产品开始。该框架自发布之日起适用期不少于2年。此外,鼓励各机构首席信息官、首席信息安全官和授权官员在授予机构运营信息技术系统的权限以及任何其他适用的发布或监督流程中,优先考虑生成式人工智能和其他关键和新兴技术,并尽可能采用持续授权和审批。
(iii)自本命令发布之日起180天内,联邦人事管理局局长应当与管理和预算办公室主任协调,制定关于联邦工作人员使用生成式人工智能的指导意见。
(g)自本命令发布之日起30天内,为增加各机构对人工智能的投资,技术现代化委员会应当在其认为适当且符合适用法律的情况下,考虑在至少1年的期限内优先为技术现代化基金中的人工智能项目提供资金。鼓励各机构向技术现代化基金提交将人工智能(尤其是生成式人工智能)用于任务履行的项目资助提案。
(h)在本命令发布之日起180天内,为便利各机构获得商业人工智能能力,总务管理局局长应当与管理和预算办公室主任协调,并与国防部部长、国土安全部部长、国家情报总监、国家航空航天局局长以及总务管理局局长确定的任何其他机构的负责人合作,采取符合适用法律的措施,为特定类型的人工智能服务和产品获得联邦政府范围内的采购解决方案提供便利,例如通过创建资源指南或其他工具来协助采购人员。特定类型的人工智能能力应当包括生成式人工智能和专门的计算基础设施。
(i)当人工智能被用作国家安全系统的组成部分时,根据本章第10.1条(a)-(h)款发布的最初方法、指示和指导不适用于人工智能,而应当通过本命令第4.8条中规定的拟议国家安全备忘录来解决。
10.2. Increasing AI Talent in Government.
(a) Within 45 days of the date of this order, to plan a national surge in AI talent in the Federal Government, the Director of OSTP and the Director of OMB, in consultation with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Assistant to the President and Domestic Policy Advisor, and the Assistant to the President and Director of the Gender Policy Council, shall identify priority mission areas for increased Federal Government AI talent, the types of talent that are highest priority to recruit and develop to ensure adequate implementation of this order and use of relevant enforcement and regulatory authorities to address AI risks, and accelerated hiring pathways.
(b) Within 45 days of the date of this order, to coordinate rapid advances in the capacity of the Federal AI workforce, the Assistant to the President and Deputy Chief of Staff for Policy, in coordination with the Director of OSTP and the Director of OMB, and in consultation with the National Cyber Director, shall convene an AI and Technology Talent Task Force, which shall include the Director of OPM, the Director of the General Services Administration’s Technology Transformation Services, a representative from the Chief Human Capital Officers Council, the Assistant to the President for Presidential Personnel, members of appropriate agency technology talent programs, a representative of the Chief Data Officer Council, and a representative of the interagency council convened under subsection 10.1(a) of this section. The Task Force’s purpose shall be to accelerate and track the hiring of AI and AI-enabling talent across the Federal Government, including through the following actions:
(i) within 180 days of the date of this order, tracking and reporting progress to the President on increasing AI capacity across the Federal Government, including submitting to the President a report and recommendations for further increasing capacity;
(ii) identifying and circulating best practices for agencies to attract, hire, retain, train, and empower AI talent, including diversity, inclusion, and accessibility best practices, as well as to plan and budget adequately for AI workforce needs;
(iii) coordinating, in consultation with the Director of OPM, the use of fellowship programs and agency technology-talent programs and human-capital teams to build hiring capabilities, execute hires, and place AI talent to fill staffing gaps; and
(iv) convening a cross-agency forum for ongoing collaboration between AI professionals to share best practices and improve retention.
(c) Within 45 days of the date of this order, to advance existing Federal technology talent programs, the United States Digital Service, Presidential Innovation Fellowship, United States Digital Corps, OPM, and technology talent programs at agencies, with support from the AI and Technology Talent Task Force described in subsection 10.2(b) of this section, as appropriate and permitted by law, shall develop and begin to implement plans to support the rapid recruitment of individuals as part of a Federal Government-wide AI talent surge to accelerate the placement of key AI and AI-enabling talent in high-priority areas and to advance agencies’ data and technology strategies.
(d) To meet the critical hiring need for qualified personnel to execute the initiatives in this order, and to improve Federal hiring practices for AI talent, the Director of OPM, in consultation with the Director of OMB, shall:
(i) within 60 days of the date of this order, conduct an evidence-based review on the need for hiring and workplace flexibility, including Federal Government-wide direct-hire authority for AI and related data-science and technical roles, and, where the Director of OPM finds such authority is appropriate, grant it; this review shall include the following job series at all General Schedule (GS) levels: IT Specialist (2210), Computer Scientist (1550), Computer Engineer (0854), and Program Analyst (0343) focused on AI, and any subsequently developed job series derived from these job series;
(ii) within 60 days of the date of this order, consider authorizing the use of excepted service appointments under 5 C.F.R. 213.3102(i)(3) to address the need for hiring additional staff to implement directives of this order;
(iii) within 90 days of the date of this order, coordinate a pooled-hiring action informed by subject-matter experts and using skills-based assessments to support the recruitment of AI talent across agencies;
(iv) within 120 days of the date of this order, as appropriate and permitted by law, issue guidance for agency application of existing pay flexibilities or incentive pay programs for AI, AI-enabling, and other key technical positions to facilitate appropriate use of current pay incentives;
(v) within 180 days of the date of this order, establish guidance and policy on skills-based, Federal Government-wide hiring of AI, data, and technology talent in order to increase access to those with nontraditional academic backgrounds to Federal AI, data, and technology roles;
(vi) within 180 days of the date of this order, establish an interagency working group, staffed with both human-resources professionals and recruiting technical experts, to facilitate Federal Government-wide hiring of people with AI and other technical skills;
(vii) within 180 days of the date of this order, review existing Executive Core Qualifications (ECQs) for Senior Executive Service (SES) positions informed by data and AI literacy competencies and, within 365 days of the date of this order, implement new ECQs as appropriate in the SES assessment process;
(viii) within 180 days of the date of this order, complete a review of competencies for civil engineers (GS-0810 series) and, if applicable, other related occupations, and make recommendations for ensuring that adequate AI expertise and credentials in these occupations in the Federal Government reflect the increased use of AI in critical infrastructure; and
(ix) work with the Security, Suitability, and Credentialing Performance Accountability Council to assess mechanisms to streamline and accelerate personnel-vetting requirements, as appropriate, to support AI and fields related to other critical and emerging technologies.
(e) To expand the use of special authorities for AI hiring and retention, agencies shall use all appropriate hiring authorities, including Schedule A(r) excepted service hiring and direct-hire authority, as applicable and appropriate, to hire AI talent and AI-enabling talent rapidly. In addition to participating in OPM-led pooled hiring actions, agencies shall collaborate, where appropriate, on agency-led pooled hiring under the Competitive Service Act of 2015 (Public Law 114-137) and other shared hiring. Agencies shall also, where applicable, use existing incentives, pay-setting authorities, and other compensation flexibilities, similar to those used for cyber and information technology positions, for AI and data-science professionals, as well as plain-language job titles, to help recruit and retain these highly skilled professionals. Agencies shall ensure that AI and other related talent needs (such as technology governance and privacy) are reflected in strategic workforce planning and budget formulation.
(f) To facilitate the hiring of data scientists, the Chief Data Officer Council shall develop a position-description library for data scientists (job series 1560) and a hiring guide to support agencies in hiring data scientists.
(g) To help train the Federal workforce on AI issues, the head of each agency shall implement — or increase the availability and use of — AI training and familiarization programs for employees, managers, and leadership in technology as well as relevant policy, managerial, procurement, regulatory, ethical, governance, and legal fields. Such training programs should, for example, empower Federal employees, managers, and leaders to develop and maintain an operating knowledge of emerging AI technologies to assess opportunities to use these technologies to enhance the delivery of services to the public, and to mitigate risks associated with these technologies. Agencies that provide professional-development opportunities, grants, or funds for their staff should take appropriate steps to ensure that employees who do not serve in traditional technical roles, such as policy, managerial, procurement, or legal fields, are nonetheless eligible to receive funding for programs and courses that focus on AI, machine learning, data science, or other related subject areas.
(h) Within 180 days of the date of this order, to address gaps in AI talent for national defense, the Secretary of Defense shall submit a report to the President through the Assistant to the President for National Security Affairs that includes:
(i) recommendations to address challenges in the Department of Defense’s ability to hire certain noncitizens, including at the Science and Technology Reinvention Laboratories;
(ii) recommendations to clarify and streamline processes for accessing classified information for certain noncitizens through Limited Access Authorization at Department of Defense laboratories;
(iii) recommendations for the appropriate use of enlistment authority under 10 U.S.C. 504(b)(2) for experts in AI and other critical and emerging technologies; and
(iv) recommendations for the Department of Defense and the Department of Homeland Security to work together to enhance the use of appropriate authorities for the retention of certain noncitizens of vital importance to national security by the Department of Defense and the Department of Homeland Security.
10.2 增加政府中的人工智能人才
(a)在本命令发布之日起45天内,为规划在联邦政府范围内迅速扩充人工智能人才,科技政策办公室主任和管理和预算办公室主任应当与总统国家安全事务助理、总统经济政策助理、总统助理兼国内政策顾问以及总统助理兼性别政策委员会主任协商,确定需要增加联邦政府人工智能人才的优先任务领域;确定为确保充分执行本命令并运用相关执法和监管权力应对人工智能风险而最需要优先招聘和培养的人才类型;以及加快招聘的途径。
(b)自本命令发布之日起45天内,为协调快速提升联邦政府人工智能工作人员的能力,总统助理兼政策副幕僚长应当与科技政策办公室主任和管理和预算办公室主任协调,并与国家网络总监协商,召集一个人工智能和技术人才工作组,其中应包括人事管理局局长、总务管理局技术转型服务局局长、首席人力资本官委员会代表、总统人事助理、相关机构技术人才项目成员、首席数据官委员会代表,以及根据本章第10.1条(a)款规定召集的机构间委员会代表。该工作组的目标是加快和跟踪联邦政府人工智能和人工智能赋能人才的招聘工作,包括采取以下行动:
(i)在本命令发布之日起180天内,跟踪并向总统报告联邦政府提高人工智能能力的进展情况,包括就如何进一步提高上述能力而向总统提交报告和建议;
(ii)确定和传播各机构吸引、雇佣、留住、培训和提升人工智能人才的最佳实践,包括多样性、包容性和无障碍方面的最佳实践经验,并为人工智能人才需求进行充分规划和预算;
(iii)与人事管理局局长协商,协调利用各类学者(fellowship)项目、机构技术人才项目和人力资本团队,以建设招聘能力、实施招聘并安置人工智能人才以填补人员缺口;和
(iv)召开一个跨机构论坛,促进人工智能专业人员之间的持续合作,分享最佳实践经验并提高人才留用率。
(c)自本命令发布之日起45天内,为推进现有的联邦技术人才项目,美国数字服务局、总统创新学者项目、美国数字军团、人事管理局以及各机构的技术人才项目应当在本章第10.2条(b)款所述人工智能和技术人才工作组的支持下,在适当且法律允许的情况下,制定并开始实施支持快速招聘人员的计划,作为联邦政府范围内人工智能人才扩充行动的一部分,以加快将关键的人工智能和人工智能赋能人才安置到高优先级领域,并推进各机构的数据和技术战略。
(d)为满足执行本命令各项举措所需合格人员的关键招聘需求,并改进联邦政府招聘人工智能人才的做法,人事管理局局长应当与管理和预算办公室主任协商,采取如下措施:
(i)自本命令发布之日起60天内,对招聘和工作场所灵活性的需求进行循证审查,包括在联邦政府范围内对人工智能及相关数据科学和技术职位授予直接招聘权限,如果人事管理局局长认为该权限适当,则授予该权限;上述审查应当涵盖所有通用职级(GS)级别的以下职位系列:信息技术专家(2210)、计算机科学家(1550)、计算机工程师(0854)和专注于人工智能的项目分析师(0343),以及由上述职位系列衍生的任何后续设立的职位系列;
(ii)自本命令发布之日起60天内,考虑根据《联邦法规汇编》第5卷第213章第3102条(i)款(3)项规定授权使用例外服务任命,以满足雇佣额外员工执行本命令项下指令的需要;
(iii)自本命令发布之日起90天内,协调开展一项由主题专家提供意见并采用基于技能的评估的联合招聘行动,以支持跨机构招聘人工智能人才;
(iv)自本命令发布之日起120天内,在法律允许的情况下,就各机构应当利用人工智能、人工智能赋能和其他关键技术职位的薪酬灵活性或薪酬激励项目发布指导意见,推动适当地运用现有的薪酬激励机制;
(v)自本命令发布之日起180天内,制定在联邦政府范围内以技能为基础招聘人工智能、数据和技术人才的指南和政策,以增加非传统学术背景人员获得联邦人工智能、数据和技术职位的机会;
(vi)自本命令发布之日起180天内,成立一个机构间工作组,配备人力资源专业人员和招聘技术专家,推动在联邦政府范围内招聘具有人工智能和其他技术技能的人员;
(vii)自本命令发布之日起180天内,参考数据和人工智能素养能力要求,审查现有高级行政职位(SES)的高管核心资质(ECQs),并在本命令发布之日起365天内,酌情在高级行政职位评估过程中实施新的核心资质;
(viii)自本命令发布之日起180天内,完成对土木工程师(GS-0810系列)以及其他相关职业(如适用)的能力审查并提出建议,确保在联邦政府的此类职位任职的公务员中有足够的人具备人工智能专业知识和证书,以反映人工智能在关键基础设施中的使用增加;并
(ix)与安全、适任与资格认证绩效问责委员会合作,评估酌情精简和加快人员审查要求的机制,以支持人工智能以及与其他关键和新兴技术相关的领域。
(e)为扩大人工智能人才招聘和留用特别权限的使用,各机构应当视情况运用一切适当的招聘权限,包括附表A(r)例外职位招聘和直接招聘权限,以快速招聘人工智能人才和人工智能赋能人才。除参与人事管理局领导的联合招聘行动外,各机构还应在适当时根据2015年《竞争服务法》(公法114-137)开展由机构牵头的联合招聘及其他共享招聘合作。各机构还应在适用的情况下,对人工智能和数据科学专业人员使用与网络和信息技术职位类似的现有激励措施、薪酬设定权限和其他薪酬灵活性,并使用通俗易懂的职位名称,以帮助招聘和留住这些高技能专业人员。各机构应确保在劳动力战略规划和预算编制中反映人工智能及其他相关人才需求(如技术治理和隐私)。
(f)为方便招聘数据科学家,首席数据官委员会应当开发一个数据科学家职位描述库(职位序列1560),并为支持各机构招聘数据科学家制定招聘指南。
(g)为帮助对联邦工作人员进行人工智能相关培训,每个机构的负责人都应当为技术领域以及相关政策、管理、采购、监管、伦理、治理和法律领域的员工、管理人员和领导层实施人工智能培训和熟悉项目,或增加此类项目的可用性和使用率。例如,此类培训项目应当使联邦雇员、管理人员和领导者能够形成并保持对新兴人工智能技术的实用知识,以评估利用这些技术提升公共服务水平的机会,并降低与这些技术相关的风险。为员工提供专业发展机会、补助金或资金的机构应采取适当措施,确保不担任传统技术职务(如政策、管理、采购或法律领域)的员工仍有资格获得聚焦人工智能、机器学习、数据科学或其他相关学科领域的项目和课程的资助。
(h)自本命令发布之日起180天内,为解决国防人工智能人才缺口,国防部部长应当通过总统国家安全事务助理向总统提交一份报告,报告内容应包括:
(i)就如何应对国防部在雇佣特定非美国公民方面(包括在科学与技术再造实验室)所面临的挑战提出建议;
(ii)通过国防部实验室的有限访问授权,澄清和简化特定非公民雇员访问机密信息的流程的建议;
(iii)根据《美国法典》第10卷第504条(b)款第(2)项规定,为人工智能及其他关键和新兴技术专家适当享有入伍权限的建议;以及
(iv)就国防部和国土安全部如何共同努力、加强运用适当权限,以由国防部和国土安全部留住对国家安全至关重要的特定非公民提出建议。
(后接第(五)部分第11-13条)