Downloads: 2,301 | Citations: 4 | Reads: 92
Abstract: Ethics review of artificial intelligence (AI) is an important means of preventing the ethical risks posed by AI and deserves close attention. Academia and technology companies have begun to pay attention to and implement ethics review, but specific norms and standards are still at an exploratory stage. Self-review by AI researchers, peer review, and institutional review all face considerable challenges, which constrain the smooth implementation of AI ethics review and the proper performance of its functions. Focusing on the core problems of AI ethics review, we should build a relatively complete system of AI ethics review by strengthening self-review, improving peer review, and promoting institutional review, so that it plays its due positive role and thereby promotes the healthy development of AI.
Basic information:
DOI:10.19883/j.1009-9034.2024.0124
CLC classification: B82-057; TP18
Citation:
[1] DU Yanyong. Ethics Review of Artificial Intelligence: Current Status, Challenges, and the Way Forward [J]. Journal of Donghua University (Social Science Edition), 2024, 24(02): 32-39. DOI: 10.19883/j.1009-9034.2024.0124.
Funding:
Major Project of the National Social Science Fund of China (2020), "Research on Preventing Ethical Risks of Artificial Intelligence" (Project No. 20&ZD041)