Journal of Psychological Science, 2022, Vol. 45, Issue 5: 1267-1272.


Technology Trap and Moral Hazard of Natural Language Processing in Predicting Depression

Ya-Ting Ding1, Lin Wu2

  1. Wuhan University
  2. School of Sociology, Wuhan University
  • Corresponding author: Lin Wu
  • Received: 2020-09-04  Revised: 2021-05-12  Online: 2022-09-20  Published: 2022-09-20

Abstract: With the rise of the Internet, more and more people with depression tend to post messages carrying depression signals on social networking platforms. The traditional approach of face-to-face questionnaire measurement takes a long time and consumes large quantities of manpower and material resources, and because of stigma, economic pressure, or other reasons, most patients with depression are reluctant to seek formal detection and diagnosis at professional hospitals. Detecting users' language on Twitter, Facebook, Weibo, and other large-scale social networking platforms is not only cheaper and more convenient, but can also identify users' depressive tendencies and states in a timely manner, provide early warnings of self-injury and suicidal behavior, and help more users learn to recognize their own psychological status. Based on the text information of social platforms and with the help of natural language processing (NLP), scholars extract and summarize users' linguistic characteristics, such as "self-focus", "more simple sentences", and "negative language", and establish prediction models to analyze and process the text, which can predict users' potential depression and link them to relevant information or medical resources. Because of the particularity of users with depression, their information is highly sensitive, and improper handling or leakage of private information will cause secondary harm to patients. At the same time, because NLP technology is still immature and not fully compatible with social platform technology, detection results can be inaccurate, giving rise to problems such as algorithmic bias and information misjudgment. The development of technology is inseparable from the support of capital, and there is a huge commercial value chain behind the growing population with depression. The marketization of science and technology can improve the accuracy of prediction, but it also makes technology subject to interest-oriented drives. Criminals use increasingly high-tech means to abuse user information without users' awareness for targeted advertising, and even maliciously exaggerate patients' conditions for profiteering. At present, relevant laws and regulations have been issued at home and abroad, but in the face of the rapid development of artificial intelligence, existing laws and regulations can only offer framework-level explanations; they cannot provide specific guidance for resolving ethical dilemmas or give a clear definition of rights and responsibilities. Ethical problems are likely to occur in the collection, processing, and use of user information, and how such problems are coordinated will directly affect the development of the whole industry. In the future direction of combining artificial intelligence and medicine, it is both necessary and urgent to use NLP in a better way and to avoid a series of ethical problems in its operation. At the micro level, the programming and operation of NLP-based depression prediction on social platforms should be regulated; at the macro level, avoiding complex ethical issues among the multiple related parties can make science and technology better serve people rather than further deprive and plunder the spiritual world in the name of science and technology.
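The screening pipeline described in the abstract (hand-crafted linguistic features such as self-focus, short simple sentences, and negative language, fed to a prediction model) can be illustrated with a minimal Python sketch. The word lists, feature set, training texts, and labels below are hypothetical placeholders, and the plain logistic-regression classifier stands in for the more sophisticated models used in the literature; this does not reproduce any specific system discussed in the paper.

```python
# Minimal illustrative sketch, not the models surveyed in the paper:
# toy linguistic features of the kind described above ("self-focus",
# simple/short sentences, negative language) fed to a logistic regression.
import re
from sklearn.linear_model import LogisticRegression

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}                   # "self-focus" cues (toy list)
NEGATIVE_WORDS = {"sad", "alone", "tired", "hopeless", "worthless"}  # toy negative lexicon

def extract_features(post: str) -> list[float]:
    """Turn one social-media post into a small numeric feature vector."""
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    tokens = re.findall(r"[a-z']+", post.lower())
    n_tokens = max(len(tokens), 1)
    return [
        sum(t in FIRST_PERSON for t in tokens) / n_tokens,    # self-focus ratio
        sum(t in NEGATIVE_WORDS for t in tokens) / n_tokens,  # negative-language ratio
        n_tokens / max(len(sentences), 1),                    # average sentence length
    ]

# Hypothetical training data: posts paired with made-up risk labels (1 = at risk).
texts = [
    "I feel so alone and tired. I am worthless.",
    "Great hike with friends today, the weather was perfect!",
    "I can't sleep. I feel hopeless about everything.",
    "Looking forward to the concert next week with my family.",
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit([extract_features(t) for t in texts], labels)

new_post = "I am so tired of feeling sad and alone."
print(model.predict_proba([extract_features(new_post)])[0][1])  # estimated risk score
```

Real systems replace the toy lexicons with validated dictionaries or learned text representations and train on clinically labeled corpora, but the overall structure of feature extraction followed by a prediction model is the same as sketched here.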

Key words: natural language processing, depression, technical traps, moral hazard

Abstract (Chinese): People with depression tend to post messages carrying depression signals on online social platforms. Based on this textual information and with the help of natural language processing, users' linguistic features can be extracted and summarized to predict the depression status of potential patients. Owing to the sensitivity of private information and the immaturity of the relevant technology, practical problems have emerged, such as information acquisition versus privacy violation, algorithmic bias and information misjudgment, information rights versus information interests, and ambiguous definitions of responsibility and authority, which constrain further development. Upgrading algorithmic technology, improving laws and regulations, and strengthening industry ethical constraints are important measures for avoiding moral hazard.
