Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust

Abstract
The rapid advancement of artificial intelligence (AI) has affected many aspects of society. Alongside this progress, concerns such as privacy violations, discriminatory bias, and safety risks have surfaced, highlighting the need to develop ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how the insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.