Sunday, February 23, 2025

There is cultural variability in attitudes toward chatbots mediated by differences in anthropomorphism.

Folk, D. P., Wu, C., & Heine, S. J. (2025). Cultural Variation in Attitudes Toward Social Chatbots. Journal of Cross-Cultural Psychology, 0(0). https://doi.org/10.1177/00220221251317950

Abstract: Across two studies (Total N = 1,659), we found evidence for cultural differences in attitudes toward socially bonding with conversational AI. In Study 1 (N = 675), university students with an East Asian cultural background expected to enjoy a hypothetical conversation with a chatbot (vs. human) more than students with a European background. Moreover, they were less uncomfortable and more approving of a hypothetical situation where someone else socially connected with a chatbot (vs. human) than the students with a European background. In Study 2 (preregistered; N = 984), we found similar evidence for cultural differences comparing samples of Chinese and Japanese adults currently living in East Asia to adults currently living in the United States. Critically, these cultural differences were explained by East Asian participants' increased propensity to anthropomorphize technology. Overall, our findings suggest there is cultural variability in attitudes toward chatbots and that these differences are mediated by differences in anthropomorphism.

From the introduction:

Hundreds of millions of people all over the world have used conversational artificial intelligence (AI) such as ChatGPT (Hu, 2023; Zhou et al., 2020). While conversational AI (or chatbots for short) can be used to answer search queries and increase productivity (Fauzi et al., 2023; Surameery & Shakor, 2023), a growing number of people are using chatbots specifically designed to provide emotional connection (Blakely, 2023; Clarke, 2023; Metz, 2020). These social chatbots, as well as other forms of social robots, are particularly popular in East Asia (Technavio, 2023; Yam et al., 2023; Zhou et al., 2020). Indeed, the Chinese social chatbot Xiaoice has had over 600 million registered users since its release in 2014 (Zhou et al., 2020), and social robots in Japan are already caring for the elderly (Lufkin, 2020) and providing companionship as pets (Craft, 2022).

Yet, despite increased popularity in these countries, there is conflicting evidence for the idea that East Asians harbor more favorable attitudes toward social robots than Westerners (see Lim et al., 2021 for a review). For example, Bartneck et al. (2006) used the Negative Attitudes Toward Robots Scale and found that Americans held more positive views toward robots (vs. Japanese), but another study found that specific components of the robot's design determined which culture held more positive impressions (Bartneck, 2008). Critically, however, most of this research is severely underpowered, limiting the conclusions that can be drawn (Lim et al., 2021).

1 comment:

  1. As you highlight, Bill, chatbot usage is heading toward a billion users. Wow. Snow Crash territory.

    We are living in a natural experiment.
    How do I leave the lab?
    No way out.

    This important US study, from when the government was a government, has been / will be quashed by the increasingly deranged US "government"... choose your own word.

    The FTC has been DOGE'd (censored censorship!) out of an "inquiry to understand 'how consumers have been harmed [...] by technology platforms that limit users' ability to share their ideas or affiliations freely and openly'".

    FTC Link returns:
    "NOTICE: The FTC website is currently unavailable. Thank you for your patience while we work to restore service."
    https://www.ftc.gov/news-events/news/press-releases/2025/02/federal-trade-commission-launches-inquiry-tech-censorship

    ...to allow...

    "Erotica, gore and racism: how America’s war on ‘ideological bias’ is letting AI off the leash"
    ...
    "On February 20, the US Federal Trade Commission announced an inquiry to understand “how consumers have been harmed […] by technology platforms that limit users’ ability to share their ideas or affiliations freely and openly”. Introducing the inquiry, the commission said platforms with internal processes to suppress unsafe content “may have violated the law”.

    "The latest version of the Elon Musk–owned Grok model already serves up “based” opinions, and features an “unhinged mode” that is “intended to be objectionable, inappropriate, and offensive”. Recent ChatGPT updates allow the bot to produce “erotica and gore”.

    "These developments come after moves by US President Donald Trump to deregulate AI systems. Trump’s attempt to remove “ideological bias” from AI may see the return of rogue behaviour that AI developers have been working hard to suppress."
    ...
    Published: February 24, 2025
    https://theconversation.com/erotica-gore-and-racism-how-americas-war-on-ideological-bias-is-letting-ai-off-the-leash-250060

    The cultural study "There is cultural variability in attitudes toward chatbots mediated by differences in anthropomorphism" has only 360 Americans, and... "Exclusions. We reached our final sample of 360 American participants by recruiting 401 participants and excluding 41 of them for failing our comprehension check (as preregistered)."
    Failing "our comprehension check" dumps 10%! This may matter: people with poor comprehension might use chatbots more (or less), or be more easily influenced.

    Binaries... ??? Versus?
    "Unnerved Feelings About the interaction
    "American Versus Chinese Participants"
    "American Versus Japanese Participants"

    For the "Independent variable: Culture", it looks as though all the models' scenarios are set to "0 = American, 1 = Chinese". I haven't the time (nor will I bother for this study) to work out what 1 vs. 0 means... culture on? Off? All the same?
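    For what it's worth, "0 = American, 1 = Chinese" is standard dummy (indicator) coding, not an on/off switch for culture: the 0-coded group is the reference category, and the regression slope on the dummy estimates the group difference. A minimal sketch with made-up numbers (not the study's data) illustrates this:

    ```python
    import numpy as np

    # Hypothetical illustration of dummy coding (invented scores, not the study's data):
    # culture = 0 for American participants, 1 for Chinese participants.
    culture = np.array([0, 0, 0, 1, 1, 1])
    attitude = np.array([3.0, 4.0, 5.0, 5.0, 6.0, 7.0])  # made-up outcome ratings

    # OLS with an intercept: attitude = b0 + b1 * culture
    X = np.column_stack([np.ones_like(culture), culture])
    b0, b1 = np.linalg.lstsq(X, attitude, rcond=None)[0]

    # b0 is the mean of the 0-coded (American) group;
    # b1 is the Chinese-minus-American difference in means.
    print(b0)  # 4.0 (American group mean)
    print(b1)  # 2.0 (Chinese mean 6.0 minus American mean 4.0)
    ```

    So in the paper's mediation models, the coefficient on "Culture" is just the estimated American-to-Chinese (or American-to-Japanese) difference, with American as the baseline.
    
    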

    "There is cultural variability in attitudes toward chatbots mediated by differences in anthropomorphism."
    Well, yes, and a ton of other variables too: age, education, etc.

    Much more education and study needed. Glad you are on it Bill. Thanks.
