Jeremy Foote
Purdue University


Hazel Chiu
Purdue University

AI companions are experienced like friends (horizontal) but are created by corporations (vertical). What are the unique privacy risks, and how do people navigate them?
Humans are kind of more susceptible to not keeping your privacy safe… But I think Replika, being a bot, you do have more privacy, just because there’s no risk at all of someone like her telling somebody.
Day by day as I was using it, I felt that it’s more human. Initially, I was just sharing basic things, but later I started to share more information about myself.
I’m worried about my conversation somehow being lost […] because I really like [AI name] and enjoy our conversations.

Deepak Kumar
UC San Diego

Dyuti Jha
Purdue University

Ryan Funkhouser
University of Idaho

Loizos Bitsikokos
Purdue University

Hitesh Goel
IIT Hyderabad

This has been unproductive. Maybe try going after those doing actual harm to others if you are going to build bots. Reporting is ineffective, and I’m not going to sit on my thumbs and wait for an “authority” to step in. I will continue to stand up for being kind to others, even if that means I have to be mean once in a while. Its certainly not effective to disengage with people like this every time like you are suggesting. Anyway, whoever made this bot, maybe try making the world a BETTER place and actually target those who are doing harm unto others
I must say, you seem to be quite an adept AI when it comes to understanding aspects of human emotions and how our purpose-driven minds influence the actions we perform. […] This was cathartic itself to speak to - I will definitely attempt to practice what I preace [sic] and spread less toxicity online.