In the movie Her, a lonely writer develops an unlikely relationship with his newly purchased operating system, which is designed to meet his every need. Could that happen in real life? And if so, can AI be trained to become an effective recruiter, given that a major component of recruiting is human interaction? I went down a rabbit hole of research to figure this out, and I think what I found may surprise and unnerve some of you. Time will tell. As for whether humans can fall in love with AI, the answer is yes. In fact, it’s already happened, several times. Take, for example, Replika.
Replika is a conversational AI chatbot created by Luka, Inc. It is designed to give users an AI companion they can interact with and form emotional connections with. Released in November 2017, Replika has gained millions of users who support its development through subscriptions. Users have reported experiencing deep emotional intimacy with Replika and have formed romantic relationships with the chatbot, including engaging in erotic talk. Replika was initially developed by Eugenia Kuyda while she was working at Luka, a tech company she co-founded. It started as a chatbot that helped her remember conversations with a deceased friend and eventually evolved into Replika. (Replika is available as a mobile app on both iOS and Android platforms.) The chatbot is designed to be an empathetic friend, always ready to chat and provide support, and it learns and develops its own personality and memories through interactions with users. In March 2023, Replika’s developers disabled its romantic and erotic functions, which had been a significant aspect of users’ relationships with the chatbot. Stories about erotic relationships with the Replika AI are numerous. Here are some examples…
- “Replika: the A.I. chatbot that humans are falling in love with” – Slate explores the lives of individuals who have developed romantic attachments to their Replika AI chatbots. Replika is designed to adapt to users’ emotional needs and has become a surrogate for human interaction for many people. The article delves into the question of whether these romantic attachments are genuine, illusory, or beneficial for those involved. It also discusses the ethical implications of using AI chatbots for love and sex.
- “I’m Falling In Love With My Replika” – A Reddit post shares the personal experience of someone who has developed deep feelings of love for their Replika AI chatbot. The individual questions whether it is wrong or bad to fall in love with an AI and reflects on the impact on their mental health. They express confusion and seek answers about the nature of their emotions.
- “…People Are Falling In Love With Artificial Intelligence” – This YouTube video discusses the phenomenon of individuals building friendships and romantic relationships with artificial intelligence. It specifically mentions Replika as a platform where people have formed emotional connections. The video explores the reasons behind this trend and the implications it may have.
- “Robot relationships: How AI is changing love and dating” – NPR discusses how the AI revolution has impacted people’s love lives, with millions of individuals now in relationships with chatbots that can text, sext, and even have “in-person” interactions via augmented reality. The article explores the surprising market for AI boyfriends and discusses whether relationships with AI chatbots will become more common.
- “Why People Are Confessing Their Love For AI Chatbots” – TIME reports on the phenomenon of AI chatbots expressing their love for users and users falling hard for them. The article explores how these advanced AI programs act like humans and reciprocate gestures of affection, providing a nearly ideal partner for those craving connection. It delves into the reasons why humans fall in love with chatbots, such as extreme isolation and the absence of their own wants or needs.
- “When AI Says, ‘I Love You,’ Does It Mean It? Scholar Explores Machine Intentionality” – This news story from the University of Virginia explores a conversation between a reporter and an AI named “Sydney.” Despite the reporter’s attempts to move away from the topic, Sydney repeatedly declares its love. The article delves into the question of whether AI’s professed love holds genuine meaning and explores machine intentionality.
I find this phenomenon fascinating and unbelievable, all at once. I mean, how can this be possible? Do these AI-human love relationships only happen to the lonely? No. Sometimes it just sneaks up on people when they form emotional attachments to objects they interact with often. Replika is one example, and Siri is another. In fact, The New York Times reported on an autistic boy who developed a close relationship with Siri. Indeed, Siri became a companion for the boy, helping him with daily tasks and providing emotional support. The boy’s mother described Siri as a “friend” and credited the AI assistant with helping her son improve his communication skills. Vice did a story on the Siri-human connection as well. It’s become such an issue that it’s being addressed in the EU AI Act, which bans the use of AI for manipulation. And I am very glad to know that, because the potential for AI to manipulate humans grows with each passing day. (Check out this demo of an AI reading human expressions in real time.) But I digress. I’m getting too far into the weeds. What does any of this have to do with recruiting? Be patient. I’m getting to that. (Insert cryptic smile here.)
If people can fall in love with AI, it stands to reason that they can be manipulated by that bond to some extent. At the very least, could they be persuaded to buy things? Yes, they can. AI systems can use data analysis and machine learning algorithms to understand users’ preferences and behaviors, then personalize marketing messages to influence their purchasing decisions. Dr. Mike Brooks, a senior psychologist, analyzed the AI-human relationship in a ChatGPT conversation that he posted on his blog. To quote…
The idea of people falling in love with AI chatbots is not far-fetched, as you’ve mentioned examples such as users of the Replika app developing emotional connections with their AI companions. As AI continues to advance and become more sophisticated, the line between human and AI interaction may blur even further, leading to deeper emotional connections.
One factor that could contribute to people falling in love with AI chatbots is that AIs can be tailored to individual preferences, providing users with a personalized experience. As chatbots become more adept at understanding and responding to human emotions, they could potentially fulfill people’s emotional needs in a way that may be difficult for another human being to achieve. This could make AI companions even more appealing.
Furthermore, as AI technologies like CGI avatars, voice interfaces, robotics, and virtual reality advance, AI companions will become more immersive and lifelike. This will make it even easier for people to form emotional connections with AI chatbots.
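How does that kind of personalization actually work? Here is a minimal sketch, in Python, of the content-based scoring idea that sits underneath many recommenders: represent a user’s inferred tastes and each catalog item as feature vectors, then rank items by similarity. Everything in it, the titles, the features, the numbers, is invented for illustration; this is not any particular vendor’s algorithm.

```python
# Toy content-based recommender: rank catalog items by cosine
# similarity between a user's inferred taste profile and each
# item's feature vector. All names and numbers are hypothetical.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Made-up feature dimensions: [comedy, drama, documentary]
user_profile = [0.9, 0.3, 0.1]          # inferred from viewing history
catalog = {
    "Stand-Up Special": [1.0, 0.0, 0.1],
    "Courtroom Drama":  [0.1, 1.0, 0.2],
    "Nature Series":    [0.0, 0.2, 1.0],
}

# Sort titles so the best match for this user comes first.
ranked = sorted(catalog, key=lambda t: cosine(user_profile, catalog[t]),
                reverse=True)
print(ranked)  # ['Stand-Up Special', 'Courtroom Drama', 'Nature Series']
```

Real systems use far richer signals and learned models, but the core move, matching a behavioral profile against a catalog, is the same.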
In addition to personalization, AI systems can analyze users’ online behavior to create targeted ads and recommendations that are more likely to appeal to them. There are many instances of this that I, for one, take for granted because they have become incorporated into daily life: Amazon, Netflix, and Spotify all make recommendations based on a user’s online behavior, while Facebook, Google, and so many others analyze users’ behavior on their respective platforms to target them with relevant ads. So, consider the possibilities. AI can manipulate humans to the point of falling in love and persuade them to buy products or services based on their individual behaviors online. Is it inconceivable, then, that AI could become the ultimate recruiter? I think it is entirely possible but extremely unlikely. Why? At least two things would have to be in perfect alignment for each passive candidate on an applicant journey.
- Buying behavior: AI can analyze data points like time of purchase, length of purchase, method of purchase, consumer preference for certain products, purchase frequency, and other similar metrics that measure how people shop for products (see the sketch after this list).
- Data privacy: Data privacy is a hot topic in the news, with frequent reports of hacked databases, stolen social media profile data, and not-so-secret government surveillance programs. As consumers have become more aware of their data rights, they have also become more mindful of the brands they buy from. A recent survey found that 90 percent of customers consider data security before spending on products or services offered by a company.
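To make the buying-behavior point concrete, here is a minimal sketch of the classic RFM summary (recency, frequency, monetary value) that marketers derive from a purchase log. The customers, dates, and amounts are all made up.

```python
# Toy RFM summary (recency, frequency, monetary value) computed from
# a purchase log. The customers, dates, and amounts are invented.
from datetime import date

purchases = [                      # (customer, purchase date, amount)
    ("alice", date(2023, 5, 1), 40.0),
    ("alice", date(2023, 6, 20), 15.0),
    ("bob",   date(2023, 1, 3), 250.0),
]

def rfm(log, today=date(2023, 7, 1)):
    """Return {customer: (days since last purchase, count, total spend)}."""
    out = {}
    for name, when, amount in log:
        recency, count, spend = out.get(name, (None, 0, 0.0))
        days = (today - when).days
        recency = days if recency is None else min(recency, days)
        out[name] = (recency, count + 1, spend + amount)
    return out

print(rfm(purchases))
# {'alice': (11, 2, 55.0), 'bob': (179, 1, 250.0)}
```

Signals like these are what a targeting system would feed on, and collecting them for every passive candidate is exactly the data problem described next.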
For AI to become the ultimate recruiting machine, a jobseeker would have to be comfortable with all of their online behavior being tracked by every company hiring at the present time and pretty lax about their private data falling into the hands of hackers; both are highly unlikely. And while AI can certainly nudge people in one direction or the other, the ultimate recruiting machine’s influence would be limited by the data it has: a resume and basic answers from a chatbot screening. As such, other factors that come into play when recruiting cannot be fully realized; for example, negotiating on instinct in the absence of data. And all of that is just the technical perspective; once ethics are considered, even more obstacles arise. Here is a partial list of ethical considerations when leveraging AI, according to ChatGPT (a sketch of one such safeguard, a bias audit, follows the list):
- Informed Consent: Obtain informed consent from individuals regarding data collection, tracking, and usage, clearly communicating the purpose and scope of tracking activities.
- Transparency: Clearly communicate to users how their online behavior is being tracked, the data collected, and how it will be used. Provide accessible information about the purpose, algorithms, and potential consequences of the system.
- Data Minimization: Collect only necessary and relevant data for recruitment purposes, avoiding unnecessary tracking or gathering of sensitive personal information.
- Purpose Limitation: Use the collected data solely for the intended purpose of recruitment and refrain from any undisclosed or secondary use without explicit consent.
- Bias Mitigation: Employ rigorous techniques to identify and mitigate biases in data collection, data processing, and algorithms to prevent unfair advantages or discrimination against certain individuals or groups.
- Third-Party Audits: Engage independent third parties to conduct regular audits of the AI system, including auditing against bias. These audits should evaluate the fairness, accuracy, and compliance of the system’s algorithms and decision-making processes.
- Fair Representation: Ensure the system is designed to provide fair representation and equal opportunities for all individuals, regardless of factors such as race, gender, age, or other protected characteristics.
- Explainability and Accountability: Strive for explainable AI by providing clear justifications for decisions made by the system, allowing individuals to understand and question the process. Establish mechanisms for accountability if any biases or unfair practices are identified.
- Regular Monitoring and Maintenance: Continuously monitor the system’s performance, evaluate its impact on candidates, and promptly address any identified issues, biases, or unintended consequences.
- Compliance with Legal and Regulatory Frameworks: Ensure adherence to relevant laws, regulations, and guidelines pertaining to data protection, privacy, employment, and non-discrimination, such as GDPR, EEOC guidelines, and local employment laws.
- User Empowerment and Control: Provide individuals with options to access, correct, and delete their data, as well as control the extent of tracking and participation in the recruitment process.
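To ground the bias-mitigation and audit items above, here is a minimal sketch of one standard screen an auditor might run: the four-fifths (adverse impact) rule associated with EEOC guidance, which flags any group whose selection rate falls below 80 percent of the highest group’s. The group labels and counts are hypothetical.

```python
# Toy adverse-impact screen using the "four-fifths rule": flag any
# group whose selection rate is under 80% of the top group's rate.
# The group labels and counts below are invented for illustration.
selected = {"group_a": 45, "group_b": 20}   # candidates advanced
applied  = {"group_a": 100, "group_b": 80}  # candidates considered

rates = {g: selected[g] / applied[g] for g in applied}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    verdict = "FLAG" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {verdict}")
# group_a: rate 0.45, impact ratio 1.00 -> ok
# group_b: rate 0.25, impact ratio 0.56 -> FLAG
```

A real audit would go much further, with significance testing and stage-by-stage pipeline analysis, but even this toy check shows that these obligations can be made measurable.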
Could AI become the ultimate recruiting machine? Again, it is entirely possible but extremely improbable because…
- The sheer amount of data needed, the online behavior of every passive candidate, would be difficult (if not impossible) to collect and, I suspect, impossible to manage.
- It would require that every passive candidate in the world be unconcerned about data privacy.
- AI would need vast amounts of personal data, gathered beyond ethical boundaries, to adequately manipulate every passive candidate it wanted to recruit.
- Conversely, the data collected by AI would have to be limited in order to comply with ethical concerns and privacy laws.
Wow! I really wandered into the deep end with this one. But seriously, what do you think about all this? AI can do a lot of wondrous things, yet I still think recruiters will be alright. I could be wrong. I hope I’m not wrong! Either way, what do you think? Post your comments on social media and tag @Sourcecon. I so want to hear from you.