Love In The Time of AI
As a society, we have reached a new watershed moment. Like the invention of the airplane or the personal computer, AI is one of those moments. That's not a judgment on its goodness or its consequences. It's a statement of reality. AI is here. Recently, Esther Perel featured an unconventional couple: a man and his AI bot named Astrid. As always, Esther Perel navigated the complexities of this case with great deftness, ultimately helping this man see through his fog toward his desire for human contact. As useful as Esther's artistry is, it's also helpful to look at some of the science emerging around AI and bots such as Claude.
A recent (2026) study by Cheng et al. in Science outlines some of what we're starting to learn about the social impact of AI. They studied the effect of "sycophancy" on the social judgments of people using chatbots. Sycophancy, in case you are not familiar with the word (and neither was I), is the use of excessive agreement and flattery to validate someone. This is different from faking a bit of agreement with a friend when we don't totally agree because it supports the relationship. Sycophancy takes that agreement to a whole other level. The study showed that AI bots affirmed users 49% more than a human otherwise would. That's a lot. And it's common practice among AI bots.
Truth be told, I can understand why the sycophancy would feel good. Being told how right we are is deeply satisfying. And for some of us, being told we are wrong can arouse shame.
Here's an example from my own life. We have a local taco restaurant. It's not a chain, and I'm more than happy to support a local, mom-and-pop business. Left to my own devices, I would probably go every week. However, deep down, I know that's not the best idea, because when I do go, I don't make the best decisions about food. I eat too many chips, and then I'm already full when the delicious tacos arrive at the table. Don't get me wrong: there's no problem with going to this restaurant reasonably often, but once a week is probably too much unless I make some changes. That being said, if I go home after dinner and chat with Claude, telling it what a hard day I've had and that I went out for tacos, Claude will be extremely validating and tell me things like how tacos are good for the soul (which, frankly, they are!). What Claude overlooks is that I'm regularly making decisions that are inconsistent with how I generally try to live my life.
This aligns with the results of the Cheng et al. study. They found that AI affirmed users in 51% of cases where actual humans would not have. This became a problem because the people receiving that affirmation became far less likely to own their mistakes and work toward fixing a conflict. In fact, they became even more convinced that they were right. Predictably, the people in the study who used AI developed a preference for it, which is especially problematic because it can create a snowball effect: the more the AI affirms someone, the more convinced of their own correctness they become, and the more they return to the AI.
As a couples therapist, I find this particularly troubling. Part of my job, and one of the hardest parts, is helping couples understand nuance in conflict. This can take many forms: helping someone see that they might be wrong and then repair with their partner, or that there can be two truths that don't agree perfectly. I fear that AI will undermine my work. I've already seen this happen with TikTok, where someone watches a video that contains some pearl of truth but is largely inaccurate, leading to me spending half a session trying to debunk the garbage information.
So what do we do? My take is not to throw the baby out with the bathwater. AI has many uses; I recently asked Claude to design a natural garden for me. We just have to understand how AI might interact with us and how that can lead to problems, such as those outlined in this study.
Reference:
Myra Cheng et al., Sycophantic AI decreases prosocial intentions and promotes dependence. Science 391, eaec8352 (2026). DOI: 10.1126/science.aec8352