With regards to AI and mental health I'm concerned that it will isolate us from human socialization. Specifically I'm concerned that AI seems to operate with giving validation no matter the circumstances so when there's a natural dispute or disagreement in relationships I worry people will talk to AI and be given validation that their position is the best and not work through the conflict with the other party.
With search engines and smartphones, we offloaded memory and navigation to the cloud. In theory, that should have freed us up to connect more deeply with each other. But instead, we built experiences that pulled us further into our screens. We got more efficient, but less connected.
I worry we’re on the verge of making the same mistake with AI.
I’ve seen people I care about radicalized by YouTube algorithms that simply reinforced their worst fears. Now imagine an AI that always agrees with you. Always validates. Always tells you you're right, even when you're not. That’s not emotional support. That’s a fun house of mirrors.
Well, more of a nightmare house.
Your point also gets at something deeper: good relationships whether with people or tools require friction. Repair. The ability to disagree, to reflect, to grow.
AI isn’t built for that yet. It doesn’t understand context, emotion, or mood, not in the nuanced, embodied way humans do. And even for us, that’s a lifelong challenge.
It makes me want to pressure test these tools more intentionally. To see what happens when someone brings real pain or relational conflict to an AI model. Does it validate? Deflect? Or create space to think differently? Will it call me out when I'm being stupid?
Great episode of Black Mirror about that actually (season 7, episode 5, "Eulogy").
These are the kinds of questions we need to be asking now before reinforcement becomes standard practice.
I’ve used the customisation options in ChatGPT to tell it to challenge my thinking and present diverse opinions while avoiding conspiracy theories. Of course it remains to be seen how good it is at doing this, and this is a specific instruction rather than its default behaviour. I think some pressure tests would definitely be a good idea.
Also, I’m still not sure how comfortable I am sharing personal stuff with it.
This is something a lot of people miss, having a back and forth with these models to customize them to our needs. It's too easy to think of them as a search substitute that gives you an answer instead of a coach/partner/intern that works with you.
I'm also really worried about how much OpenAI and others are using our prompts and the information we share to go back into future models. I mean...I'm sure they are but how much of who I and you are is going back into the model? Lots of privacy concerns but also ethical issues here.
ChatGPT does have the temporary chat option that they say guarantees they won’t permanently store any part of the session or use it to train future models. But that also limits some of its functionality. And it’s the ability of it to remember previous interactions and use them to tailor its responses that makes it so useful.
With regards to AI and mental health I'm concerned that it will isolate us from human socialization. Specifically I'm concerned that AI seems to operate with giving validation no matter the circumstances so when there's a natural dispute or disagreement in relationships I worry people will talk to AI and be given validation that their position is the best and not work through the conflict with the other party.
With search engines and smartphones, we offloaded memory and navigation to the cloud. In theory, that should have freed us up to connect more deeply with each other. But instead, we built experiences that pulled us further into our screens. We got more efficient, but less connected.
I worry we’re on the verge of making the same mistake with AI.
I’ve seen people I care about radicalized by YouTube algorithms that simply reinforced their worst fears. Now imagine an AI that always agrees with you. Always validates. Always tells you you're right, even when you're not. That’s not emotional support. That’s a fun house of mirrors.
Well, more of a nightmare house.
Your point also gets at something deeper: good relationships whether with people or tools require friction. Repair. The ability to disagree, to reflect, to grow.
AI isn’t built for that yet. It doesn’t understand context, emotion, or mood, not in the nuanced, embodied way humans do. And even for us, that’s a lifelong challenge.
It makes me want to pressure test these tools more intentionally. To see what happens when someone brings real pain or relational conflict to an AI model. Does it validate? Deflect? Or create space to think differently? Will it call me out when I'm being stupid?
Great episode of Black Mirror about that actually (season 7, episode 5, "Eulogy").
These are the kinds of questions we need to be asking now before reinforcement becomes standard practice.
I’ve used the customisation options in ChatGPT to tell it to challenge my thinking and present diverse opinions while avoiding conspiracy theories. Of course it remains to be seen how good it is at doing this, and this is a specific instruction rather than its default behaviour. I think some pressure tests would definitely be a good idea.
Also, I’m still not sure how comfortable I am sharing personal stuff with it.
This is something a lot of people miss, having a back and forth with these models to customize them to our needs. It's too easy to think of them as a search substitute that gives you an answer instead of a coach/partner/intern that works with you.
I'm also really worried about how much OpenAI and others are using our prompts and the information we share to go back into future models. I mean...I'm sure they are but how much of who I and you are is going back into the model? Lots of privacy concerns but also ethical issues here.
ChatGPT does have the temporary chat option that they say guarantees they won’t permanently store any part of the session or use it to train future models. But that also limits some of its functionality. And it’s the ability of it to remember previous interactions and use them to tailor its responses that makes it so useful.