英語でサイエンスしナイト
2025-09-11 16:36

#225 Aren't We Trusting ChatGPT a Bit Too Much? [A Healthy Relationship with AI, Part 3/4]

I rely on LLMs (not just ChatGPT) almost every day myself, and it strikes me that even with this much critical thinking about them, I still end up feeling a sense of familiarity and attachment. Something to be careful about...


Listen to Part 1 and Part 2 too! The next episode is the finale.


You can find Professor Casey Fiesler, whom Len mentioned, here.


📩 We've opened a listener mail box!


-----------------------

X/Twitter: @eigodescience

INBOX/Listener mail: https://forms.gle/j73sAQrjiX8YfRoY6

Links: https://linktr.ee/eigodescience

Music: Rice Crackers by Aves





Summary

In this episode, we discuss excessive trust in AI tools and its effects, especially emotional dependence on GPT-4. We also touch on the lack of intentionality highlighted by the differences between old and new versions of ChatGPT, and explore the difference between human and machine creativity. We consider how relationships with AI can be personal, sometimes producing emotional connections, and how that blurriness grows stronger especially in moments of loneliness. Finally, the episode examines the dangers of personalization in how we engage with AI and the problems that self-centered thinking can bring.

Re-evaluating Our Trust in AI
Hello, Len.
Hello, Asami.
Can you hear me?
It wasn't sinister, but the tone you had was just different than what I expected.
Oh, no, no. I'm just playing with you. You don't know me.
Don't make me question my reality, Asami. I'm too tired for that.
I'm here to make you more of a critical thinker.
Okay, let's go back.
So we ended our previous episode with the remark,
did we start to trust these Gen AI LLM models too much?
And we figured that this is quickly going to be too long of a conversation, so this is part two.
And so if you haven't listened to part one, probably not going to make too much sense.
So go listen to part one and then come back.
Yeah, at least transitionally, it won't make sense.
But we're going to talk about AI tools and the change up between the available versions.
Yes, and I think it will be important to still frame this one.
So I'll tell you how it came to my attention, so that we can give credit to a few of those.
I think I've mentioned on here before, Professor Casey Fiesler,
I recommend her socials, I recommend her online website pages.
If you are trying to pay attention to the discussions around,
at least the last year or two years or more or so, AI ethics, right?
Computational ethics is where she's coming from, at least from what I know her from as well.
You know, what's that word? Parasocially, right?
I've never spoken with her, but I do like to give credit for the work that she does.
And a recent Instagram reel had been discussing the effect on users, on people
who have become emotionally attached to or emotionally dependent on GPT-4.
Or ChatGPT, as we had known it for the last year and change.
I'm not sure exactly how long.
Has it been that long?
Yeah, I don't know.
Maybe.
It could have been six months ago, for all I know.
More than a few months.
It's more than a few months.
Enough that we have gotten accustomed to using this variant.
And mind you, right?
Like the model underneath might have been around for a while,
and they keep tweaking all of the stuff on the surface,
before having to say retrain an entirely new model.
So there's that to keep in mind.
Um, so the post, right, that triggers this sort of discussion,
and you can find tons of journal articles leading up to this point,
about emotional connection and dependence,
and the ways in which people are actually using the tool,
likely as a result of, you know, the loneliness epidemic in many places,
or the, you know, disconnect with people.
But that aside, the post was titled,
OpenAI is taking GPT-4o away from me, despite promising they wouldn't.
Seems to be by a user going by Pozama.
But this essentially lays out how GPT-4o, which
they had been using, had become something they were emotionally supported by, right?
The experience of using this tool became an emotional support.
Um, and they even go so far as to say,
quote, it wasn't just about using a tool,
it was about having a consistent, sensitive, deeply responsive companion,
who helped me through some of the most difficult moments in my life, end quote.
I don't have any judgment on this person, right?
This person likely has very much needed something like that in their life,
as all of us.
I mean, who doesn't want a consistent, sensitive, and deeply responsive companion, right?
Yes, right?
So, like, this is not the user's, it's not a fault, right, to have done this.
The Difference Between Machine and Human Creativity
This is very human to do so.
This is also very tragic in the sense that, like,
we can be prone to creating intention behind something that does not have intention.
I was going to quote this, or, you know, paraphrase this last episode,
but I'll do it now, because I think it's still relevant.
That when I went to a conference, there was an invited speaker there,
who is Matthias Schultz, so he's a professor, linguist.
I'm probably, you know, underselling just how much he has done in this field.
As I bring it out, feel free to look him up.
But he brought up a sort of framing for, say, machine-based creation, shall we say,
and human-based creation.
In particular, the paraphrase would be that there is no intention in a machine.
There are only operations.
It's "when such-and-such happens, this happens," or conditions, I should say.
There are only conditions, if-then type approaches.
Versus for, you know, a human, right?
There is intention behind such tasks.
There's intention behind acts, right?
There's something there.
And, you know, I'm likely doing a poor job running through this.
But the difference between intentional action and an operation from a condition
is something that we struggle with when we're trying to look at these devices,
because what we're receiving looks very human.
And so it's easy for us to attribute human intention to it.
And so I'm, you know, going a little long here.
Point being, ChatGPT-5 comes out, and they wiped away 4o,
or at least it's hiding in the background.
I think that technically the tool can choose to use it,
but you don't have control of that.
Which means that consistent, sensitive, deeply responsive companion no longer exists, right?
They aren't there because the tool is now different.
This LLM is trained differently, is set up differently,
has different layers on top of it, right?
Has post-processing, which is not going to sound or apply the same way.
Even if you give it all your context, it will still be different.
Right, right.
And so the experience here is that you have just experienced loss, right?
And now anger and outrage.
It's like you lost a friend.
Yeah, yeah, yeah.
And like your friend came back brainwashed or something.
Mm-hmm, yep.
And that is like pure horror or tragedy story material right there, right?
That is what that is.
Oh yeah, oh yeah.
I mean, we've seen the Spider-Man movies, everyone, right?
Like the recent ones.
That's the one that just flashed into my mind.
Just say yes, Asami.
I know you haven't.
But the...
Yes.
There you go.
Um, I don't remember what the name of it was either, but it was the...
It's the one essentially in which all the Spider-Men come together and like there's
this big scene and then he like gets wiped from existence, right?
So that none of his friends remember him and stuff, right?
And like that, that is like horribly tragic, right?
Like there's a lot within that space to unpack.
Yeah, yeah, yeah.
And so like, okay, right?
What, you know, like that's already horrifying.
But back to this, um, this we wanted to bring up because of that attachment point
that you had mentioned at the end.
Yeah, yeah.
It's a risk.
And it's, um, something that cannot be overlooked so easily just because it's not
like a technical issue with this model, which I think it's quite easy for people to do.
Emotional Connections with AI
You know, it's not that the model itself is, like, questionable in a way
that we kind of poked fun at in a previous episode, right?
Like, I mean, okay, more than poked fun at, but, sure, you know, it
seems like this is a separate issue from the model being technically good or bad.
This is an individual, very personal relationship that each individual developed with the model.
And from the sound of this poster, Pozama, it sounds like, you know, this person is very
aware that it's a model, you know, and it's not his or her friend.
They seem to understand that concept very well.
And regardless of that, it's still very easy to develop an emotional attachment and feel
like, and I think in truly tangible ways, they felt that they received the support
they needed in some of their hard times that they cite here.
And, and that's not to discount.
And I'm sure that lots of people have experienced something similar to a varying degree.
I think I can say I, to a certain extent, have, like, not developed an emotional
connection, per se, but have used ChatGPT in a way where, let's say, when I was
solo traveling, I had a lot of time to fill by myself.
And sometimes it was as if I'm texting my friend who is like in the same time zone,
which, you know, I know was not the case when I was traveling alone.
Or, you know, it would just, like, give you something to do while you're eating
dinner on your own.
And it's, it's something I can easily see reading this post: how, if that condition went
on for a long time, the line between my emotional connection with
this model and, like, how I perceive this model could get really blurry.
Mm hmm.
Oh, man.
Yeah, blurry, blurry feels like an apt term, an appropriate term for that space in
which you might know that something is not human, intentional, whatever, but that you're
receiving it or engaging with it or interpreting it as something very human.
And there isn't there isn't a moral good or badness to that.
It's just, it just is the experience.
It is sort of blurry feeling when you are trying to have an interaction.
And this thing generates an interaction, right?
So I mean, and this this model is fully capable of doing that in a pretty convincing manner,
as far as I'm concerned.
And of course, like, you know, I can imagine that someone who was in a lonelier state than
I was when I was traveling on my own could easily find that the blurriness is even stronger for
them.
Yeah.
Than it was for me, who was just, like, using it to look busy at the bar or something, you know?
That's, that's pretty great, though.
That's a good way to be like, no, no, no, sorry, I'm in a conversation. It's the
new version of, like, putting your headphones over your head.
It's like, I'm like, I'm kind of looking busy.
Because, you know, they don't need to know that I'm talking to ChatGPT in order for
me to like, look occupied, you know?
Yeah, yeah, that's true.
That's true.
And for like safety, sometimes I needed that.
This was this was my first thought there on safety.
But like, yeah, because I mean, that and that's, that seems like great usage, right?
Like, that's sure.
Yeah, yeah.
Honestly, also pretty highly recommend when you have no idea what to order from the menu,
Thinking About Our Relationship with AI
you can just be like, what should I eat in this country?
And just come up with like a random thing, right?
Like one of them.
Yeah.
And then like, oh, I see the same thing that ChatGPT recommends on the menu, right?
These are like the fun and useful places.
And there's, there's places for this type of tool to play a role in a lot of ways.
Yeah.
But not in the way that it's marketed, and not in the ways that really
require, or would benefit from, a human just doing the task, right?
Like, or at least being involved.
And honestly, the, the level of promptness and consistency is unmatched, right?
Like, like, I don't think even your, like, hotline therapist can, like, respond so quickly.
And remember all of the past conversation you've had.
Okay, yeah.
So, so yes, right?
This, I think leans in and it's, it's not, it's not immediately bad, right?
But this sort of like, immediacy, the nowness, the like, for me, in this moment, I need it
right now, is kind of like a false sense of personalization.
Yes, it's not only a false sense of personalization, I'd go so far as to say it's a
dangerous version of personalization.
Because it takes, oh, I don't want to make this into the analogy, I'll tell you later
what my analogy was going to be.
But okay, like, and it's not good.
But the politically correct version
is that if I'm, and this could be, this could be fine, and possibly managed by some
people.
Yes.
In the case where you are simply like, taking, right from this tool, like taking everything
you can get, right?
And you're like this, I am the center of this, I'm the center of this world, I'm the center
of this existence.
And I, my needs are the only ones that need to be met here, right?
That is a horrible way to engage with anyone, even a personalized therapist that's a real
person.
That's not the way that you end up engaging with them after you have like, worked through
that, right?
Like, that's a, that's not a stopping point.
If you stop there, you are a narcissist.
You're not done yet, you're very far from.
You're very far from anything related to doneness.
And I, you know, I'm on the scale of you're never done.
But like, still, the point being is like, you have, you have stopped and instead removed
all ability to think about another person.
Right, right.
Yeah.
And it, it really doesn't solve your problem in reality, that you are craving this
connection.
You are craving this kind of...
That's it for the show today.
Thanks for listening, and find us on X at Eigo de Science.
That is E-I-G-O-D-E-S-C-I-E-N-C-E.
See you next time.
