英語でサイエンスしナイト
2025-09-15 20:18

#226 How Do You Think About Handling Emotions with AI? [Healthy Ways to Engage with AI, Part 4/4]

Clara and the Sun by Kazuo Ishiguro (Hayakawa Shobo), mentioned in the second half, is a darker story than its cute title suggests, but it really makes you think, and it isn't long, so it's recommended. True to Ishiguro, the first half unfolds slowly, but the vocabulary is mostly simple (the protagonist is a child), so it might also be worth reading as English practice!


📩 Our listener mailbox is now open!


-----------------------

X/Twitter: @eigodescience

INBOX/Listener messages: https://forms.gle/j73sAQrjiX8YfRoY6

Links: https://linktr.ee/eigodescience

Music: Rice Crackers by Aves





Summary

The relationship between AI and emotions is discussed, highlighting AI's role and its limits in addressing loneliness and the desire for connection. The conversation turns to emotional engagement with AI, particularly its effects on children and how education policy has not kept pace. AI's impact on emotions and how to handle it are explored, especially with regard to children's development. The episode also discusses building a healthy relationship with AI and the importance of emotions, and introduces the possibility of reading Kazuo Ishiguro's work. Finally, the hosts consider how to strike a balance in our relationships with AI and the management of our emotions.

The Relationship Between AI and Loneliness
Right, yeah, and in reality it really doesn't solve your problem, that you are craving this connection.
You are craving this kind of immediacy and responsiveness, this kind of responsive support.
It just does not solve the problem. I think what's confusing is that it does solve some part of that problem.
It does give you a temporary solace and it's so easy to confuse that with an actual profound
reaching point where you do start to manage consciously your sense of loneliness,
sense of craving for connections, that type of thing. And it could be a good starting point,
but it's just too dangerously confusing the way it is set up right now. It's very
worrisome in some cases. Your observation of how it can play that role, it is
actually affecting or perhaps in some cases even benefiting those that are experiencing this type
of loneliness, distress, all these other types of situations. There is of course the other end where
there are people being hurt by this as well. It's not going well. They're getting convinced of
conspiracy theories that drive them out of their businesses and lives. But this gets to, I think, two pieces,
mixed from Professor Casey Fiesler and from the other discussions. One being: with this
generalized tool, the company that is designing it is kind of holding all of the ethical
responsibility for all of these things, but they're in no way trained for this.
No, no.
And so that itself is a massive problem and sort of feeds some of these issues because
if you have a ... There have been automated, not AI necessarily, but automated help therapists.
ELIZA was the first one, not our friend. But it was just if-then statements: if this input,
then that canned response, at the baseline of its operation. And it could work, because sometimes it's your reflection
that matters. But these tools don't- It gives you that cognitive distance between
what's happening in your head and putting it out there. And many times that little bit of distance
is all you need to sort of meta-analyze, put yourself out of the situation, give you a
different perspective. And that's all you need to sort of settle down for that moment.
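That if-then, reflection-style interaction can be sketched as a toy ELIZA-like responder. This is a minimal illustrative example in Python, not ELIZA's actual script; the pattern rules and reflection table here are made-up assumptions:

```python
import re

# Pronoun swaps so the bot mirrors the user's statement back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ordered (pattern, response-template) rules: the first match wins.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words in the captured fragment.
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(user_input: str) -> str:
    # Pure if-then: check each rule in order, else fall back to a stock prompt.
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."

print(respond("I feel lonely talking to my computer"))
# → Why do you feel lonely talking to your computer?
```

The point made in the conversation is visible in the code: there is no understanding anywhere, only pattern matching and word substitution, yet the mirrored output can still create the cognitive distance that makes reflection feel useful.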
Yes. And there's a usefulness to it, but the company isn't ready to do those things.
And in the case-
They're in no way equipped.
Right. And they're not equipped for it. And in the case of the uses,
it can work, but you need to have it designed for this. Otherwise, you're basically hunting down
safety features for every conceivable problem, every conceivable use. And that is-
Which is not ... Yeah.
Yeah. And so this, I will add this as well because- I guess I have to log in. Hold on.
I forget which account I have this under. If people want more in terms of this thought of it
helping, you'll find some of that as well. The New York Times, there was an opinion piece that-
Which apparently I've logged into the wrong account for. One sec here.
Take your time.
There's an opinion piece that is written by a therapist who stated that it is, quote,
eerily effective. And yes, maybe as a trained therapist, that might be true, but certainly not
for someone who is not aware of these things and does not have the cognitive presence to ask
questions that you might, to take it as introspection, let's say, like you might, to walk through
the thinking process and the emotional space, right, that it did. And so my- I have yet to fully read this here, but based off what
that premise is, I would hope that it is trying to get at this point of maybe there's a possibility,
but they- I'm not sure that they take into account just how useful it is for them to be a therapist
using this, right? Like that is not something you can just bypass. Now, I'll scan this maybe and
see if there's any particular piece that stands out to me, right? But yeah, for instance,
The Relationship Between AI and Emotions
I mean, this is super scant; please go read this yourself. Please critique us, listeners, you know, all that.
A quote from here near the end of the article mentions, quote, but when it slipped into
fabricated error, it being the AI, but when it slipped into fabricated error or a misinformed
conclusion about my emotional state, I would slam it back into place. Just a machine, I reminded
myself, a mirror, yes, but one that can distort. Its reflections could be useful, but only if I
stayed grounded in my own judgment, end quote. That's the key, right? That's the key. And,
and frequently, when you are in a state of vulnerability, you don't always have that
solid grounding to make a judgment for yourself, which is why this type of use is super dangerous,
as you said, and yet it's so accessible. It's accessible to like children who are yet to even
have a proper sense of self to begin with. And OpenAI alone definitely cannot
undertake all of the responsibilities, even though it feels like
they should; there's no way they can be accountable for every single possible
situation. And frankly, the usage and the spread of this model is way over
what they're capable of responsibly managing. And, and I, and yet I think it's very easy for us to
forget that they're, they're definitely biting way more than they can chew when it comes to this
model. But it's already spread. So it's kind of up to us users to like, try our best to engage in
sort of grounded manner, which again, much, much easier said than done. But that's like the only
line of defense we have at this minute. Right, exactly. Yeah, like this kind of like critical
engagement, and I think it's going to be like, hopefully, right, like, I don't know, in soon-ish
future rather than too late. I hope that this, you know, if, if we continue to sort of build our,
you know, humanities around coexisting with Gen AI, I hope that this becomes kind of
the stuff that's in, like, the elementary school syllabus, right? Not just, like, internet
literacy, you know, which users today still need a lot of literacy education
for, we don't have enough of it. But now we need like an added layer of how do we interact with
generative models? And, and, and yeah, you know, knowing how fast education policy changes, which
is way too slow. It's gonna take a while, it's gonna take a while. So the best we can do is
really just us educating ourselves. So the viewers can't see this, but, you know, you probably
saw me dragging my eyes. Your face was, yeah, saying everything. So I would like to add to this.
And I don't want to, I don't want to quote them directly, because I'm not sure if they'd like me
to share it. But I, I speak with somebody who is very, like aware of kind of all of the new
innovations and the things going on. And he attempts to use a lot of these AI models at like
their full criteria, you know, he sort of switches between the expensive, oh, he pays for the
subscriptions, he's got, he's got time, and he's got some money to do some of that. And it keeps
him sort of being like, what is it, right, that exists in these models? And really, how far
can you push them for certain tasks? And that's useful as far as an exploration goes.
Even from him: he is like, this is not a thing that you want to have at, like, elementary
schools; at middle schools, high schools, that's, like, maybe-ish territory, right? Like, doing it that
early is hyper dangerous. Oh, yeah. Oh, yeah. Like, very dangerous. And it might,
right, become more normal. And there are probably gonna have to be adjustments to
this. But thinking about it, like, you know, the closest thing to a personification would be
Emotional Relationships with AI
stuffed animals, imaginary friends, the characters in a book that you're reading,
right? These types of things are social relationships, social relationships. Yes,
right. Very good. Yeah, that's like the thing is you're watching, you know, the something as well.
And, you know, oh, this person is my favourite. And I like them so much. Yeah. It's like, you
don't really know them, right? But, right. But it takes a toll, and possibly provides a support,
on the psyche. Mm hmm. Yet to have something which is, like, able to interact with you in a
way that is so not by their definition of deceptive, but so deceptively real and intentional.
That is going to be real hard to separate for a child and maybe continue to be hard. I'm not sure
I don't know, right? How we as a person grow from that, right? If you start off as this is,
this is my friend, thing, right, using an AI model. And then later, you're like talking with,
you know, all of the many customer service chatbots. And you're thinking that this is like a
real intentional being behind that scene. What will that do to you? You know, not just that,
but like to your human relationships? More importantly? Right, right. Yeah, yeah, yeah.
It's, it's making me think of Clara and the Sun by Kazuo Ishiguro. And when I read it,
when the book came out, it really didn't like, hit me much. To be honest, I was just like, Oh,
okay. But, you know, it's been a few years and ChatGPT came around. And now it's like holding
a whole new impact. And I think I should go back and reread that book. Because, yeah, it really
explores this like emotional and developmental part of children's growth and what having an
intimate interaction with an AI does to them. Again, like, I'm not saying that he figured it
out. I'm just saying that he has an imaginary exploration that he, you know, put into words.
And yeah, these are sort of interesting things to think about. And I certainly, you know,
like when I say it might be something that should be in the elementary school syllabus,
you know, it's not like my professional recommendation, by any means; like, I'm not
qualified for it. But I meant it as in, like, something that ubiquitous.
Something that people learn before they fully launch themselves into adulthood.
Right, right. Yeah, no, I, I 100% agree with you on, I would like to keep it out,
right from the early stages. But as a consequence of its pervasiveness, sorry, I had a momentary
pause where I realized I am just using so many higher level words today. I apologize to the
listeners, because I have become energetic about the topic. Please, please feel free to roast me
in the inbox. I promise not to read any of them. So, it's okay, it's okay. The, you know, you're,
you're, you're a smart guy, you use big words. I smart guy, I use big word. So the,
that pervasiveness, though, means that that likely will be the case, right? I mean,
from the direction that I come at it, that I try to do at this time is, like, we should not be
turning away from it, because it is there. And so if you choose to exclude it from places like a
classroom, that can be fine. But you should still engage in dialogue around it, because people are
going, your students are going to ask, people are going to be using it, it will be a question.
It takes time, that's energy, that's effort, right? And I, I certainly don't think that every
person, or every teacher, or every individual needs to have all of this knowledge. But there
should be these people aiming to share it. Exactly. And I think your, your classroom
will serve as one of the safe spaces to explore and experiment for students to, like, wrestle with
an idea. It's, it's kind of, in a way, like a modern day philosophy class, right? Of thinking
AI and the Importance of Emotions
about how to think about things. And, and, and, in many ways, this is a very human activity,
trying to understand that thinking process. And what better place to do it than in a safe,
contained environment of a school? And where you are allowed to fail, you are allowed to explore
and make mistakes, and, you know, whatnot. So, like, in that sense, like, your, your work is,
in so many other ways, but in this particular sense, like, extremely valuable and, like,
essential. That's, wow. I mean, I feel, I feel like this turned into a praising session of my
work at the end. But hey, guys, Len was feeling a little down. And this whole hour of conversation
was a pep talk. Oh, it's so great. It's all for me. Everything is for me. Sorry, in case that's
not clear. That was the whole point. That's deep sarcasm for everyone listening. That is,
we still need to come up with a sarcasm sound effect. I know. We don't know yet. No, we don't
know. We'll have to come up with it. So I think we've closed this one out. But I would like to
pose it to you, because you mentioned rereading Clara and the Sun. Should we consider, or maybe
foreshadow, this as a possible book club option, instead of just talking about Babel again?
You know, normally, I don't like people telling me what to read. But this, this might, you know,
serve as a good discussion point. And I think with you, we can have a pretty interesting discussion.
So yeah, maybe we can do that. Yeah, we'll toss it into the ether. We'll put it on the list.
We'll see if we can pick up the book and go from there. So maybe listeners look forward to that.
If you really want it, send it in the inbox. And I promise I will have Asami look at it.
No, I'm kidding. Yeah, and the Japanese translation is available.
So oh, that's actually that's fantastic, right? Because this is by Kazuo Ishiguro, right?
Yeah, well, Kazuo Ishiguro, I don't know how fluent he is in Japanese, because he's,
like, a naturalized British citizen. Ah, right. Of Japanese descent. I've seen
a bunch of his books in, like, the libraries here, so I just assumed.
Yeah, yeah, yeah. But, like, I think almost all of his books are translated into Japanese and many other languages.
Okay, okay. Nice. Then yeah, let's definitely do that. I think I've,
I've only read a collection of his short stories, Nocturnes, which I found very thought provoking.
And, right, he wrote The Unconsoled, which was, I don't remember if I spoke about that here,
I'd have to refresh my memory. Is this the one that, maybe, actually, I'm not sure if it was
that one. There have been a few of his that I have scanned. So I'm certainly interested.
Never Let Me Go, I think, was one that I've also been recommended, right? And I read one
that doesn't stand out as much, which was When We Were Orphans. And this one, at least for myself,
I basically, like, had to spend an hour unpacking after I finished the book.
Right? Like what just happened?
Yeah, because I had a lot of thoughts on like, what the author's intention was versus like,
what sort of the story was kind of getting at and the implications behind what. And I don't
remember all of them now. But it definitely gave me pause at the end. So yeah, yeah, I wouldn't
mind. Maybe this time I'll read it in Japanese and see what it's like.
Okay. All right. Yeah, we can we can definitely do that. All right, good. Well, we're done for
now. Everybody remember to treat your fellow humans with real feeling. And remember that
intention matters in everything that you do. Yes. All right. I use big words.
That's it for the show today. Thanks for listening and find us on x at
Eigo de Science. That is E-I-G-O-D-E-S-C-I-E-N-C-E. See you next time.
