00:13
So that's how I use ChatGPT, like making them do really mundane tasks.
And I think that so far, that's the most useful way of using ChatGPT.
And the bonus is that, yeah, let's say you just get tired going back and forth with ChatGPT, right?
You can just suddenly confuse them by asking really existential questions.
Like, I don't know.
You can ask them, like, what does spirit mean?
Or like, what is the fundamental truth?
I feel like, yeah.
What is the meaning of life?
You're posing the kind of questions that, in sci-fi novels, people usually panic when they get an answer to.
You can ask them, do you love me? And see what comes back.
There's a reason this was GPT-3.5 or something.
One of the earlier iterations.
It's so funny, though.
Because here I was asking serious questions, and then suddenly I can trigger them.
And they're so obedient.
They're like, certainly I can try answering your questions.
Here are the reasons why many people find meanings in life.
It's funny.
It's funny.
And it is like when ChatGPT first came around, right?
Like last year.
Is it two years ago at this point?
I'm not sure.
I think I heard a New York Times podcast, I believe, about somebody trying to find the limits of ChatGPT.
You keep asking, asking, asking until they basically cannot respond.
What they were trying to do was investigate: where is the limit of this AI?
Where is the hard limit?
And they got into this weird, uncanny-valley-type space where they were starting to get what seemed, or what sounded, like an emotional response from ChatGPT.
And they were like, holy shit, it's creepy.
I think I remember that.
I remember the discourse exploding over it.
And it was, I think it was, yeah, two years.
03:02
I don't know.
Yeah, maybe two years.
I don't know what happened to that.
I'm sure there's been a number of updates.
And maybe there was a lot of fact checking that needed to happen for that.
Maybe there was something about the way the article was published that wasn't fully transparent.
But I think it did the job of making people think.
Yeah, it was.
I actually remember this because I remember my colleague at the time bringing it up and seeming to actually be very uncomfortable with what had happened.
And I was already sort of in the space of seeing these sort of back and forth debates about the tool and stuff.
And what it demonstrated, I think, is that as humans, we are prone to expecting an emotional-ness behind communication.
And when something begins to replicate the communication, it's extraordinarily difficult for us to tell that apart.
It's as if, well, I can write a novel.
Right. If I write a novel, I can deliver to you a story that makes you cry.
Right. That is an emotional story.
There's emotion in there.
But now, if that was technically automated and you still generated the emotion from that, this is where that conversation gets strange.
Right. And I think way back then and still now, there's the question of who is behind the media that you are interacting with.
And we don't think about that all the time, myself included, because we're just inundated with the media, with the things, with the stuff.
And so with text, voice, video, images, we just take it as holding the emotional space, even though we're reacting to something.
Right. And we forget about the person behind it, which, and I'm conjecturing here beyond discussions that are maybe more pinned down,
has probably not helped with the ability to distinguish between, or even to find it important to distinguish between, something that is automatically generated and something that somebody put emotion behind.
And so there's that discussion from the thing you just recalled, I think, is centered around that type of point.
So, yeah.
Yeah. Like, I remember thinking, at that time, I was not using ChatGPT for anything, really.
Yeah, sure.
And I think all I did was just testing out some translation, like, okay, this seems to do better than Google, for instance.
06:02
But that was about it. And I didn't think much of it. Now, yeah, I use them pretty regularly, but for very specific tasks that are very much not emotional, although it can get emotional.
I can, you know, say something like, it's still giving me the same error. Or I can put some attitude in, like, are you stupid? You're giving me the same code again.
You've got a chance for catharsis, right? The release of your emotion.
And they would just be like, I'm sorry.
Yep.
I'm sorry. Here is another version of the code where we changed this element. And then, yeah, I don't know how they don't get tired of it. No one's hurt.
No, that's, that's true.
So, like, you can even use it as a little anger ventilation. But other than that, there's very little emotion involved in me trying to write code.
Yeah. And this is pretty non-controversial as far as, like, human, ethical use of it goes.
Yeah. In programming, and I won't speak for every programmer ever, but like, we know that we reuse sort of pre-packaged bits and pieces of code, because there are certain...
There's nothing new under the sun.
Yeah, there's there's functions that you use. And you can get more creative. And you can edit it, like you were saying, you can spice it up, you can fine tune it, you can, you know, let it cook.
But yeah, there are fundamental parts of that that are useful and repeatable. And you can tell, just from the way that I'm maybe slowing down, this is where the discourse around writing comes in, where there are things that deviate from the, perhaps, you know, regularly done ones.
I'm thinking stories and novels, right? There are ones that are spinoffs, ones that take a new twist, ones that do something innovative. And there are also just straight-up, and I'm forgetting the term for this, derivatives. Derivative not in the sense of, oh, it's another one but they kind of did their own thing, but derivative in the sense of: you have added nothing new to this, you've not expressed the...
you just reshuffled it, you may not even have understood the origin of that content. Which, I had this discussion with a buddy of mine about the banana taped to the wall. No, well, fanfic is interesting. That's a whole separate space, I think. But also, I think there is another ethical dilemma there with how that's probably been used to train certain... anyway. So what were you getting at?
Do you know about the art piece, which was like a banana taped to a wall?
09:02
Yes. Okay. So like that had a message behind it, there was an intent behind that there was like this impermanency, this I couldn't do it justice, right? To describe what that as a thing that hadn't really been expressed before was trying to do. And then there were derivatives of that, which were like, I put an apple on the wall, I put an orange on the wall, I put a like,
but they've lost the... the point was not that you just put a fruit on a wall and taped it to the wall. You can't do it again, you're not saying the same message by changing the fruit, you're just generating a worthless derivative. But if you train something to take in all of these iterations, and it predicted something like, we'll put a fruit on the wall, well, there's no intentional piece behind that. Is there any merit to it? Is there any value to it?
Yeah, yeah. I mean, that's honestly a whole new interesting discussion. Because on one hand, you can say, has anyone really been that inventive after, like, the Renaissance period? Because, you know, all we are doing is just kind of taking preceding technology, artwork, trends, whatever, and making our own spin on it.
Putting out different versions of it, different ideas of it. Very, very few things can be completely innovative, I think. Like I said before, nothing new under the sun, right? But at the same time, the novelty that we still experience in our daily lives can still come from combining existing things in unexpected ways.
Or, like, putting an existing thing in a new context that it didn't exist in before and seeing what it does. Right. And so I'm not saying there's nothing new in the world anymore. But I think we need to, like, keep in mind,
yeah, that it's now up to us, the consumers of this content, to decipher what's derivative and what's, you know, a new idea that developed from there. Right. And that's kind of, I don't know, your own standard, your own eyes that you have to train,
like, these are things that certainly ChatGPT cannot teach you.
It requires the actual engagement, and it requires the spaces we talked about before, it requires room for ideas, it requires remembering the human experience, which is not just about the patterns, but it's about how we interpret and experience the patterns and have done them in the past. Right.
12:15
I know we've sort of overdone this one, but I'll, I don't know, cut this episode in half or something, like a two-parter. Yeah, I was scrolling through LinkedIn or something, and somebody who's like a science communicator popped up some of the books that they've been reading.
And a book that came across, which is apparently fairly highly rated on, like, Goodreads or something, is Demon Copperhead. It's by Barbara Kingsolver. And the snippet, because I was like, Demon Copperhead, why does that sound familiar? This is related to the derivative thing. It's not derivative in the sense that it's just iteration without understanding. Do you know Charles Dickens' David Copperfield?
Yes, that's what I was thinking it sounds like. Right. So it is, even based on the synopsis, taking the Victorian-era novel, right, David Copperfield, and transposing it, but with appropriately adjusted and adapted characteristics and traits, to the, quote, contemporary American South.
So it's like there is relatedness, but there is a change in time period and there is a change in like what you will see and experience in that novel.
Yeah, yeah.
That, I imagine, apparently is being well received, right? I have not read this, I don't know anything about it. But the idea of, not a derivative, but an iteration, something that brings with it a message which is still salient, still important now, but in a different and more contemporary space.
There's a sense of innovation, right? You can use patterns to generate something new, but you also need an awareness of the world. Like that's, you know, maybe a way to think about it. Right. Yeah.
I think the more we sort of enable these generative AI to interact more closely with our daily lives, I think the more it becomes important for the users, the humans to, one, like recognize their existence, right? Recognize, acknowledge, like, and be able to spot what's
generated, like, be able to suss out: does this look like an original artwork, or does this look like generative AI? And that could be really hard. And honestly, I don't know if I can do a good job at it. But at least for some realm of things, like maybe for you, it's writing. Maybe for me, it's something else.
15:14
But I hope that I have a well-adapted-enough comb, so to say, an internal filter, to be able to suss that out. One. Two, I want, at least for me, to get to the point where, I don't know if I'm gonna make sense, but like, I know how to take advantage of my humanness. And not just mine, but other people's humanness.
And the content, the media that's created by humanness, and how that interacts in these technological ways. And I think in my head, it still needs to be treated separately. I think I can allow interactions, but I don't want the interaction to happen without my recognition. Like, if it's happening, I want to be able to be conscious about that.
That interaction happening, and not just get them muddled up, which I think is very easy to do. Our social media behavior is probably already dictated by algorithms. Our purchasing patterns are already influenced by algorithms. And I think what worries me is when I stop being conscious about these things, that's when it's a little scary.
So, like, I want to remain aware that these interactions are happening, and that I should take everything with a grain of salt. But that also makes me focus more on, like, okay, what elements of humanness can this technology really not replicate? And what about it do I like or dislike, right? So, I don't know.
These are just thoughts forming in my head. We went so far from...
Intentional usage.
Yeah. We came so far from just, like, me using ChatGPT to write code to, like, existential crisis. But...
I sort of knew that this conversation might expand, because I certainly let it expand, because I think it's important to have that space. So yeah, I'm glad we did. Maybe it's like that last piece there. I've gotten mixed responses across different ranges for when AI in general is brought up.
And it varies from things like, well, might be helpful. Seems all right. To, to like, sort of real concern about the way that it will impact things like learning.
18:03
We should be careful about that.
Yes. And you get the heavy-concern line in different categories. And then you also get the other extreme, which I really don't lean into, which is that AI will create the next general intelligence, which is not what this is.
No.
But anyway, yeah. Yeah. Yeah. Lots of ranges. But I think like you said, the way that you approach it and you want to be aware of it, that type of intentionality means that the tool doesn't just suddenly become this thing that runs your decision making, right?
Yeah. I want the tools to remain as tools.
Yes. Yeah. That is, that is an important part of this. But yeah, yeah, yeah, yeah.
Yeah. We don't have conclusions for this, but I guess we can stop. I'll cut this episode in two.
To everyone, please just keep talking about it. I think that's important. So.
That's it for the show today. Thanks for listening, and find us on X at Eigo de Science. That is E-I-G-O-D-E-S-C-I-E-N-C-E. See you next time.