英語でサイエンスしナイト
2024-06-06 21:29

#109 How do you use ChatGPT?

It's impressive enough that, so far, this is the only "best use" for it I can think of.

-----------------------

X/Twitter: @eigodescience
Links: https://linktr.ee/eigodescience
Music: Rice Crackers by Aves


00:12
I think this loosely echoes previous episodes, I don't know how many at this point,
where we talked about AI and generative AI, ChatGPT, that stuff.
So I use it somewhat regularly, almost daily, for very specific tasks.
And that is almost exclusively code writing.
I started using it when I had to learn this new image-processing software
called ImageJ, which is pretty common research-level software that a lot of
image-processing people use for the first layer of processing. The software
can do a lot of things; if you're manipulating images there may be things you
want to do more specifically, but a lot of them already have plugins and
functions you can use from the GUI.
Hmm. Convenient. Okay.
So very useful. But of course, when you're doing research, there are certain things
you want to do in a customized way. And in this software you can also write a macro.
But the macro is Java-based, which I don't know anything about. I'm a Python person,
so I don't know C++ or Java or any other languages, really.
So I was like, oh, come on, it's in Java. You can also write it in Python, import it,
and run it that way; that's an option. But because the software is written in Java,
the macro route is just a lot neater. A lot of things are faster, too; the code is
less clunky when you're writing in the language the software is written in, right? And they also have this
function on the software called Record. If you have the Record window open and you do the
image processing manually, clicking away at things, it records each execution,
each manipulation you make, in the macro language. You can't just copy that
and use it as your macro, but you get an idea, like, oh, these are the commands that
03:06
it's looking for when I do this, or this is the class I'm calling when I'm doing this
function. So when I found that, I was like, okay, maybe let me try writing a macro in
the Java-flavored language they like. It was also weirdly difficult to find Python
code out in the world that's ready to use with ImageJ. So I guess people just prefer
the macro, or they're happy with, I don't know, all 800 plugins they have.
But I wrote one. And that's when ChatGPT came around. I was basically
struggling one day to do something I thought was very simple. Like,
it's very easy to do manually; I just don't want to click 2,000 times to repeat
the procedure. Right? You want to take care of your fingers.
Yeah, my dainty little fingers. So bulky. No, but, you know, I want to be able to hit run
and walk off. Have a coffee. Yeah. Have lunch. Come back two hours later:
done, all processed. Yeah, nobody wants to sit there for 2,000 clicks.
And click away. But my procedure itself is pretty simple. I think it was just:
group the images in certain numbers, do a Z projection, and then normalize,
something very simple. But I was struggling to do something basic,
which is common when you're trying to write a script in a language you don't know.
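The procedure just described (group the frames, Z-project each group, then normalize) isn't shown in the episode, but it can be sketched in plain Python with NumPy. The function name, the choice of max-intensity projection, and the per-slice normalization are illustrative assumptions, not the actual ImageJ macro:

```python
import numpy as np

def group_z_project(stack, group_size):
    """Group a stack of frames, max-project each group along Z,
    then normalize each projection to the [0, 1] range.
    Illustrative sketch only, not the ImageJ macro from the episode."""
    n_groups = stack.shape[0] // group_size
    trimmed = stack[: n_groups * group_size]        # drop any incomplete trailing group
    grouped = trimmed.reshape(n_groups, group_size, *stack.shape[1:])
    projected = grouped.max(axis=1)                 # Z projection (max intensity)
    lo = projected.min(axis=(1, 2), keepdims=True)  # per-slice min/max for normalization
    hi = projected.max(axis=(1, 2), keepdims=True)
    return (projected - lo) / np.maximum(hi - lo, 1e-12)

# 10 random 4x4 frames, grouped in fives -> 2 normalized projections
frames = np.random.rand(10, 4, 4)
out = group_z_project(frames, group_size=5)
print(out.shape)  # (2, 4, 4)
```

In ImageJ terms this corresponds roughly to a grouped Z projection followed by rescaling the intensities, which is why it felt like something "very simple."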
So I asked ChatGPT: I'm writing a macro in the ImageJ software, and I want
to write a script that does the following, right? Step one,
do this, step two, do this, step three, do this. And it spits out code that's
like 60% of the way right. It's not correct 100% of the time;
there are usually hiccups, some sort of errors, because the syntax it's pulling from
is usually the documentation, but maybe it's not up to date, or it's slightly off,
or it's confused with some other documentation, mixing in some other
software that does very similar things to ImageJ but isn't ImageJ. And I don't know why,
but it usually doesn't get it right the first time. But it gets me started, right? It gets me
06:05
a basic framework for how to organize my code, and what the
common syntax patterns of this language are. Which was not intuitive for me
at all, because Python is very, if you speak English, fairly straightforward.
It's very "do this" in an English kind of way, like "for
i equals blah, do this," that kind of thing. In Java the bracketing convention
is a bit different, and it was just too confusing. But ChatGPT takes care of
that, right? Because there's tons of Java code out there that it can pull from, and
it just needs to tailor it specifically to my ImageJ application, to what I want to
do. So it does a pretty good job. And then I just go back and forth:
I try it, I get this kind of error. And it'll be like,
sorry for the oversight, it must be that the variable you're calling is this,
and it writes a new one. And if that doesn't work, I can say, now it's giving me a different
error, or now it's giving me the same error again, back and forth. It doesn't always
get it. But if you do enough iterations, you as a human also learn and become
familiar with the actual syntax, so you can start problem-solving on your
own, right? And you don't have to rely on Stack Exchange. Because ChatGPT is basically that;
it's just a more efficient Stack Exchange. And I'm using ChatGPT as exactly that. I have not found
any other use for ChatGPT besides code writing. But do you use it? And how do you use it? Do you have
other ways of using it? Sure, yeah, I can definitely answer that. I want to link
to the learning conclusion you arrived at there, too. As you're doing it,
you're getting a feel for the syntax, right? Because what's happening is you're
having to correct the code and still puzzle out the pieces you need.
And that process seems to be the learning part, right? Yeah. Yeah. So why am I highlighting
09:00
that? I'm highlighting it because when I bring up any large language model, or LLM, which is what
ChatGPT is, I usually present it, especially to students, as: you can use this because it
exists; it is a tool that exists in the world. But the questions I think are important to ask
are: why are you using it? What brings you to this sort of tool? And when you're using it,
how is it going to impact your learning? Are you going to be able to use it in a way
that enhances the learning experience, or is it going to leave you with no learning?
Somewhere along that range. And the coding space seems like a pretty good one for this. I actually had
students pretend to be teachers in a practical-experience class we had. And one of the groups
took on: we're teachers in a beginner programming class, how can we design
an assignment that isn't basically solved immediately by an LLM? So it can't be
super simple, but it also can't be super complicated, because it's an introductory
class. So the idea was, how would you go about dealing with these problems? And
what I've introduced to them, from this learning aspect, are a number of papers that have come out
about using an LLM intentionally. I use "intentionally" a lot, but intentionally
as in: when you're using it, don't necessarily fall for the obvious shortcut. (This is in
the writing space, not the coding space, though there's some overlap with coding; coding
also has a longer history of model training before the LLMs for natural language came to be.) But
for the natural-language one, it can be very tempting to be like, well, I'm interested in...
I don't know why the only thing I could think of right there was jelly beans.
But, so, I'm interested in the topic of jelly beans, and I want you to write an
introduction, because I'm writing a paper about it or something, right? So: write me
an introduction on the history of jelly beans. And if you then take that and
just put it out as your work, if that's your process, then you have learned
nothing. You've learned essentially zero from the process of: send the request in,
get the intro out, paste it into, you know, Stack Exchange, or paste it into the article. And we see this
happen in a lot of spaces, where you end up with articles that are vague or just
12:06
have no real meaning or intent, or there are hallucinations. But
the learning is lost. So a lot of the more positive approaches to dealing
with these now come down to: how do you trigger the LLM to return things like questions,
to return things in a format that generates ideas for you, instead of just
being a really creative, predictive language model that puts a bunch of sentences
together that sound nice? How do you get something back that you can chew on, and then use
the tool to go back and forth with? To be more specific, I might ask: I'm interested in the topic
of jelly beans. Pretend that you, I really wish I'd picked a different topic than jelly beans,
pretend that you are a professional jelly bean manufacturer. Could you give me some
questions that are important to the field of jelly beans?
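A question-eliciting prompt along those lines can be templated as a plain string builder. The function name and exact wording below are made up for illustration; there is nothing ChatGPT-specific about it:

```python
def question_prompt(topic: str, role: str, n_questions: int = 5) -> str:
    """Build a prompt that steers the model toward returning questions
    rather than a polished block of prose. Wording is illustrative."""
    return (
        f"Pretend that you are a professional {role}. "
        f"I am studying the topic of {topic}. "
        "Do not write an introduction or any finished text. "
        f"Instead, list the {n_questions} questions you consider most "
        "important to this field, one per line, so I can work through "
        "them myself."
    )

print(question_prompt("jelly beans", "jelly bean manufacturer"))
```

The point is the shape of the request, asking for questions in a fixed format, rather than any particular magic words.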
And if you prompt it with the right words and the right sentence structures,
all those little tokens that get fed into the large language model,
what happens is that the outcome does end up getting tuned, right? It adjusts itself in some
way, because those keywords are attached to particular parts of the training. It's
not perfect, because, I mean, there's a whole lot happening in there that I also don't fully
understand. But if you do that, the intention is that you might get questions back. So
they're not answers, they're not finished sentences. Yeah. Yeah, go ahead. I have some questions about
questions. Do you find them to be good questions, when the machine generates
them? So, I'd say mixed results. It might do the job for a certain type of question,
the really basic level, like: these are things you should be asking given a certain type of
problem. Maybe when you're given statistics and asked to interpret them, you should
probably know the total number of people surveyed. That sounds important,
right? Or what the gender ratio of the questionnaire respondents was, something like
that. Those are the kind of no-brainer questions I think ChatGPT can reliably ask.
But occasionally there might be some insightful questions that you don't necessarily
15:04
think about, because, you know, machines don't think the same way humans do. But I wonder if
the question-generating way of using ChatGPT only goes so far?
In short, I would say mixed results, right? I also like to remind myself and the
students and everyone that, while there are philosophical debates about "thinking"
as far as these AI tools go, the fundamental nature of the tool is generating a pattern
of words that matches the general tone of a particular word space it has decided it
fits in, right? And as far as questioning something goes,
the questions don't have to be perfect if you, as the user, are engaged with the material.
Because what happens, at least what I believe is happening in that space and what I've started to
see, maybe not perfectly, is that you see the question and you either go, well, that is a silly
question, I don't need to answer that. Or the question, even in that
case, makes you identify that that's not an important factor
here. This one, though, maybe is something I could consider. And even if you just
considered both of those questions, you've spent more time on the material, which hopefully, as a
result, is beneficial. Like your total number of minutes engaged with the material. Yeah, you can
think about it like that, right? Okay, okay. I can buy that. I would not say the questions
are, I mean, there's probably a small, small likelihood that you
get the perfect question, the one that pushes your research method forward in that particular
moment. But there's a danger here, and this is why I ask, whenever
people use it, that they think about why you're using it, how you're using it, and how it will affect
your learning. At the lower ranges of fundamental learning, there's a kind of instantaneous
feedback that, if treated appropriately, can have a positive effect. But it's also dangerous, because
the tool can kind of replace skills that are not fully fleshed out yet. Yeah, yeah, yeah,
there's a deep grayness happening in this space. Yeah, I think it
requires the users to have a certain amount of engagement, a certain amount of intelligence, to
18:03
engage critically with these tools, right. And what exactly it means to
do that is the question we're all trying to figure out as humanity, right? Yeah. But that's
why I prefer ChatGPT for my more mundane tasks. Yeah. What I'm interested in
is not becoming the best macro writer for ImageJ, right? I want to do something else with
my time. I just don't want to click away 2,000 times in front of the computer. And that sounds like a great
ChatGPT problem, right? I can wrestle with it for, I don't know, half a day,
writing macros that work for me, and then keep reusing them over and over again.
I call that a pretty decent optimization of my work. Yes. 100%. Yeah.
And I'm saving myself time, and saving my brain from leaking away by having to click
through the exact same thing manually over and over. If I did that, I bet by trial
17 I'd be making a mistake, or getting tired. Yeah, exactly. Whereas my code does
not get tired and does not make mistakes, as long as I don't design a mistake into it, right? Exactly.
And here's another catch: the code that results from ChatGPT and my iteration
is probably not the most elegant or most efficient code, right? I'm fully
aware that if somebody who's actually an expert looked at my code,
it would be, you know, disgusting. Like, "I can make this run in 10 minutes, and for you
it's going to run for two hours." You hand it to a Python user, and they'll be like, "I can write
this in one line of code." And you're like, "please stop doing that. It's really hard to read."
That's it. And I'm sure people can do that. But I'm just writing what
works for me. Of course. Yes. I'm not planning to disseminate the code itself;
what I'm interested in disseminating is the science that comes after I process these images,
after the thought and consideration that went into it. So I'm just doing
the vegetable-chopping part of the cooking, and the actual seasoning, how
exactly I cook, is still up to me. And, you know, if I'm preparing a dinner for 200 people,
I probably don't want to chop cucumbers by hand for however many guests.
21:15
That's it for the show today. Thanks for listening, and find us on X at
Eigo de Science. That is E-I-G-O-D-E-S-C-I-E-N-C-E. See you next time!
21:29
