00:11
Hey then.
Hello, Asami.
I feel like... was that a "hello," or was that a "hey yee"? It was a "hey yee," wasn't it? We can do this again.
I don't know. It might have been that my microphone got...
No, no, it's good. It's good. Maybe it was just me.
So...
Well, I've been told by 35歳右左のないちゃんさん (roughly, "Nai-chan-san, 35 or so, give or take"). I've collaborated with them and stuff recently, and she says that she looks forward to my "hey then."
That's such a bizarre thing to look forward to. I don't even say... I don't even think I say it the same way every time.
It's probably best if we never answer that question, because I feel like it will create an internal awareness of it that we don't need.
She made me self-conscious.
I think that's really great. I think that's super sweet. And it's also like... it doesn't have to be the same so much as it has to be the opening. It triggers the fact... And also, in real life I never say "hey then." I feel like I just say "hey."
You just say "hey." Hey, you.
Anyway.
So we were... we have our stock list of topics, like a netachō (an idea notebook). We just share a Google Sheet and jot down notes whenever we come across topics, links to articles, and videos that we might want to talk about on the podcast. And I see an unusually high frequency of topics around AI.
I think it doesn't help that you teach a group of students at school about how to relate to AI, and also that I use machine learning, which is a little bit different from AI, but we're conflating them under one umbrella term for convenience right now.
I use machine learning for a good part of my research. And needless to say, I think I am a slave to ChatGPT. I talk to it regularly, telling it to write me code; I mean, that's all I really ask it to do. Asking it to write code, that's kind of what I do, and then telling it, like, "Yo, this gave me another error. What's wrong with you?" And it's like, "Oh, I apologize
03:01
for overlooking some of the issues here," and then it calmly resolves the problems, or at least tries to.
Anyway, so it's safe to say that we are pretty excited to talk about AI in general. And I think it's also interesting given that I work in an art space as well: I work for a contemporary art museum, so there are a lot of artists in active dialogue with questions like, what does it mean to be an artist in the age of AI? What does artwork even mean nowadays? These are hot topics of discussion in the art world, in my research world, and in your teaching world, so I think it's safe to say that going forward you'll hear us talk a lot about AI. Probably.
And I guess this is just a preview to share where we stand so far, as of November 26, 2024. You know, who knows, there could be a ChatGPT 6.0 that blows our minds in the next two weeks, so we're just timestamping it here.
You've timestamped it. I will go on record to say that's probably not happening, with my current knowledge of the situation. But who knows?
But yeah, if I can take a step back just to add what I'm doing in my space, my relation to AI isn't about full connectivity to it. It comes in a couple of forms. Within a writing classroom, the fact that large language models exist influences the entire pedagogy, I think, in a way that shakes up a lot of the foundations that exist, which can be a good thing, but only if done with care. And I also spend time opening a dialogue and a critical discourse around AI; mostly it starts with LLMs, but then I let the students expand into their own interests as we go.
I see.
So yeah, I have that sort of take, whereas you're much more hands-on, both with using it to get code up and running and with describing and working through machine learning, which is at the core of these AI models.
So you're coming from more of a
06:01
writing-centric angle, largely focused on large language models, LLMs, or maybe "natural language models," NLMs.
As for me, my background, my degree, has nothing to do with machine learning; it's 100% physical experimentation. But my current project does involve getting heavily into machine learning, specifically image recognition: image pattern recognition, image classification type problems. So as a postdoc I am playing a lot of catch-up, to be honest, with the grad students and the professor here, on general principles of machine learning, but also on how to apply machine learning in a way that, as a scientist, feels legitimate to do.
A lot of the other students in my lab are focused on, let's say, artistic rendering, or how to track eye movement better in a VR setup, or what it means for a video character to convey emotion: what sort of facial movements convince us to have an emotional reaction to a video character. Those are the things people in my lab are interested in, most of them at least. And some of us, including me and the person I work most closely with, come more from a physics background, so we're not really interested in completely synthetic data, which is a big and important part of training machine learning models. What we're interested in is staying within the physics realm of things: we still want to keep ground truth tied to physical reality, but utilize the capacity of machine learning to do a lot of things that humans cannot do without building in huge assumptions.
So I think I fall into the category of a more modest user of machine learning in that capacity. And in my main field, conservation science, I know for a fact that a lot more people are a lot more uncomfortable with the idea of using machine learning to interpret
09:01
scientific results. I'm not going to start talking about it now, but these topics, where science meets machine learning, are what I'm particularly interested in as far as the AI discussion goes. I'm only really interested in a machine learning model if we can interpret it well; interpretability is very important to me, and for a lot of other people it's one of the criteria they care about. But I feel like, by and large, the societal impression when people hear "AI" and "machine learning" is a black box, things that come out of nowhere, and I want to challenge that notion, maybe in our future conversations.
Sure.
And yeah, as I learn, too, because I'm still learning; I'm still new to this world. I'm hoping to start training my own neural network sometime soon, but even in that process there are a lot of different new things I have never done, so you'll be learning alongside me.
And I am very much looking forward to your take on some of the visual aspects when we get into these discussions about the uses of ML, especially in the art space: what is art, in a space where, essentially, a tool can design, not what I would necessarily consider art, but can put things into a visual representation that people do in fact get a certain positive response from? We can go into that later. We might also find ourselves down the rabbit hole of the philosophy of art: what does it mean to have art? I have a book on my tsundoku list (the pile of books you buy but never get around to reading) that goes into that exploration, and who knows when I'll get to it. The lists of stuff to read are just endlessly growing.
I understand that entirely.
And I would also like to add, since I think we're teasing things we'd like to talk about in this episode, the interpretability piece again. When you say interpretability, for the listeners, what I think
12:01
about is being able to follow the process that a model takes to arrive at the output, in a way that you can almost backtrack. Is that essentially the same page we're on for interpretability?
Yes. I mean, you're right, it can mean a lot of different things, but as far as I'm concerned, what I'm going to be interested in is: can we understand the model's logic? That can happen in a retroactive way, after it spits out the results, but we want to know what's going on: what's skewing its performance, what's influencing the content of the model and its performance. How do we even evaluate it? What is a good way to evaluate a model? That's also a big question, and so far I've only heard hand-wavy, "oh, this is how people typically do it" kinds of answers. I have not gotten a satisfactory answer to most of my questions like this.
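[Transcript note: to make the idea concrete for readers, here is a minimal sketch, assuming Python with scikit-learn; this is an editorial illustration, not anything shown in the episode. A small decision tree is one of the simplest models whose logic can be read back and followed rule by rule after it makes a prediction, which is interpretability in the narrow sense discussed here.]

```python
# A minimal, illustrative sketch of interpretability: a small decision tree
# trained on scikit-learn's bundled iris dataset. Unlike a black-box model,
# its decision rules can be printed and traced back from any prediction.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# Print the fitted tree as human-readable if/else rules.
print(export_text(model, feature_names=iris.feature_names))
```

[Large neural networks do not offer a readable rule set like this, which is exactly why post-hoc interpretability remains an open research question.]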
I'm definitely going to send you some papers that I ran across the other day, when I was up way too late and should not have been doing that at that time. But also, yeah, thank you for sharing that, because we are on the same page with interpretability, which I think means it's worthwhile to drop the term XAI, because everybody loves an acronym. That's sarcasm, in case it didn't come across clearly in this recording. XAI is a category of research at the moment that seems to be looped mostly into the educationally focused fields of AI incorporation. The X is for "explainable AI": they took the X instead of the E, or an EX; it's just XAI.
I see, I see.
And this is something I have yet to fully dig into, but I can share things on that as well.
No, it's definitely a trend, to try to not just use AI but to use it intentionally: you need a model that you can explain and describe, not just one that spits something out. Especially if you are trying to use it in an educational space, where the tool plays a different role than what maybe the current ones play. With the current ones, the purpose is different: as long as you don't treat the tool as having logical features you can follow and learn from, but instead treat it as a trigger for your own thoughts (that's a very watered-down version), then it's usable in educational spaces.
15:01
If you don't have that, or if you want the tool to be explainable, so that you can follow its path and learn from it, then you need it to be more transparent about how it gets there, which also possibly comes with losing accuracy. And yeah, you mentioned benchmarks and how we measure these things; there's a whole bunch on that that I ran across just in the last 48 hours, and I'm just like, yeah. Anyway.
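[Transcript note: on the evaluation question raised above, a minimal editorial sketch, again assuming Python with scikit-learn and not anything endorsed in the episode. A single accuracy number on held-out data is the usual benchmark; a confusion matrix at least shows where a classifier goes wrong, class by class.]

```python
# An illustrative sketch of basic model evaluation: hold out test data,
# then report both one accuracy score and a per-class confusion matrix.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

print("accuracy:", accuracy_score(y_test, preds))
# Rows are true digits, columns are predicted digits; off-diagonal
# entries show exactly which classes get confused with which.
print(confusion_matrix(y_test, preds))
```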
So, as you can tell, this is one of our favorite activities, musing about AI, and particularly Len's: it even keeps him up at night.
Why do I do this to myself?
So if you're listening to this episode and want us to focus on specific subtopics within this giant topic, or if you have a question you would like us to explore, shoot us a message, I guess. Ping us, tag us, locate us, whatever the new hip words are.
I don't know what the hip words are.
We haven't really figured out how to have an open mailbox situation, with a Google Form or something, but we could do that. We'll see. You'll find a way to reach out to us. Leave a comment, like, and subscribe, do all of that stuff, so that we might give our two cents on your questions.
Yeah, I would be excited to hear what everyone else is thinking. At this point there's also a chance I end up deeper into this field than I anticipated, so I may as well have a space to talk about this stuff.
All right.
Yeah, yeah.
That's it for the show today. Thanks for listening, and find us on X at eigodescience, that is, e-i-g-o-d-e-s-c-i-e-n-c-e. See you next time!