London Tech Talk
2025-07-05 1:03:37

Beyond Lines of Code: AI-Assisted Development with David Laing


This week on London Tech Talk, we're diving deep into one of the most talked-about innovations in software development: AI-Assisted Coding! We're thrilled to welcome David Laing, an expert who's been at the forefront of understanding and implementing these transformative tools.

We tackle three big questions: How does AI-assisted coding change our day-to-day development flow, what are its career impacts, and how can we effectively leverage this technology? 

David shares his practical experiences with the tools, revealing significant shifts not just in lines of code, but in developers' core responsibilities, which now extend into the planning, reviewing, and testing phases. David also shares his insightful opinions on two big drivers of potential career impacts. Although none of us has a crystal ball, his careful thinking will help you make sense of the big trend for yourself. Finally, we tackle the critical question: how can engineering teams and individual developers effectively integrate AI-assisted coding? David's view of the real-world constraints, and his strategies for a smooth, high-performing integration, was eye-opening. He also shares how human interactions at work have been leveled up thanks to the new tooling.

This episode is packed with thought-provoking discussions and actionable ideas for every software engineer navigating the AI-driven landscape. 

Also check out the cool stuff David is building: Decision Copilot! Decision Copilot is a web application that helps teams make great decisions. He shares his insight into great decision-making, which focuses on the WHO, not only the WHAT. The source code is open source.

He also started a very exciting project, "Follow-the-Sun Development Experiment: Building Decision Copilot MCP with AI Agents", which you can learn more about in this recording. If you are interested in getting involved, please talk to David on the GitHub Discussions page.

If you have any feedback or opinions, please share them with us via this Google Form.

Summary

In this episode, David Laing shares his experiences and views on AI-assisted coding and software development. David describes how he used AI assistants to improve his coding efficiency and the challenges he encountered along the way. The conversation also covers how AI-assisted development practices and code-generation capabilities evolved over 2024, including how AI assistance has changed perceptions of the syntax of languages such as Golang.

David reflects on the impact AI-assisted software development may have on engineering careers and on the changing economic environment, with particular attention to how technology has developed since the dot-com bubble. He also discusses how to use AI-assisted coding tools, offering interesting perspectives on the capabilities and limitations of recent technologies such as Claude Code.

David speaks about the importance of decision-making processes and experience in AI-driven software development. The conversation explores how AI-assisted development improves engineers' efficiency and opens up new possibilities for human relationships and collaboration. The discussion also covers AI's impact on development, interpersonal relationships and team design, the effectiveness of LLMs (large language models), David's follow-the-sun development experiment, and Decision Copilot, a tool that supports group decision-making.

Insights on AI's Impact
ken
Hello and welcome to another episode of London Tech Talk. I'm Ken, your host today. I'm originally from Japan and work as a software engineer in London, UK.
In this episode, we talk about recent tech topics, career development, soft skills and a lot more. Alright, I'm so excited today: I've invited an awesome guest, someone I've wanted to invite since day zero, and I finally made it happen. So let's introduce him. Hello David, welcome to our podcast.
David Laing
Thanks Ken, that's very kind of you.
ken
Thank you. I'm thrilled to have you here today. A brief history of me and David: I worked with him on the same team before, and I learned a lot every time I talked with him. Even after he moved to a different startup, I've had the privilege of occasional one-on-ones with you, and I'm still learning from and getting inspired by you, which I really appreciate.
We talk about a lot, from resiliency, software architecture and productivity to soft skills and even busy parenting. Our recent hot topic is AI-assisted coding, and I expect that will be our main topic today as well. David has so much experience with it that it's impossible for me to summarize in thirty seconds, so I'm going to give the ball to you. Would you mind introducing yourself, please?
David Laing
That's really kind of you, Ken. I've been developing software for the last 25 years. I started my career in the middle of the dot-com boom, and I have to say that in many ways what we're going through now with the AI boom feels a lot like that.
It feels like a fundamental shift in the way that we're going to interact in the business world in general, but my experience is really all around software development. So I'm thinking back to before the internet, when I used to go down to the bookshop and buy the tomes on Microsoft SQL Server, or Sams Teach Yourself C++ in 21 Days, those kinds of things.
And then you'd read those and you'd get stuck. And then the internet happened, and then eventually Stack Overflow happened, and I look back now and go: how on earth did you program before Stack Overflow? I mean, I did it, so the answer is I do know how, but it's hard to conceive.
And I feel like that's going to be true for AI coding as well.
But, you know, when I talk to folks about it, I feel like one of the wisest things that I've heard someone say to me is you can never know before the revolution has happened exactly what the revolution is going to be like, you know, so let's speculate now.
Speculating is fun, you know, but nobody, including me, knows what the hell is going to happen, except that, you know, the only thing I feel I can say with confidence is that it will be different.
ken
I get that. It's a great way to frame it. Some people make a load of money speculating on financial futures, but yeah, there are expectations and speculations about what's coming. And some people are fearful, but I like the way you frame it.
David Laing
Right. And maybe just to sort of go back a little bit so folks can understand where I'm coming from.
So yeah, I've been in the industry for a long time, programmed in a lot of things, ended up in sort of operations, reliability, most recently.
And I've been, I think I've been kind of looking at the sort of LLM revolution from about the beginning.
I'd say the beginning of 2024, just saying, hey, this looks interesting, but not really using it.
And then I think I kind of had my, oh my goodness, this is going to change the world moment around Christmas at the beginning of this year of 2025.
So I'd say I'm kind of six months into using the tools on a daily basis.
And the "oh my goodness" moment for me was using Aider and asking it to do something reasonably contained, sort of "write me a function", that kind of thing.
And it just did it in three seconds, saying, oh, that cost three cents in API costs.
And I looked at the function and I was like, I couldn't have done better.
And I certainly couldn't have done it that speed.
And I was like, even if everything stops right now, this is going to make an impact.
ken
So the LLM gave you some small, self-contained functions, which were beyond your expectations.
And you started using it for your coding and your weekend projects, and maybe at the workplace as well.
David Laing
Right.
So should we sort of jump into that?
Oh, yes.
ken
Let's do it.
Yeah.
David Laing
Okay.
Yeah.
So the first realization for me, I think was it wasn't so much my expectations.
I didn't know what to expect, but it was the quality of the output where I was like, you know what?
I couldn't have done better.
Maybe I could have changed the flavor a little bit.
But for this particular one, this is as good as I can be, which is not to say in any way that this is as good as a function could be.
Because I'm, you know, I'm a reasonably good programmer, but I'm certainly not a 10x programmer.
Putting AI Assistants to Work
David Laing
So that was the first thing.
And then the second thing what's happened was, you know, I think you and all software developers, I'm sure you have a sort of a set of weekend projects.
Kind of things like, wouldn't it be cool if or I wonder about this.
And if only I had a utility that, you know, did this kind of thing.
And so I was like, well, let me start playing to see if I can, you know, do that with these tools.
And I'm trying to think.
I think basically the answer was yes.
And it was fun, but it wasn't revolutionary until I think the Claude Sonnet 3 model came out.
I can't remember when that was exactly, but in my mind, I have sort of.
ken
I think 3.5 was fairly popular around that time.
David Laing
Exactly.
So I fired up Cursor.
And I think that defaulted to the Claude model.
And I was like, oh, wow, this is even better than I was expecting.
And I attributed it to Cursor at that time.
But actually, I think it was Claude.
ken
OK, so the LLM model behind it was the same.
David Laing
Right, exactly.
And then I was using that and I was, you know, kicking off my weekend projects.
And I suddenly realized that I'd finished my weekend project list.
Which is an experience I've never had before, you know.
Sounds very productive.
Productive is an interesting word.
So I created more code.
That is absolutely true.
Did I solve my problems?
I mean, yes, but...
ken
Partially.
David Laing
Sorry?
ken
Partially, not fully.
David Laing
So what happened?
So the particular problem domain, right, I was trying to, you know, every...
At the beginning of every year, I say to myself,
this is the year I'm going to keep good track of my finances.
ken
Right?
David Laing
My finance, right?
Exactly, which I'm sure everybody says.
And I decided, you know, I'm sick of all of these online programs
that you sort of get into and then they disappear or their prices go up.
So I'm going to try and do this in an Excel spreadsheet.
So I did that and then I wanted to pull the bank statements.
And so my weekend projects were mainly around, you know,
can I pull and categorize my bank statements?
And what I found was that I could write...
I could go from zero to 80% incredibly quickly
and then I'd get stuck.
And it would be the...
It's the kind of things which we all know about, you know,
which we've all experienced.
One of the bank accounts I was trying to connect to,
actually, it didn't work.
There was a problem with the integration in the provider
and I had to debug that and raise a support ticket
and, you know, all the kind of things that you do
to actually make software work.
So that kind of goes back to the...
Was I more productive in creating more code?
Absolutely.
Did I solve my problem faster?
Maybe, but it certainly didn't go from
I have a problem, I sit down and vibe code
and now I have no problem.
So that was an interesting experience.
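The kind of weekend utility David describes here, categorizing bank-statement rows, is a good picture of the "zero to 80%" an assistant produces almost instantly. A minimal sketch, assuming a simple CSV export; the column names and the keyword-to-category rules are hypothetical illustrations, not David's actual code:

```python
import csv
import io

# Hypothetical keyword-to-category rules; a real setup would be personal.
RULES = {
    "tesco": "groceries",
    "tfl": "transport",
    "netflix": "subscriptions",
}

def categorize(description: str) -> str:
    """Return the first matching category, or 'uncategorized'."""
    desc = description.lower()
    for keyword, category in RULES.items():
        if keyword in desc:
            return category
    return "uncategorized"

def categorize_statement(csv_text: str) -> list[dict]:
    """Parse a statement CSV (assumed columns: Date, Description, Amount)
    and attach a category to each row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["Category"] = categorize(row["Description"])
    return rows

statement = """Date,Description,Amount
2025-01-03,TESCO STORES 2044,-23.10
2025-01-04,TFL TRAVEL CH,-5.60
2025-01-05,ACME LTD SALARY,2500.00
"""

for row in categorize_statement(statement):
    print(row["Description"], "->", row["Category"])
```

The remaining 20% David hits, flaky provider integrations, debugging, support tickets, is exactly the part a sketch like this leaves out.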
Changes in Productivity
ken
That's pretty interesting, because of the three big questions I was preparing for you, we already talked about the first one, which was: are there any quantitative changes to our day-to-day workflow? Before vibe coding, if I were going to build a similar personal finance app, I would do the research first: looking into the API, Googling, reading the help docs and some Stack Overflow, trying to understand what the API can do, whether I need to upgrade my plan, and so on. And then I'd start coding. But if 80% is done by vibe coding, you have something that seems to work, and you start understanding what it does, and when you're stuck, you go back to the original, classic way of solving your problems. So I see that vibe coding changed things a little bit, but when you're stuck, fundamentally, the skills that are required are still the same. That's my impression.
David Laing
I completely agree with that, yeah.
ken
Right.
David Laing
And in fact, it goes back to this: measurements are always terrible, and one of the measurements we have in our industry is lines of code.
Yeah, LOC.
Right, exactly. And I mean, it's as bad as any other measurement. Its advantage, at least, is that it's very simple to measure.
Right.
And the LLM technology loves lines of code. Boy, can it spit out lines of code.
But we also all know that that isn't all there is to making a working solution. So I feel like LLMs have revolutionized the lines-of-code metric, but until recently, and we'll get to talking about that, they hadn't solved all the other problems required to get something into production.
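The "very simple to measure" point is literal: a crude LOC count takes only a few lines to compute, which is exactly why the metric survives despite its flaws. A sketch, using one common convention (counting non-blank lines):

```python
def count_loc(source: str) -> int:
    """Count non-blank lines — one common (and crude) LOC convention."""
    return sum(1 for line in source.splitlines() if line.strip())

snippet = """def add(a, b):

    return a + b
"""
print(count_loc(snippet))  # 2
```

How cheaply it is computed says nothing about whether those lines ship, which is the gap the conversation turns to next.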
And in fact, I almost wonder... I saw one of the recent DORA reports, the 2024 or 2025 one, which measures delivery velocity
AI's Impact and the State of Deployments
David Laing
as one of their key metrics,
and they were sort of digging into,
you know, with the set of tools
that are available in 2024,
are companies shipping more stuff to production?
ken
Do you know how they evaluate the velocity? Is it the number of deployed commits in a day, or the frequency of deployment?
David Laing
Deploys to production.
I get that.
Which they have, you know,
the long-term research has suggested
that there's a strong correlation
between frequent deploys to production
and business performance.
And what their research showed actually
was that the rate of deploys to production
at companies that were heavily using AI
had not increased.
In fact, if anything, it had slightly decreased.
And they were speculating that it was because
whilst the number of lines of code
that were written were increasing,
so was the chunk size,
so was the batch size.
And typically when you have a large batch size,
you end up with more bugs.
And so actually getting something
through all the testing environments
and to production takes longer,
not necessarily because there are more bugs,
but because there are more bugs in one change.
Yeah, definitely.
Which makes debugging harder.
ken
Yeah.
I know you said that's speculation, but I totally agree with it. I mean, if we have a bunch of code generated by an LLM, it's easy for software bugs to slip through in a 10,000-line PR, and when we ship that, we need to revert, we need to roll back, which we humans still do. And that definitely decreases the velocity, right? So, yeah.
David Laing
So you've actually observed that.
I mean, you were kind of,
I don't know if we're allowed to talk about this,
but the environment that you and I met in,
boy, did it have a high velocity of shipping changes.
You know, really,
if you look at the sort of official stats,
way up at the top end.
ken
Yeah, definitely.
Yeah.
That was hundreds or thousands of developers working across a lot of different domains, stacking commits. I think it's a common situation at any big tech company with more than 10,000 engineers.
David Laing
Yeah.
All right.
The Evolution of Software Development
David Laing
So where were we?
So I think we sort of advanced up until about March.
And so then the other thing that seems to be true,
it's certainly true in my experience.
I remember in the very beginning
when I was watching some of these hypesters on YouTube,
and one of them would say,
he'd say,
remember what you're experiencing today
is the worst it's ever going to be.
And I was like,
that's an interesting statement.
I wonder if that's true.
And that,
certainly for the first six months of 2025,
it's absolutely been the case.
And I sort of felt like we've gone through
a step change in capability every six weeks-ish.
Either a new model that's fundamentally better,
or a new way of using the model.
And maybe just sort of skipping forward a bit
to the current day.
So I thought that the release of Claude 3.7
and Gemini 2.5 Pro
and what was OpenAI's one?
GPT-4.1
Those were all step changes
in the model's ability to write software code.
And I sort of found myself going from,
I can ask it to write a function,
to I can ask it to write a simple feature.
And it's generally going to give me pretty good code.
And so as long as I can compartmentalize the features
so that they don't have too big a blast radius.
So if you think of software as sort of a tree
and you have some core business logic at the middle
and if anything changes there,
it impacts everything.
And then you have a whole lot of things
and eventually there's something at the leaf
that uses the core business logic,
but nothing else depends on it.
ken
I really like the analogy.
So you were saying that
when you think of your business as like a tree model,
the core or the trunk or root is the core capability
and each branch is like,
let's say for example,
the commerce web example,
the core capability is like doing the checker
and showing the products
and then the branch is going to be like
implementing such functions or the cart feature.
David Laing
So is it what you're describing?
Yeah.
Or the sort of, you know, "other customers also purchased" feature: if it works, brilliant. If it doesn't work, well, it doesn't kill the core functionality of the site, and nothing else depends on it. So those are the less risky portions of the code. And I felt that when 3.7 arrived, I could just delegate those kinds of things; I actually don't need to give them as close scrutiny, because it doesn't matter that much.
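One way to read the blast-radius idea in code: leaf features can be delegated freely because a failure there can be contained, while trunk logic still deserves scrutiny. A hypothetical sketch (the function names and the e-commerce page shape are illustrations, not any real codebase):

```python
def checkout_total(prices: list[float]) -> float:
    """Trunk: core business logic everything depends on — review carefully."""
    return round(sum(prices), 2)

def also_purchased(product_id: str) -> list[str]:
    """Leaf: a nice-to-have recommendation feature nothing else depends on.
    (Hypothetical stand-in for a delegated, AI-written feature that broke.)"""
    raise RuntimeError("recommendation service unavailable")

def render_product_page(product_id: str, cart_prices: list[float]) -> dict:
    page = {"total": checkout_total(cart_prices)}
    try:
        page["recommendations"] = also_purchased(product_id)
    except Exception:
        # A broken leaf degrades gracefully; the trunk still works.
        page["recommendations"] = []
    return page

print(render_product_page("sku-42", [9.99, 5.00]))
```

The containment boundary (the try/except around the leaf call) is what makes "I don't need to give it as close scrutiny" a defensible position.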
ken
Right, right.
That's a good point.
David Laing
And then the other thing that happened around that time,
right, is,
so this is sort of a personal thing for me.
Golang's Strengths and Weaknesses
David Laing
I love the ecosystem of Golang.
ken
Okay.
David Laing
I think that its compiling tools are amazing.
Its libraries seem great.
I love the fact that it compiles down
to a single binary,
like distribution is just so simple.
Right.
But I detest the syntax of Golang.
I've never actually been able to sit down
and enjoy writing Golang.
ken
I totally agree with you.
David Laing
Yeah.
Right.
You know, and I've said to colleagues, I really want to do this in Golang, but I feel like my eyes are going to bleed.
Yeah.
I can't bring myself to do that.
ken
Right.
Is it fair to say,
I love reading Golang code,
but I hate writing Golang code.
That's my stance, you know.
David Laing
You know, I think I'm more of a hater than you.
I love using Golang executables.
Like if I see an open source project
and it's written in Golang,
I'm like, great,
that's going to be a great experience as a user.
Yeah.
But I never want to raise a PR
against that project.
ken
Right.
Right.
AI's Impact
ken
And how did AI-assisted coding change that?
David Laing
Oh, yeah.
So what suddenly changed there
is I realized I don't have to read the code.
Hmm.
Right.
I can go one level up, certainly for certain levels of complexity and criticality of applications.
And so I'm thinking CLI applications.
ken
Hmm.
David Laing
I can just write a CLI application
and I can ask the, you know,
the AI assistant to implement it in Golang.
ken
Hmm.
David Laing
But I never actually have to read
or write any Golang itself.
ken
Right.
You don't have to go to the local bookstore and buy a bunch of Golang books, right, and start reading.
100%
David Laing
and I never have to do a Golang tutorial
and I never have to, you know,
go and debug on...
Well, there's still...
Yeah.
Anyway, I think the point is clear.
But...
So what had happened for me
around the time of the 3.7 models
was I suddenly realized
that the actual syntax of the language
has become less important
than the capabilities of the ecosystem.
Hmm.
Right.
So for a long time in my career
there's been a strong argument to say
let's write every...
let's write the front end and the back end
in the same language
because then we reduce the complexity
of having multiple ecosystems
in our environment.
Yes.
Um...
And we don't have to...
You don't have to learn the syntax of, you know,
multiple languages.
Hmm.
And I feel like that constraint has...
I don't know if it's gone away
but it's certainly changed.
Hmm.
You know, so like for me
suddenly I find myself...
Oh, well, let's do this in Golang.
Great, I'll do this in Golang.
Oh, you want to do it in Rust.
Okay, let's do it in Rust.
Hmm.
You know, and it's almost like
it's an irrelevant detail.
Hmm.
At least the syntax is an irrelevant detail.
Right.
Shifts in Programming Languages
ken
That's a pretty interesting argument.
And do you think that if the size of the software, the size of the product or the company, grows beyond a specific threshold... let's say you start writing a Golang CLI, but the majority of your employees write Ruby or Rust. Or, I'm going to pick another one: you're writing a JavaScript or Ruby-based web application, but at some point your web application starts seeing performance degradation, and based on your research you're pretty sure you have to change your programming language to something more like C or Rust or Golang, and at some point you have to change the stack. Do you think that's still going to happen? And if that's the case... I mean, AI-assisted coding tools have a somewhat different flavor for each programming language. What I'm going to say is, maybe JavaScript, Python and Golang are trained well in those LLMs. But what about other minority languages like Zig or Rust, or newly emerging languages?
David Laing
So I feel around about the time of, you know
Claude Sonnet 3.5 and 3.7
that was definitely a reasonable concern
that they were definitely much better
in the very popular languages
with lots of training data.
So Python and JavaScript, I'm thinking particularly
and then sort of Golang.
Golang's advantage is that it's a very simple language, so I guess they don't need that much training data. And then there are languages that have really great compilers and linters. Rust, for example; that's really helpful because of the feedback it gives. So then I guess we move on to the next big change that's happened
The Evolution of AI-Assisted Tools
David Laing
is agentic coding tools, right?
And so all of the big ones, you know, Cursor and Windsurf and even Microsoft VS Code, they all have this: their latest feature is using the models in an agentic way.
And what I mean there
is that you give the models the ability
to read and write files
but also to run tools.
And so you can say to the model,
make this change, run the tests, run the linter,
fix those problems, you know.
And so they get into a bit of a loop.
And in that case,
if your tooling gives really good error messages
and, you know, the Rust tooling, for example,
is a great example there,
then that really assists the LLMs.
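The "make the change, run the tests, run the linter, fix those problems" loop can be sketched abstractly. Both functions below are stubs labeled as such: a real agent would call an LLM API and shell out to a compiler or test runner. The point is the feedback loop, where good tool error messages are what the model iterates against:

```python
def run_checks(code: str) -> tuple[bool, str]:
    """Stub for 'run the tests, run the linter'. Returns (ok, error output)."""
    if "return" not in code:
        return False, "error: function body has no return statement"
    return True, ""

def ask_model(code: str, errors: str) -> str:
    """Stub for an LLM call: 'here is the code and the errors, fix them'.
    A real agent would send both to a model API."""
    if "no return statement" in errors:
        return code + "\n    return a + b"
    return code

def agentic_loop(code: str, max_iterations: int = 5) -> tuple[str, bool]:
    """Edit-check-fix loop: feed tool output back until the checks pass."""
    for _ in range(max_iterations):
        ok, errors = run_checks(code)
        if ok:
            return code, True
        code = ask_model(code, errors)
    return code, False

code, ok = agentic_loop("def add(a, b):")
print(ok)  # True
```

This is also why David's point about Rust cuts both ways: the richer the diagnostics `run_checks` can surface, the more the loop converges.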
So to go back to your question, you know,
the very esoteric languages,
I would say it depends on how good their compile chain is
as to whether the models would be able to handle them.
That being said, that's a theoretical thing
because I've only ever...
I mean, I've basically used
a significant amount of TypeScript,
some Golang and some Rust.
And all of those,
the language hasn't seemed to make any difference.
ken
Right, totally makes sense.
I'll start shifting gears to the next question based on the conversation we've had. The next question I was going to ask is about career impact, because in the last decade there have been many professional software engineers betting on a specific technology: someone is a Ruby or Rails committer, someone is writing a bunch of libraries for Golang CLIs, and so on. We've discussed some quantitative changes to our day-to-day work life, so the second question is about career impact. Looking at the career trajectory of a software engineer, how do you foresee AI-assisted coding influencing the demand for the different capabilities and skill sets we've discussed? You can discuss the pathway for junior developers, senior engineers, software architects, or holistically; it's up to you where we start. But how do you see these new tools impacting our careers? What was common sense among us over the last decade could change.
David Laing
How do you see it? What's your take on that?
That is such a good question, and I don't know.
I mean, I feel like in this area, the advice,
you can't really predict what the impact is going to be
until after the event.
So I think we should start this part of the conversation
with saying we are really speculating here.
Let me throw out a couple of things
that I think could be big drivers of change.
And the first one, actually,
is just the macroeconomic environment.
So I feel like in the 2010s, as a software engineer,
you were in this really fantastic economic position
where you had a skill which not that many people had
that was very commercially valuable.
And there was just a lot of free money floating around.
The interest rates all over the place
were sort of close to 0%.
So the tech companies just had silly money, right?
And so that kind of interaction of the two things
meant that I think we were really well paid.
As a software developer,
I think you were paid well above market.
And by market, I'm comparing us
to other white-collar professions
like lawyers and accountants and so forth.
I don't think we could really claim
that our work was intellectually harder or less fun
than being an accountant or a lawyer,
but I feel like we were definitely compensated better,
at least in the sort of initial phases and mid phases.
I think that post the sort of slump in the tech market
after COVID,
I think we're just reverting to the mean.
And so I think there's downward pressure
on software engineer salaries
just because we're becoming more like
other white-collar workers.
So I think that's a big driver.
We've also seemed to have entered
a much less stable period of international relations.
There seem to be more wars, more trade tariffs,
all those kind of things,
which makes it harder for businesses to invest.
ken
Oh, definitely.
David Laing
Right.
And so that's also going to have a negative impact
because a lot of software development
is working the new.
We're making new things.
We're speculating on a new way of doing things.
And so if you're in an environment
where people are more risk averse,
I think there's going to be less work for that.
And certainly my experience was that
when there's a lot of optimism in the business market,
that's a good time to be a contractor.
But when the optimism starts going down,
the contractors are the first ones
whose contracts don't get renewed.
And so maybe if you're looking for stability,
Economic Impact and Change
David Laing
maybe being an employee.
I should also say, I speak from a European perspective, where we don't have at-will employment like in the US.
ken
Actually, we have a similar discussion in my own country, Japan. I mean, when the economy is doing OK and stock prices are skyrocketing, a lot of self-help business books and consultants encourage us to think that maybe becoming a consultant or doing a side job could be the best idea. But when COVID hit, or in our case, when the big earthquake hit in 2011, we started realizing that maybe a full-time job with job security and health insurance is the best option. We do have the same discussion.
David Laing
So we've already talked about two big economic events
impacting software development as a career
that have nothing to do with AI.
So I feel like what we see,
what we experience in the next few years,
those are at least going to help explain
some of the change.
The other thing,
we started the conversation
talking about the dot-com bubble.
And so when I started my career,
if you could spell HTML,
people would hire you.
It was insane.
Good old days, nice.
But then the bubble burst.
And just looking at the amount of money
that has been invested in AI,
especially by the big tech companies,
billions and billions,
they're talking about building
or reactivating nuclear power plants
to power data centers.
I feel like there's going to be a reckoning.
And I really wouldn't be surprised if we see some kind of bust.
Which is just basically the market realizing
that there's a little bit too much hype here.
But that being said,
the models are also getting better
at such an incredible rate.
Maybe they'll catch up in time.
I go back to the dot-com bubble,
all of the things that they said
were going to be the case,
online banking,
e-commerce,
getting video on demand,
we have all of those things today.
In fact, you almost can't imagine it now.
I talk to my children
about getting videos on DVD,
and they can't comprehend the idea of having to get a physical thing in order to watch something.
Or like watching things on TV
where you can't pause
when the popcorn goes bing
and you want to go fetch it.
They're like, sorry, what?
What was the point I was making?
The point I'm trying to make is
all of the hype in the dot-com bubble was true,
but it didn't come as fast
as the hype was expected to come
because we had to build
the broadband infrastructure first.
We had to go and lay cables.
And once that landed
and everyone had stable fast broadband,
then the reality became true.
ken
There was a constraint
in the physical fiber.
David Laing
Right.
And so I feel like
we may see the bubble burst
in the AI world
if some kind of constraint emerges
that we hadn't been considering.
I can't put my finger on what that might be.
But yeah, anyway, so that's that.
ken
I mean, it's really thought-provoking for me that you compare today's AI bubble to what you experienced in the dot-com bubble, because I only know the dot-com bubble from textbooks. I didn't experience it myself as someone working full-time at the time, so I'm not sure how people are going to feel when that happens. You said the two biggest drivers are macroeconomics and relations between countries. But if I'm allowed to add one more thing on top, it's going to be people's emotions, and I think people's emotions are going to be a big driver. I mean, how they react to what's happening in some countries, or how they react to skyrocketing stock prices, might differ based on your cultural background or your own personal financial state, and so on. So I'm not sure how people are going to react to that, and this is uncertain to me.
And this is uncertain to me.
People's Emotions and Adoption Trends
ken
And I totally get that. You could speculate about what was coming during the dot-com bubble, but you couldn't speculate about when it was going to happen. I think we can say the same thing about the AI bubble, yeah. I totally agree with you.
I totally agree with you.
David Laing
Yeah, I think so.
The other thing that I believe to be true is that a question based around "will people do A or B" has a fundamental problem in it, because it's not a yes-or-no answer. It's, well, some people will do this and some people will do that.
And, you know, there's that famous adoption curve.
And I think a better question to ask is
how many people will be early adopters?
How many people will be, you know, in the middle, and how many people will never adopt?
Engaging with AI-Assisted Tools
ken
Those are great questions. I mean, I don't even know if I'm an early adopter or a late adopter, you know.
David Laing
Self-awareness is the first step for me.
Yeah, so I'll say for me, I consider myself
an early adopter, but not a
whatever the most extreme is, like
I'll adopt the tools
as soon as I can get some benefit from them.
But I'm prepared to put in the work
to try out a bunch of different tools.
But not necessarily write my own tools.
ken
Right, right.
And the follow-up question I want to ask you,
as an early adopter of AI-assisted coding tools, is:
OK, so we discussed how some big drivers
may be pushing an AI bubble,
and we cannot speculate about what's going to happen when.
But still, AI-assisted coding tools are fun.
You know, vibe coding is fun.
We can, you know, generate a bunch of code.
So as a software engineer, I am very excited that
new tools are coming, new models are coming.
And I see the potential and the benefits of
using those tools at the workplace
and in my weekend projects as well.
So the last question, the big question I wanted to ask you, is:
how do you leverage these technologies?
And not only the beneficial part;
we also want to discuss the pitfalls.
Like, you know, if you start generating
a bunch of code,
maybe you might spend more time reviewing code,
or if the LLM gives you insecure code,
how are you going to, you know, handle that?
So the last question is:
how can we leverage these technologies
and effectively integrate this kind of AI-assisted coding
into your workplace, your day-to-day
dev loop?
David Laing
So I don't feel like I know enough
to make general recommendations.
So what I'm going to talk about is
what's working for me.
And if it works for you, brilliant.
I'd love to hear more.
And if it doesn't, remember that
I'm one guy on the Internet, right?
So the thing we're talking about,
everything sort of levels up every six weeks.
What happened about three weeks ago
was two things.
Claude Code, the CLI agentic programming tool
from Anthropic,
went 1.0, though it was usable before that.
And Anthropic released their Claude 4 series of models,
Claude 4 Sonnet and Claude 4 Opus,
which are really good at using tools
and following directions.
And to me, that felt like another step change.
And so what's happened for me is
I feel like I've literally...
I was writing machine code
and then the compiler came along
and then I learned to use high-level languages.
I feel like that's happened again
and I've stepped up one more level.
And the mental model that was taught to me
by a colleague of mine, Gareth Smith,
was...so he's a...
he has a PhD in computer science
and he spent some time mentoring PhD students, right?
And he says, I'm going to approach this
like I'm mentoring a PhD student.
So they've read all the books, right?
They can solve their own problems,
but they have a bunch of blind spots
where my experience can be helpful.
So when I'm working with Claude Code,
I'm going to say, Claude Code is my PhD student.
What I need to do with my PhD student
is spend a lot of time with them
figuring out what they're trying to do,
help them understand the bigger context.
So, you know, sit down and write a markdown document
which says, here is the goal that we're trying to get to.
Here are some of the constraints that we know about, right?
Let's talk together.
So then you work with the model.
You say, let's make a high-level plan
or let's consider three different ways
we could attack this problem.
Think about the pros and cons.
And then we can sort of narrow down and say, right,
here's the step one, do this.
This is the technology stack we're going to use.
Step two, we're going to implement this piece first.
Step three, we're going to have
this kind of test framework and so forth.
ken
Interesting.
Can you tell me more about that, more practically?
So at the end of the discussion, what will you get?
Will you get a bunch of markdown files
explaining what you're going to build?
David Laing
Right.
And then a sort of a high-level plan of, you know,
if we were making a project plan,
the models will very quickly actually make a project plan
where it says week one, this kind of things,
week two, that kind of things.
The weeks are completely wrong
because they can implement them in like, you know, half an hour.
It always amuses me.
ken
Like a PhD student saying that,
okay, I'm going to finish writing in one month, right?
David Laing
Right, exactly.
But the models can actually do it.
They're sort of tireless.
Planning with AI
David Laing
But the important thing is the sort of chunks of work.
And then you say to the model, right,
let's plan this chunk of work really in detail, step by step.
And then you sort of have an opportunity
to look at the steps and provide feedback.
And this is where as an experienced software developer,
your experience is incredibly high leverage
because you can sort of say, you know,
I know that doesn't work.
Don't use this.
You said in this part of the plan,
you were going to use this framework,
but in that part of the plan,
you say a different one, you know, why?
Can we just sort of simplify?
ken
So you see the contradictions in their plan,
and then you point them out one by one
and try to come up with a better outcome, right?
David Laing
Right, exactly.
And it's kind of weird
because the models are really, really, you know,
sometimes they're incredibly smart.
They do really sensible things.
And sometimes they do really dumb things, right?
So as an example, one time
I was working with a model,
and we were going to use Git repositories
to do a thing.
And so, you know, the first two thirds of the plan
was all about how we can leverage Git.
And then the fourth part of the plan was like,
oh, and then we just manually copy some files around.
And I was like, what?
You know, why is that part of the plan?
And then you can kind of challenge the model and say,
and this is an important thing
that I've learned from Gareth.
You don't tell the model what to do.
Just the same way you wouldn't tell
your PhD student what to do.
It's like, don't fish for the person,
teach the person how to fish.
So you say to the model,
I noticed in step four, you're copying files around.
How come you aren't doing it with Git?
And the model will normally say,
oh, that's a good point.
Or occasionally, no, the reason is.
ken
Nice.
And I want to understand the procedure more
and I want to step back a little bit.
And if I look at that workflow holistically,
when, and by whom, is the final decision made?
I mean, the decision of which framework to use,
which function to use.
Am I right that you're discussing with your PhD student,
or rather discussing with your LLM,
but you are the final decision maker?
Or do you have some kind of democratic decision-making process?
How would you describe that,
from the point of view of the decision-making process?
Accountability in Decision Making
David Laing
I'm glad you asked about the decision-making process,
because that gets to one of the things I'm interested in.
I think it comes down to accountability.
ken
Accountability.
David Laing
Who's going to be accountable for running this,
for what this source code does?
And if it doesn't do what people expect it to do,
who's going to get the page or the phone call?
And that's two different levels.
If it's a personal utility, it's going to be you.
If it's something you're making for your company,
it's either going to be you or your team
or some other poor person who's holding the pager.
But I've never seen it be an AI model.
So I think it's going to be a person.
And I think for that reason,
that those final decisions must always land with people.
Maybe that will change, but it's certainly not true today.
And so I think that's why the sort of working on the plan first
is a really fast way to drive those decisions out early on.
And I find it very helpful to ask the model
to give some different options
and give pros and cons for those options.
Or to open up a separate conversation with a different model
and say, here's the plan,
give it the markdown document that we've been making,
critique it, you know,
or act like a security hacker or something.
Where are the holes in this plan?
Oh, interesting.
So you can absolutely use the LLM tools
to help you explore the boundaries of the problem
and to think of the pros and cons of things.
But at the end of the day,
you're responsible for making a choice.
ken
Oh, that's great.
I think I really like the way you frame it.
I especially like the part where you said that
once you have some foundational plan,
you come at it from a different angle
and ask another LLM for a security perspective,
by having it pretend to be a security engineer,
and you use that output as an input
to the plan you already have
and try to improve it from different angles:
maybe a performance perspective, a security perspective,
a maintainability perspective.
So you're not only doing what you wanted to achieve in the first place;
through the process,
you harden your plan into a production-ready plan.
That's what I took from your explanation.
David Laing
Yeah, and my experience has been
like the first time you ask,
you start a new conversation
AI's Impact and Improved Work Efficiency
David Laing
and it doesn't even need to be with a different model.
In fact, I think that makes less of a difference
than starting a new conversation and saying,
consider it from this perspective,
act as a whatever and look at it.
And my experience has been that
you ask for a set of improvements.
So you say, please give me five improvements.
It'll give you five.
Boy, do these things love generating text.
Ask it for 50, it'll give you 50,
it's just overwhelming.
The first couple will be spot on.
They'll be like, wow, I never considered that.
That's such a good point.
And then they'll have diminishing levels of impact,
until, say, now you're just being nitpicky.
So ignore half of them,
but then go back to the original model and say,
well, what about this point that was raised?
How can we improve the plan to address that point?
ken
Interesting.
All right.
I have one potentially tricky and spicy question.
Because listening to the workflow you mentioned,
it was actually deja vu for me.
And why deja vu? It was what happened
when I was a junior engineer,
when I was working for a startup in Tokyo.
I had a really good manager, a tech lead.
He taught me a lot.
In your example, I was the PhD student and he was the professor.
Every time I wrote a design draft,
he gave me a lot of insightful advice.
So I felt like the same story was happening there.
But if an LLM can do this,
the last question I'm going to ask,
and why it's spicy, is how it's going to change
our relationships with colleagues at the workplace.
Assume that I am a junior engineer,
and let's say an LLM could potentially act like my old boss
and ask me a lot of good questions, or the other way around.
How is it going to change software engineers'
human relationships at the workplace,
if you can get from an LLM the advice
which you used to get from colleagues over the last decade?
David Laing
Absolutely.
How exactly will it change?
Or will that change come from the other big macroeconomic factors
that we were talking about?
I don't know.
We're back in speculation territory.
Let me just finish off.
Just to finish the method that I'm finding myself using these days.
So make a great plan.
And then,
and I've actually had some success doing this in parallel.
If there are parts of the plan that can be done in parallel,
make a new coding session,
and say,
go and implement this part of the plan.
And the current batch of models will go from 0 to 95%,
incredibly well.
And then they'll get stuck,
and you'll need to go and rescue them.
I almost feel like they sort of fall in a hole.
So my work pattern is,
I will use the multi-desktop feature
that all modern operating systems have.
And I'll set up an instance of my editor
and an instance of Claude Code running in it.
And I'll say,
OK, Claude Code,
your job is to implement this part of the plan.
Go.
And the important thing is
that I then try and give it the tools and instructions
so it can check its own work,
and that's very helpful.
So: run the linter,
write tests to prove that the function works,
run the whole test suite every time you make a change.
If you can get it into that loop,
then they get themselves really far.
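The "give it the tools to check its own work" loop David describes can be sketched as a small harness. This is a minimal illustration, not David's actual setup; the specific check commands (a linter, a test suite) are assumptions you would swap for your own project's tooling.

```python
import subprocess
from typing import List


def run_checks(commands: List[List[str]]) -> List[bool]:
    """Run each check command; True means it passed (exit code 0).

    An agent instructed to run these after every change can use the
    failures as feedback and keep iterating until everything passes.
    """
    results = []
    for cmd in commands:
        proc = subprocess.run(cmd, capture_output=True)
        results.append(proc.returncode == 0)
    return results


# Hypothetical checks for a Python project:
# run_checks([["ruff", "check", "."], ["pytest", "-q"]])
```

The point is less the code than the contract: every check is something the agent can run itself, so it stays in the loop without you reading each diff.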
And then I flick back to a different screen
and I do something else.
And I'm definitely able to have the plan implemented
from 0 to 95%
with almost no looking at the individual code
that is written,
except for the points where it gets stuck.
OK, so that's how I'm finding myself
building software at the moment.
And what is definitely true
is that I'm pumping out
significantly more software.
Am I pumping out more features?
Back to our original thing?
Not sure.
The data's not in yet.
It's definitely making
talking about the volume of change harder
because there's more change.
So when we sync in our teams,
we don't just say,
'Well, I implemented this one story.'
It's like,
'I implemented these three stories.'
So that's something we're going to have to figure out.
So right back to your question,
how is this going to change our relationships at work?
So for me,
my preferred way of working
has been for many years pair programming.
And one of the things I like about that
is that when my energy flags, my pair's might not;
our energy is not always in sync.
So I feel like we don't get stuck as often.
And I found that constant forward momentum
very exciting.
ken
Right, making progress continuously.
David Laing
Making progress continuously, exactly.
I've pretty much stopped pairing on writing code
because I'm essentially pairing with the AI models, right?
The Role and Impact of LLMs
ken
Interesting.
David Laing
And so it sort of feels like
I was pairing writing machine code
and now I have a compiler,
so I don't pair on writing machine code anymore.
ken
And you still feel the same momentum with LLM, right?
David Laing
Absolutely.
But I am pairing...
ken
They are opinionless.
They don't have strong opinions.
David Laing
They have whatever opinions you tell them to have,
which is the weird thing, you know?
And they can hold both of them at the...
If you ever think...
Lots of people say, don't trust the LLM.
And they're right.
But not because the LLM hallucinates that much.
That doesn't seem to happen,
at least in my experience, as much as it used to.
But they do argue from a particular point of view
very consistently.
So if you say to them, blue is green,
make this argument,
they will consistently argue from that starting point.
And a great way to test it
is to say one starting point to an LLM.
International trade is really bad for the economy.
Say that and then ask a trade question.
And it will say that you're brilliant.
That's absolutely right.
And then give you lots of arguments
as to why international trade is bad.
Then in a new conversation,
or even the same conversation, say,
international trade is brilliant.
Give me some arguments.
It will then turn around and say,
you're absolutely right.
And give you another set of good arguments for, you know,
for why...
So it's not that they're hallucinating,
but they're very sensitive to the starting conditions.
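David's trade example amounts to a simple framing-sensitivity test you can run against any model. A minimal sketch, where the `ask` callable wrapping your LLM client is an assumption and nothing depends on a particular vendor API:

```python
def framing_sensitivity_check(ask, stance_a, stance_b, question):
    """Prime the model with opposite stances, then ask the same question.

    `ask(prompt) -> str` is whatever LLM call you have available.
    If the two answers argue opposite positions with equal confidence,
    the model is echoing your starting conditions rather than
    hallucinating facts.
    """
    answer_a = ask(f"{stance_a}\n\n{question}")
    answer_b = ask(f"{stance_b}\n\n{question}")
    return answer_a, answer_b
```

In practice you would pass, say, "International trade is brilliant." and "International trade is really bad for the economy." as the two stances and compare the arguments that come back.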
ken
Right.
David Laing
Fairly sensitive, yes.
ken
I got that.
David Laing
Sorry, that was a bit of a rabbit hole.
Going back to your question about how it's changing
interpersonal relationships.
I'm finding myself,
I still love working with my team in a...
So I work totally remotely,
and we work in a shared...
a sort of a shared online space.
I still love having my team there,
but we're all supervising our own set of AI interns.
And then we're having conversations one level up.
We're having architectural conversations.
We're having...
My AI is stuck on this thing.
Have you seen this kind of problem before?
How can I help it get it unstuck?
I'm considering these two options
that the different LLMs have given me.
Any thoughts?
So I feel like the pairing has gone up a level.
Yeah.
ken
And you still have other colleagues
that you can talk with
about your own PhD students
and what's helped you, right?
David Laing
Exactly.
So I feel like we're now having conversations
about how do you mentor your PhD students.
ken
It's like professors chatting in the coffee room
about how we supervise our new PhD students.
David Laing
I get that.
The one thing that I think is probably going to be true...
So I feel like actually we're entering this world
of organization design.
And the big thing with people organization design
is the fact that as a team gets larger,
the overhead of communication doesn't get larger linearly.
It gets larger faster than that.
I don't know if it's geometrically or exponentially,
but if you have three people
and you add one new person to that group,
you don't add 25% additional overhead to the conversation.
You add a much larger number to that.
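The arithmetic behind this is just counting pairs: with n people there are n(n-1)/2 possible communication paths, so adding a fourth person to a team of three adds 33% more people but doubles the paths. A quick sketch:

```python
def communication_paths(n: int) -> int:
    """Number of person-to-person communication paths in a team of n.

    Every pair of people is a potential path: n choose 2.
    """
    return n * (n - 1) // 2


# 3 people -> 3 paths; 4 people -> 6 paths (a 100% jump, not 25%);
# an 8-person "two-pizza" team -> 28 paths.
```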
And so I feel like because all of us can produce
so much more code,
that a much more efficient way to develop systems
is to have less people
and hence less communication paths.
So I kind of feel like the magic number
has always seemed to be...
People talk about the two-pizza team.
I think maybe the magic number
is going to become one pizza.
Four people.
One large pizza, yeah.
The other thing, and I haven't done...
This is just a thought experiment,
but I'm very excited to try it,
is in pairing,
pairing works amazingly
when you're in the same time zone.
And doesn't work at all
when you're in different time zones.
Pairing on supervising the LLMs,
it's actually kind of painful in the same time zone.
But maybe it'll work across time zones.
And so the experiment I've got lined up
but haven't run yet
is I want to work with someone...
So we're based here in the UK.
And I have lots of colleagues
on the West Coast of the US.
So I want to try to work with someone
on the West Coast of the US and say,
let's you and I supervise these AI PhD students.
I'll supervise them during my work hours.
And then we'll have a one-hour conversation
in our overlap.
And then you take over supervising them
during your daylight hours.
And so in every 24-hour cycle,
we've managed to supervise the agents
for whatever it is, 18 hours.
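The coverage arithmetic behind the experiment can be sketched with hypothetical shift windows; the specific hours below are assumptions for illustration, not David's actual schedule:

```python
def coverage_hours(shifts):
    """Count the distinct hours of a 24-hour day covered by any shift.

    Each shift is a (start_hour, end_hour) pair in UTC; the end hour is
    exclusive, and shifts may wrap past midnight.
    """
    covered = set()
    for start, end in shifts:
        h = start
        while h != end:
            covered.add(h)
            h = (h + 1) % 24
    return len(covered)


# A UK supervisor working 9:00-18:00 UTC plus a US West Coast colleague
# working 17:00-02:00 UTC covers 17 of the 24 hours between them.
```

Adding a third supervisor in an Asia-Pacific time zone, as discussed below, could close the remaining gap to full 24-hour coverage.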
ken
Oh, nice.
Are you still evaluating that?
Did you get any outcome?
The Follow-the-Sun Mentoring Experiment
David Laing
No, that is the next experiment I'm running.
ken
Oh, that's great.
I'd love to hear what's going to be the outcome
from your experimentation.
David Laing
Yeah, well, we talk frequently.
Maybe we'll talk on this podcast
in the future about that.
ken
Yeah, definitely, sure.
David Laing
Maybe we should just kind of say,
so if there's anybody who is not in,
you know, a European time zone
and is interested in doing this kind of experiment,
I'm super open to giving it a try.
And it sounds like you might be interested
in doing that as well, Ken.
ken
Yeah, definitely, yeah.
That's a great call-out to our listeners,
because around 50% of our listeners
are listening from Japan.
So if you're interested, please reach out to us.
David Laing
Oh, excellent.
Right, and so I have a great network
on the West Coast of the US.
So maybe we could actually do three,
three supervisors, covering 24 hours of the day.
ken
Oh, that's going to be exciting.
David Laing
Yeah, that's going to be a little bit chaotic.
ken
Yeah, that's another part of the experiment, right?
Let's see if it's really going to be chaotic or not.
That's what we're learning.
Cool.
Well, thank you very much for sharing
so many of your insights
with me and with our listeners.
So I have one last, final closing question for you.
I ask the same question to all of our guests,
which is:
is there anything you are passionate about
that you'd like to share with our listeners?
Anything you're building;
it can be unrelated to the topics we discussed.
It's basically a safe space for all of our guests
to show and tell what you're building,
your family's business, your portfolio,
your new article, your upcoming books.
If there's anything, please feel free to talk as long as you want.
David Laing
So, thank you.
Thank you, Ken.
Just thank you in general.
I think to answer your question,
I think we've touched on one,
which is this experiment of,
you know, around the follow the sun mentoring.
Is that even a viable way to build software?
Is it effective and is it fun?
And I have a,
you know, I'd be prepared to place a bet
that it could be both,
but the data isn't in.
So, you know,
but if anyone wants to do that,
that experiment with me,
that would be really fun.
A Tool to Support Decision Making
David Laing
And then the second thing is,
this is a project that I've been working on,
on and off.
It's around structured decision making
for groups of people.
And so you can include links in the show notes, right?
ken
Yes.
David Laing
So the software is called Decision Co-Pilot.
And the idea is that it's a tool
that acts as a co-pilot
to help a group of people make decisions.
ken
Exciting.
Decision Co-Pilot.
I love the name.
David Laing
The tool itself, right?
Okay.
So my experience in working with companies
is that it's very, very difficult
to make good decisions.
Right?
But it's very easy to make bad decisions.
And in fact,
most of the bad decisions that I've seen in my career
have nothing to do with the choice
of whether it was option A or option B, right?
That isn't where the problem is.
The problem is around
who's involved in making the decision.
Do they understand what role they're playing
in making the decision?
Interesting.
Wow.
That's pretty eye-opening to me.
Yeah.
Right.
So what I found myself doing
when it's a group of people
and you're trying to make a decision
that is, you know, potentially contentious,
is that a very helpful technique
is to say,
everyone, before we argue about
whether it should be A or B,
how are we going to make this decision?
Right?
And then,
if we're going to make this decision,
this is how we're going to make it.
Right?
And so you try and just say
who should be involved in the conversation?
Yeah.
And it's less about exactly who is involved,
and it's more about people understanding
whether they're expected to be involved or not.
ken
Oh, wow.
I thought the opposite.
I thought what we were discussing was the WHAT,
but you were saying it's the WHO,
and, wow, I'll definitely check that.
Very interesting topic.
Right.
David Laing
Anyway, yes.
So this software is about
trying to be a copilot,
trying to hold your hand,
and say,
I can't guarantee that you're going to make
a good choice between A or B,
but I can help you make sure
that you're not going to run into
common problems
along the lines of
not including the right people,
having people in the conversation
who don't know what they're supposed to be doing.
Am I just supposed to be
throwing in opinions,
or are you actually going to ask me
to be accountable and,
The Decision-Making Process
David Laing
you know, make this decision?
And then once the option has been chosen,
A or B,
does everyone know
whether we've chosen A or B,
because
it doesn't matter what the choice is,
but if you have a situation
where one group of people think
that we're doing option A
and another group of people think
we're doing option B,
that just can't work, right?
Oh, yeah.
ken
Yeah, I get your point.
And your Decision Copilot
tries to solve that problem.
David Laing
Exactly.
Nice.
So it basically sort of puts
a bit of structure in place
to hold your hand to say
you're making a decision.
Excellent.
Who's involved?
What roles do they have?
What are the options?
Have you made a choice?
Great.
I'll email everybody what the choice is.
That sounds great.
Wow.
ken
I will definitely check those links
and I encourage our listeners
to check as well.
Episode Wrap-Up
ken
I'll include all of the links
you want me to share
in our episode notes,
so please check that.
David Laing
Fantastic.
ken
Amazing.
So, wow, that's great.
Okay, let's get to closing.
So thank you, David.
I really appreciate you being with me
on London Tech Talk today.
And thank you for coming.
And I will definitely want to record
another episode with you,
so let's keep in touch.
And thank you, our listeners,
for listening until the end.
And we are looking forward to your feedback.
David Laing
Yeah, thanks, everybody.
Thanks for giving me the chance
to share my experience and opinions.
ken
Oh, it's not a one-time offer.
There's so much more to discuss.
David Laing
Great.
Great.
All right, everybody.
Hope you have a great day.
ken
Thank you.
Take care.
David Laing
Bye.