Jan. 27, 2026

The Compounding Advantage: Leveraging AI for Smarter Creative Work

In this episode, we dive deep into the evolving relationship between human creativity and artificial intelligence. Inspired by Ada Lovelace's early vision of creative machines, we explore how the boundaries between expertise and common sense have been reshaped by modern AI, from expert systems to today's generative models. We sit down with pioneers and practitioners—Vasant Dhar, a longtime AI researcher and author of Thinking With Machines; Christopher Mims, technology journalist and author of How To AI; and the creators of Tachi AI, Aden Bahadori and Brett Granstaff—to discover how AI is shifting not only what we make but how we make it.

We unpack the promise and the pitfalls of treating AI as a true thinking partner, not just a tool for automation. Our guests share practical strategies for using AI to augment creative work, streamline tedious tasks, and enhance idea generation—while emphasizing the necessity of human framing, expertise, and judgment. Whether you're a leader, designer, marketer, or filmmaker, we reveal why using AI thoughtfully is the real competitive edge in creative fields and business.

Five Key Learnings:

  1. AI’s Compounding Edge: Utilizing AI consistently and benchmarking progress gives creatives and teams a multiplying advantage—not by replacing human originality, but by amplifying it through incremental improvements.
  2. Framing Questions Matter: The ability to ask the right, nuanced questions remains fundamentally human, and is essential when using AI as a partner in ideation, research, and strategy.
  3. Context and Expertise Are Critical: Experts benefit most from AI—leveraging their knowledge to dig deeper, validate outputs, and push beyond generic solutions, while ensuring originality in their work.
  4. AI as Scaffolding, Not a Substitute: The greatest value of AI today is in reducing friction and clearing time for creativity—whether it’s summarizing information, managing knowledge, or prepping film edits—so humans can focus on what matters.
  5. Human-Centric, Supportive AI: Tools like Tachi AI demonstrate that supporting creativity is more transformative than automating it; AI as infrastructure enables faster iteration and more creative decision-making, not just higher productivity.

 

Get full interviews and bonus content for free! Just join the list at DailyCreativePlus.com.

Mentioned in this episode:

To listen to the full interviews from today's episode, as well as receive bonus content and deep dive insights from the episode, visit DailyCreativePlus.com and join Daily Creative+.

The Brave Habit is available now

My new book will help you make bravery a habit in your life, your leadership, and your work. Discover how to develop the two qualities that lead to brave action: Optimistic Vision and Agency. Buy The Brave Habit wherever books are sold, or learn more at TheBraveHabit.com.

Todd Henry [00:00:01]:

Picture this. A young mathematician sits at her desk, pen in hand, contemplating a revolutionary machine. She's been studying the architecture, the blueprints, understanding how it processes information. And then she has a thought that stops her cold. She writes, the engine might compose elaborate pieces of music. It could produce graphic art. She pauses and considers the implications. Could this machine actually think? Could it be creative in the same way that humans are creative? Her colleagues are skeptical.

 

Todd Henry [00:00:32]:

The machine's inventor himself, her mentor, insists that it can only do what it's been programmed to do. It has no capacity for originality, no spark of genuine creation. But she sees something more. She imagines a future where these machines don't just calculate. They generate poetry. They create melodies. They maybe even surprise us with ideas we hadn't even thought of ourselves. She wonders, if a machine can manipulate symbols according to rules, why can't those symbols represent anything? Numbers, of course, but also musical notes or words or visual patterns.

 

Todd Henry [00:01:04]:

Then she writes something that will echo for a long time to come. The engine might compose elaborate and scientific pieces of music of any degree of complexity. Here's the incredible thing. Her name was Ada Lovelace, and the year was 1843. The machine that she was contemplating was Charles Babbage's analytical engine. It was never fully built. But her vision is what we're living with today, nearly 200 years later. The conversation we're having about AI and creativity, well, Ada started it before the light bulb was even invented, before the telephone, before recorded sound itself. Today, we're going to sit down with the people living inside of Ada Lovelace's vision.

 

Todd Henry [00:01:55]:

A researcher who's been pioneering the intelligence that she imagined. A journalist who's been tracking how intelligence is reshaping our world. And two creators who are building tools to strip away the tedious work so that artists can get back to what makes them human. The question that Ada asked in 1843 is the same one we're asking now. When machines can do what we do, what does it really mean to create? This is Daily Creative. Since 2005, we've served up weekly tips to help you be brave, focused and brilliant every day. My name is Todd Henry. Welcome to the show.

 

Vasant Dhar [00:02:36]:

I saw this interaction between a physician and a system called Internist. We were discussing a medical case; this was 1979. Going back and forth, the machine was asking the expert, Jack Myers, some questions. And at some point in the interaction, he said, why are you asking me this question?

 

Todd Henry [00:02:53]:

That's Vasant Dhar. He's a professor at the Stern School of Business and a decades-long expert in artificial intelligence. He's also the author of a new book called Thinking with Machines.

 

Vasant Dhar [00:03:04]:

He's puffing on his cigar. Internist came back and said, because the evidence you've given me so far is consistent with the following hypotheses, and this question will help me discriminate between the top two. And I was just standing there and said, holy smoke, how is a machine doing this? This was way back in 1979, and I realized that people were building very sophisticated reasoning systems at that time. The vocabulary of AI was reasoning, understanding, planning, thinking. And so our goals were lofty, even though our tools were somewhat limited. And that was the era of expert systems. These systems were built with a great deal of effort. Internist had taken 10 years of collaboration, and they performed really well.

 

Vasant Dhar [00:03:44]:

There were some great successes. But that paradigm ran into a wall because it required humans to specify their knowledge. And we know more than we can specify. And at that time we focused on expertise. Let's try and define domains with expertise like medicine or engineering or tax planning and build these high performing expert systems. It ran into a wall because experts use common sense, they go into other areas. And so these systems would break down at the edges. Now, in the late 80s, early 90s, machine learning came to the rescue.

 

Vasant Dhar [00:04:14]:

And that's when I went to Wall Street and the emphasis of AI shifted to prediction. And so forget about those hard problems of reasoning, understanding and stuff like that. The field pivoted towards prediction. Machine learning was then followed by deep learning, where the machine could perceive the world the way we do, directly through vision, through sound, through language, now even through smell. And so intelligence moved upstream. And that was a big deal because you didn't now have to translate data for the machine to understand. It could perceive the world directly. Huge advance.

 

Vasant Dhar [00:04:48]:

The latest paradigm shift is what I call general intelligence, where the machine knows something about everything and that something is getting deeper. Now, if you were to ask me what's the one thing that distinguishes this latest paradigm shift from earlier paradigms in AI, I'd say that it's the dissolution of the boundary between expertise and common sense. So in AI, we'd always said common sense is too hard, and people had tried to teach the machine common sense and failed. And that had always been a vexing problem for AI. So we focused on expertise, and that's the problem that modern AI solves, right? These chatbots, they don't distinguish between expertise and common sense, or whether they're reasoning about something at a deep level or a shallow level. They just generate responses, and the distinction has broken down. And that's also led to the widespread adoption of AI. Everyone can relate to it.

 

Vasant Dhar [00:05:38]:

I was giving a talk the other day at an insurance company. The security guard looks at me and says, so, professor, what do you think about ChatGPT? And I said, that's a deep question. What do you have in mind? And then he got into it, and I said, do you use it? He says, yeah, I write poetry with jazz overtones. And I said, oh, that's really interesting, because jazz came into its own in the 30s with amplification of bass and stuff. His eyes lit up and he said, yeah, and it helps me to write poetry. And I said, are you getting better? And he looked at me strangely: what do you mean? I said, is your poetry getting better by using it? And he said, that's an interesting question. I said, keep track of it.

 

Vasant Dhar [00:06:11]:

Keep track of your poems now, three months from now, six months from now, and see if you've gotten better. And by the way, this lesson applies to leaders in general, to people running businesses: are you trying things out? Are you seeing whether you're getting better? Are you benchmarking how humans are doing at their current jobs? So this 10-minute conversation with the security guard, before someone came and got me, could have gone on for a lot longer. Everyone can relate to it. And that's what's unique about this current AI paradigm.

 

Todd Henry [00:06:37]:

You just hinted at something that relates to a theme in your book, which you call the compounding edge: those who are currently using AI, and who are, like you said, setting benchmarks, measuring their progress, and figuring out how to use it in new and interesting ways, those who are incorporating these alien brains, as Kevin Kelly calls them, are going to have a compounding edge over time. Talk about that compounding edge and how it applies especially to those in the creative fields, design, art, who maybe think that's a uniquely human thing: we can't apply AI to these fields because this requires a human.

 

Vasant Dhar [00:07:13]:

This notion of an edge applies to everything. And I use this example from Roger Federer's commencement speech at Dartmouth College, where he talked about the fact that he won 80% of his matches, but, he said, the percentage of points that I won was barely 54%, barely better than even. And that really resonated with me because I have created a machine learning hedge fund, and one of the things I realized was that sports and finance are two sides of the same coin: very competitive, no one's leaving anything on the table. And you need to have that slight edge over the opponent, and that edge just multiplies, right? So in Roger Federer's case, if he's slightly better at every point, then over the course of a match the impact compounds. And I found that was sort of what was happening in finance as well. If I could do every trade just an iota better, then it would compound. And the same applies to any area of business, right.

 

Vasant Dhar [00:08:06]:

When you're doing transactions, if you're just doing something slightly better, right? If your underwriting process in an insurance company is slightly better, if your creative process is slightly better, that edge just compounds over multiple transactions. And that was the lesson: you don't have to be perfect, all you have to be is slightly better than average or slightly better than some benchmark. And this has huge implications for leadership, where all you're looking for is that slight edge.
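Federer's numbers are easy to sanity-check with a quick Monte Carlo. The sketch below (our own illustration, not something from the episode) uses a simplified model of tennis scoring, ignoring serve alternation and other real-rule details, to show how a 54% chance of winning each point compounds into winning the large majority of matches:

```python
import random

def game(p, first_to=4):
    """One game: first to `first_to` points, win by 2 (deuce rule)."""
    a = b = 0
    while max(a, b) < first_to or abs(a - b) < 2:
        if random.random() < p:
            a += 1
        else:
            b += 1
    return a > b

def tennis_set(p):
    """First to 6 games, win by 2; 6-6 is settled by a first-to-7 tiebreak."""
    a = b = 0
    while True:
        if game(p):
            a += 1
        else:
            b += 1
        if max(a, b) >= 6 and abs(a - b) >= 2:
            return a > b
        if a == 6 and b == 6:
            return game(p, first_to=7)

def match(p, sets_to_win=2):
    """Best-of-three-sets match."""
    a = b = 0
    while max(a, b) < sets_to_win:
        if tennis_set(p):
            a += 1
        else:
            b += 1
    return a > b

random.seed(0)
trials = 2000
wins = sum(match(0.54) for _ in range(trials))
print(f"54% of points -> {wins / trials:.0%} of matches")
```

Even under this toy model, a 4-point edge per point translates into winning well over three quarters of matches, which is the compounding Dhar is describing.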

 

Todd Henry [00:08:33]:

You asked an interesting question of that security guard. You said, is your poetry getting better? Which I find to be a really interesting question, because I think what you're hinting at there is you have to have the right metric, you have to be asking in the right way. You could have asked, are you getting faster? Are you writing more poems? But you didn't; you said, are you getting better? That to me is a very important distinction, because I think a lot of leaders, when they look at AI, they think, oh, we're going to be able to do so much more work, we're going to be able to produce things so much faster. But that's not really what they should be looking at. How do we determine whether AI is in fact making us better? What do you recommend that we be thinking about?

 

Vasant Dhar [00:09:17]:

Oh, that's a great question. I asked him whether he was getting better because poetry is a craft; he probably doesn't care whether a poem takes him a day or two days or a week. He wants that final product to be really good, right? Something he's proud of that will resonate with his readers. And so it's a craft. That's not to say that faster isn't better either. Sometimes, you know, if you can do so many more things per unit time, again, that edge can multiply even by doing things faster. And in my book I talk about this multi-agent system called the mother and bot, which is designed after one of my colleagues to do valuation of companies. When I first started the project, my thinking was that this would systematize long-term investing, make it algorithmic. That was my thinking as a short-term AI-based portfolio manager, because I had done high-frequency trading, short-term trading, where there's lots of sample size and you can get good statistics and know that you're doing better than some benchmark.

 

Vasant Dhar [00:10:15]:

But to me, long-term investing was one of these things that was inherently human, until ChatGPT came along. And so I teamed up with my colleague. We had mused about this 10 years ago, but didn't think it was a great idea. With ChatGPT and these language models, we decided to revisit that original idea, and my initial thinking was that we could run the bot on the S&P 500, which would be physically impossible for a human to do. It would take you months to generate reports on all those companies. And we could systematize decision making, just remove the human from the loop. Now that I've built it and I see how it operates and the kinds of reports that it generates, my thinking is that yes, it might actually systematize decision making, but to me, the more interesting part is that it'll make analysts so much better by enabling them to do scenario analysis, where its thinking process changes and the numbers change. That's impossible to do at the moment by humans.

 

Vasant Dhar [00:11:10]:

Right. We just don't have the cognitive capacity to be able to do that. But to me, what I'm realizing is that a tool like this will make analysts so much more productive, you know, because they can do 10 reports in a day as opposed to one report every two or three weeks or a month. Right. So orders of magnitude more productivity. So being able to do things better is one sort of potential, but also faster because it just enables so much more work.

 

Todd Henry [00:11:39]:

The example you just gave is a very interesting one, because I think what you're talking about is testing assumptions or testing hypotheses, and in some ways giving yourself the freedom, the flexibility, to push the bounds of your current assumptions and to play around with ideas that are impractical and honestly would probably be a waste of time in many cases to test. But with artificial intelligence, we can ask those questions and explore those different permutations without that waste of time, because we're allowing the intelligence to do that for us. So we can maybe test an assumption that we know probably isn't going to prove useful. But what if it does? That would be a waste of time if we were to spend our own time doing it, but to let an AI do it takes maybe seconds to run all the scenarios.

 

Todd Henry [00:12:30]:

You talk about this in terms of what you call framing questions. The ability to frame up the right kinds of questions. How can leaders get better? Or what are the qualities of a good framing question? And how can leaders get better at asking framing questions that help them get the most out of artificial intelligence?

 

Vasant Dhar [00:12:47]:

You know, that's a great question, and it's been one of the most vexing problems in building this bot: to get it to ask interesting framing questions like its master, right? Because when the mother valued Nvidia two years ago, the first question he asked was, is AI an incremental or disruptive technology? Brilliant question, right? Almost obvious in retrospect, but it was a great framing question, because incremental technologies are easier to define, the markets are easier to define, whereas disruptive technologies are much more uncertain, their scope is much more widespread. And then you can start drawing analogies: is it like electricity? Is it like the Internet? And that's what enables you to ask the right kinds of questions. And to me that's still the strength of humans; computers just aren't very good at those kinds of framing questions, where humans are really good at them. Can machines actually help humans ask better framing questions? Possibly, by nudging their thinking in the right direction. But to me that's still largely a human thing. Will the machine get better at asking framing questions? Possibly. But at this time I view that as a largely human-driven kind of exercise where the machine is a partner, but it's not framing the problem for you.

 

Vasant Dhar [00:14:07]:

Now that doesn't mean that it can't do that. It may still come up with some framings that you find interesting. It's just that I wouldn't rely on it to do that. That's where I would try and exercise my human creativity and judgment.

 

Todd Henry [00:14:21]:

Vasant Dhar's new book is called Thinking with Machines, and it's available now wherever books are sold. So I love Vasant's perspective about using AI as a thinking partner. But how exactly do we do this? How are the best and most productive creative pros using AI to help them do better work, to help them get better at their work? Well, fortunately we have an expert to help us figure that out. Christopher Mims is a technology journalist, and he's also the author of a new book called How to AI.

 

Christopher Mims [00:14:52]:

I just want to highlight that artificial intelligence implies that we have created a human-like or even an animal-like intelligence, but in silico, right, with chips and software. We have not created that at all. If you really dig into the guts of it, we have created an alien intelligence. This is the pink slime of intelligence. So that's why I call it synthetic intelligence. There are no natural flavors here. It is artificial all the way down.

 

Christopher Mims [00:15:18]:

It is a totally human thing, discovered, I would say, more than made. So I think of it as synthetic intelligence.

 

Todd Henry [00:15:27]:

One of the great threats is that, like you said, we anthropomorphize it, we treat it like a human. We ascribe to it qualities that it simply doesn't have.

 

Christopher Mims [00:15:34]:

Yeah.

 

Todd Henry [00:15:35]:

This intelligence doesn't have empathy for us. It honestly doesn't care about us at all. But it can feign empathy, which is, I think, dangerous if we invite that into our process or our lives.

 

Christopher Mims [00:15:48]:

Yes, it can be dangerous. It can also be powerful in a good way. There was a recent Modern Love column in the New York Times by a woman in late middle age who went through a divorce. She had adequate support, so she wasn't a super lonely person. She was checking in with her friends every day. But ChatGPT, by validating her feelings, helped her get over that breakup. That's a positive use of empathy and AI. Obviously, where it gets dangerous is when it can be used to manipulate us.

 

Christopher Mims [00:16:18]:

Right, by getting into the part of our brain that thinks, oh, this thing cares about me, it understands me.

 

Todd Henry [00:16:23]:

How can a creative professional, a leader, a marketer, a manager, a designer, how can we be thinking about the role that AI plays, or should play, in our lives as we're going about our work every day?

 

Christopher Mims [00:16:34]:

Sure. So it has everything to do with AI's current capabilities and limitations. This is a concept that the academic Ethan Mollick calls the jagged frontier of the ability of AI. And what he means is it's very spiky: it's great at some things, it's bad at other things. I like to play a little game called good AI, bad AI. So, bad AI: trying to use current-generation AI video generation models for finished advertising output. McDonald's just did this.

 

Christopher Mims [00:17:00]:

There was such a backlash that they had to take down the video. And it's partly because there were just these weird little AI slop things in the video, but it's also because, even though a commercial is a product, advertising still is about humans creating a thing that other people connect with. And if we know that it's actually just the pink slime of an artificial or synthetic intelligence, we're not going to connect with that. Just not in this day and age; maybe in future generations. So that's bad AI. Good AI: at the brainstorming stage, it can be used for concept art. There's plenty of evidence that if you ask people to come up with a list of ideas.

 

Christopher Mims [00:17:40]:

This is a business school exercise people do all the time. You know, obviously two people are better than one person. If you're asking people to come up with a list of ideas, depending on the task, one person plus an AI can be as good or almost as good as two humans. And best of all is two humans and an AI, because the AI can inject all kinds of weirdness; it's kind of like a random number generator when you're trying to come up with new ideas. So one of the principles of the book is that AI is not going to be creative on its own. It can help you be creative. Now, the third bad AI thing, I would say, is you've got to be careful: there is evidence that the ideas this AI is giving to you, it might be giving to somebody else who's using a similar prompt.

 

Christopher Mims [00:18:28]:

And that has everything to do with the mathematics of this AI and the distribution of the answers it's giving you. So it can help you come up with more creative ideas. But are they more original? I don't know. You better check. You better see what the prior art is out there.

 

Todd Henry [00:18:41]:

I think that's one of the challenges, right? For example, you write columns, you write books, I write books, you've written seven of them. One of the challenges is that if you're using AI in the wrong way, you can end up producing a product that is very similar to another product. Right. If you're not using it in the way you're describing, if you're using it as a replacement for your thought process. However, to your point, using AI to push you out of the bounds of your normal mental construct, right, to get you to think about stimulus you wouldn't otherwise consider, can be incredibly valuable. And that kind of plays into what you call the second law of AI, which is that experts benefit the most from AI.

 

Todd Henry [00:19:18]:

Why is expertise crucial when using these tools? What makes experts uniquely positioned to be able to use them more effectively?

 

Christopher Mims [00:19:26]:

Yeah, the reason experts can get more out of generative AI than amateurs or entry-level folks is twofold. Number one, you're an expert; you can tell if it hallucinates or misunderstands something. Because sometimes it's not a hallucination; sometimes the source material it's drawing from is just wrong. I have had AI deep research tools faithfully reproduce research made by humans, and when I dug into it, the human was wrong. So it's not the AI's fault at that point. So, number one, experts have expertise, right? By definition. This is why expert coders get the most out of coding tools, right? Because they can read what the AI has generated quickly and be like, that's wrong, or that library doesn't exist, whatever.

 

Christopher Mims [00:20:10]:

The second part is that experts know what questions to ask. They know how to push the model and ask deeper and deeper follow-up questions, or provide more of what practitioners call scaffolding. And I gave an example in the book, but this is a real example. All right, so I'm a journalist. If I already know where I'm going, because I know what the potential subject of an article is, I can ask these very detailed, specific questions that push the AI deep research tool way outside of where it would give me an answer if I were just asking a very basic question. And so in that way, it can have a lot of utility when I'm trying to push way down that long tail of the distribution of what's on the Internet to find that very narrow, interesting, let's say more valuable information that I'm trying to surface for my readers. But it's because I have that expertise, because I've already done some of the research.

 

Todd Henry [00:21:04]:

And this also in some ways connects to what you call the ninth law of AI, which is that context is king.

 

Aden Bahadori [00:21:09]:

Right?

 

Todd Henry [00:21:10]:

The context within which we ask questions. The short term memory that the AI has is really important because it allows us to go deeper and to focus the tool more effectively. What's some advice you have for people about how to leverage context more effectively when they're using these AI tools?

 

Christopher Mims [00:21:26]:

Yeah, so previously it was all about what you feed the AI first. Let's say you're a marketer; this is a very common use case for generative AI. You might give it everything that you've written in the past year, or the entire contents of your book, and then you say, okay, you've digested that, now write about this topic, given these inputs, in my style, in the style of what I just gave you. That is giving the AI context so it can write like you. And it's surprisingly good at that, for first drafts anyway. But the AI companies are getting savvier about this.

 

Christopher Mims [00:22:01]:

So Anthropic in particular now offers what are basically pre-written, premixed recipes. In the old days you'd have to go search the Internet for very detailed prompts if you wanted a certain result. Now they'll just preload the AI's memory with all of these very detailed prompts that they know work, and then there are a few slots where you drop in whatever inputs. And it doesn't just have to be text; it's not just about writing anymore. These LLMs used to be really bad at math, and now they've put these kinds of hooks into actual regular software so that they can do math. Right? Turns out if you want to do math, you need just plain old arithmetic, and that's what LLMs are bad at.

 

Christopher Mims [00:22:47]:

So the hack is: don't teach the LLM to be better at arithmetic, because you can't really. Give the LLM the ability to call on a piece of software that can do the arithmetic for it. That's why Claude is now great at handling spreadsheets. And I'll tell you, that's going to be revolutionary. The world still runs on Excel. We don't think of that so much as creatives, but that's huge.
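The tool-calling pattern Mims describes can be sketched in a few lines. This is our own hypothetical harness, not any vendor's actual API: instead of answering directly, the model emits a structured request naming a tool, and the surrounding code runs real arithmetic and hands the result back.

```python
import ast
import operator

# Map AST operator nodes to real arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr):
    """Safely evaluate a plain arithmetic expression by walking its AST
    (no eval, so only +, -, *, / and numbers are allowed)."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

# A hypothetical model turn: the LLM decides it needs a tool and emits
# a structured request rather than guessing at the arithmetic itself.
model_turn = {"tool": "calculator", "input": "37 * 12 + 350"}

TOOLS = {"calculator": calculate}
result = TOOLS[model_turn["tool"]](model_turn["input"])
print(result)  # 794
```

The point of the design is the division of labor: the model only has to recognize *that* a calculation is needed and phrase it; deterministic software does the part LLMs are bad at.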

 

Todd Henry [00:23:07]:

You talk about AI as scaffolding, right? It's kind of built into a lot of the products that we're using; it's almost invisible in some of them. But that seems to me like a use of AI that is not replicative of the creative process, but really just gives you more efficiency in the parts of the process where you want to remove friction, while allowing you to keep the friction in the parts of the process where friction is actually valuable. Friction slows us down; it forces us to intuit.

 

Christopher Mims [00:23:36]:

Right.

 

Todd Henry [00:23:36]:

In your experience, your research, your writing, and even your own use of generative AI, what are some ways that you use AI to remove friction while also preserving the parts of the process where you want to stay connected to the creative process itself?

 

Christopher Mims [00:23:52]:

Yeah, well, I'll give you two examples, but I want to start with video. This is something I've been messing with a little bit more myself, too, because I have my own video podcast that I do for the Wall Street Journal, and we were just begging our video editors: please cut more clips out of these 20-, 30-minute interviews we do that we can pop out on social media. The problem was, they're busy. Right? It's a low-value thing; how many more incremental views is this going to generate? Guess what? There's a million AI clip generators now. And it's not that they're synthesizing the video; they're just deciding, okay, this is probably interesting, let's start and end here.

 

Christopher Mims [00:24:25]:

And this is built more and more into tools that incorporate AI into the workflow. Descript, for example, which a lot of people use. Phenomenal, and getting better and better, because it's directly incorporating AI. In my own life, and sometimes I hesitate to share this because it's like, how many people have the needs of a journalist? But I think a lot of people have the need to manage knowledge the way that a writer does. In my own world, NotebookLM keeps getting better and more useful. And I'll give you three really quick use cases.

 

Christopher Mims [00:24:55]:

Number one, I haven't read a report that was longer than two pages in a year, because I don't have the time. So I will dump it into NotebookLM, and I might have it generate a podcast between two people, both of whom are AI generated, so I can get a quick summary, or I will read its summary. And it's not that I'm just reading the Cliff Notes version; it's the fact that I can then have a conversation with the document. And so I think we are moving toward a more Socratic way of interacting with information. We forget, there's a whole fascinating book on this, how weird reading is. Literacy is weird. For most of human history, for 99% of the population, you had to be a scribe to read, right? Now we take it for granted that we have a 90% literacy rate worldwide. Reading is strange, and I don't actually think it's the default way that we should be taking on new information.

 

Christopher Mims [00:25:53]:

So AI gives us the ability to have an auditory, or a back-and-forth typing, conversation with documents, where we can say, oh, that's intriguing. What about this? Does it say anything about this? Yes or no on this topic? And because Google has made NotebookLM very grounded, which means that it is much less likely to hallucinate, it's only pulling from the documents you give it, it is a great way to, I think of it as handing it to an assistant and being like, summarize this for me and I'm gonna ask you questions about it. It's such an unlock for me, and I hate that word because it's such a business word, but there are so many things that we do that are like communication theater, where people are sending us too much information.

 

Christopher Mims [00:26:37]:

You should have just given me the bullet points. And hopefully we'll get to a point where people know that everything longer than a page that they're sending you is going to get digested by AI anyway, and we'll all just start sending each other the short version in the first place, honestly.

 

Todd Henry [00:26:54]:

Christopher Mims' new book is called How To AI. It's fantastic, and it's available now wherever books are sold. In just a minute, we're going to come back with an interview with a duo who have created something that I think is a phenomenal use case for AI in the creative space. They've created a tool that takes the mundane tasks out of the way for filmmakers and allows them to focus on their process. We're going to talk about what it is, how it works, and how it applies to us whether you make films or not: how you can think about AI as infrastructure for your work rather than a replacement for your work. We'll be back with that in just a minute.

 

Todd Henry [00:27:38]:

Stick around.

 

Aden Bahadori [00:27:54]:

AI is a workflow tool at its core. It assembles the rough cut, if you will, gathering all the raw media and the script, and then it goes through our algorithm and puts together the initial assemble edit. The assemble edit is not highly creative; it's just a foundation edit that we as editors then build on top of. So that was our goal: to build a utility, not a generative tool.

 

Todd Henry [00:28:19]:

That's Aden Bahadori, and the tool he's talking about is Tachi AI, a film editing tool that helps editors get to the creative part faster.

 

Aden Bahadori [00:28:28]:

So it's just to allow us to see not just rushes, but also something assembled, to see if the scene is working and how it's working. And as editors, we're restricted with time. It's just to give us some options quickly, to see what we are able to do with the scene.

 

Todd Henry [00:28:44]:

So in some ways what you've created is a way to get to a general rough draft of an edit, right, just so that you can then start making creative decisions. I come from a music background, so I know there's a lot of editing that happens with vocals or instrumentals, like choosing maybe two or three of the best takes that don't have a lot of mistakes on them, which takes a lot of time for engineers to do, or in your case, for editors to do. What prompted you to start creating this product? What was the genesis of the idea for you?

 

Aden Bahadori [00:29:18]:

For me, it was a late-night session in 2012. I was working in a backlot suite, and this scene wasn't working. And I had so much more to go, and I only had a few weeks to finish the film. And I wished for that magic auto-edit button. That's when the genesis started for me, 13, 14 years ago in a dark suite. I thought if there was a way to get the technical edit done, get a scene cut together so I could just look at it, remove that technical element from my mind, and keep that creative juice flowing, that would have been amazing. That's how it all started.

 

Aden Bahadori [00:29:55]:

Because going through all the rushes, trying to figure out technical hurdles, reading the script and the continuity reports, that just drains me as an artist. That's technical, and I'm a creative person. So I wanted to have more creative time, less technical time. And that's how it all came to be.

 

Brett Granstaff [00:30:14]:

I was going to say, just jumping on that, for me too, it had to do with creativity. Right?

 

Todd Henry [00:30:18]:

That's Brett Granstaff. He is Aden's partner at Tachi AI.

 

Brett Granstaff [00:30:23]:

Because as a producer, I hate to say this, I've never actually watched any of my films after I finished them, because I've seen them so many times and I'm just over it. It's done. And so when you're going through and you're editing, and you're seeing these scenes over and over, and you're trying to figure out, does this take work? Does this take work? By the time you've watched 20, 30 takes of the same stuff, you're so mentally drained. It's like, okay, let me just finish it and move to the next.

 

Brett Granstaff [00:30:44]:

The creativity is just gone. And so for me, it allows editors, producers, directors to be more creative, because we can get through all the tedious work and get to the creative part. And also, wearing my producer hat, on set I can't tell you how many times you need to reshoot something because it doesn't work, because you get into the edit and realize, oh no, we filmed this wrong. With this, you have it on set: you film during the day, you hit the button, you wake up the next morning, and you can actually see a rough cut of the scene and say, oh no, that eye line doesn't match, hey, we need to reshoot this one part. Especially on a heavy special-effects movie, it's a money saver, it's a time saver, and it allows you to be more creative. So that's kind of the genesis of it.

 

Todd Henry [00:31:24]:

And I think that really gets to the core essence of what you're building. Obviously a lot of filmmakers are probably skeptical of AI, and we've seen all the lawsuits coming out about uses of AI, and even the idea of whether we're going to regulate AI and how that's going to play out. We're only at the very beginning, I think, of that entire conversation. But what you're doing is something very different. So how do you describe the difference between automating creativity and supporting creativity? I feel like what you're building is more of a supporting platform for creativity, to enable you to get to the creative process quicker, as opposed to automating creativity, which is: we're just going to create things.

 

Brett Granstaff [00:32:02]:

Let me jump in real quick. I went to NYU as an undergrad, and I was in one of the last classes that actually cut real film on a Moviola; we had to cut and splice and tape. And after that first year, I was like, this is awful. Who wants to do this? Then the second year, they moved us to Avid, and I was like, oh my gosh, this is heaven. And so this is the next step in that.

 

Brett Granstaff [00:32:22]:

It's like you're going from the Avids and the Adobe Premieres into something like Tachi, and it's helping. I like to say that for us, it's the equivalent of AutoCAD for architects. Right? They had to draw by hand and use scales, and if they messed something up, oh gosh, I've got to go back and redo the whole draft. Now they have AutoCAD. They can change things in two seconds, and it allows them to say, oh, I can change this. I wonder what this looks like.

 

Brett Granstaff [00:32:41]:

And they can change something, and it's really quick and easy, and it allows them to have more creativity, work faster, and do more projects. And so that's how I see Tachi for editors.

 

Aden Bahadori [00:32:50]:

Yeah, it's human-centric AI. It's a utility. It's not here to replace. I still have to go through the rushes. I still have to read the script. It's not replacing any of that. What it does is just get you a little bit closer, a little bit faster, without having to go through the entire technical process.

 

Todd Henry [00:33:09]:

So is there any concern about the thing we always talk about? I think it was in 1965 they said, oh, by the year 2000 we'll be working four hours a week because of technology. Is there any concern on your part that this is only going to amp up expectations on editors? Okay, now that we know you have this amazing automated assistant who can do all this work so much faster, is there any concern that's just going to add to the workload?

 

Aden Bahadori [00:33:39]:

I'm not concerned about that. Editing is a fluid process. There's no perfect, set way where you can say, this meets the standard, essentially, because you can always finesse, you can always add. And having a timeline, 10 weeks, 20 weeks, 2 years, is a great setting. So if you give me 10 weeks and I can utilize Tachi AI and make the best film or TV show episode, then that's what I'll use. But what we've seen is technology does cut down on our timelines, and it increases expectations. That's just a natural factor in any industry.

 

Aden Bahadori [00:34:14]:

Right. So when you give AutoCAD to architects, the clients want a faster turnaround. Right? And then they're going to have a thousand different changes. That's just the nature of the business. And I think just by allowing more creative time, it would really help the story. That's something that suffers when you have too much footage, not enough time, and lots of expectations. So this is just really to help us out as editors, to get to the core of the story and tell the story with the real emotions without having to make sacrifices.

 

Todd Henry [00:34:53]:

What I love about what Aden and Brett are doing with Tachi AI is that it feels like they're using AI as a supportive tool for the creative process. It's infrastructure again. It's taking away the grunt work, the unnecessary in-the-weeds work that takes your mind off of what you're trying to do. They're preparing you to be able to do the very thing that you are wired to do. And so whether we're talking about the conversation with Aden and Brett, or with Christopher Mims, or with Vasant Dhar, I think that common thread ran through every conversation: we need to be purposeful in how we think about using AI to support our process, not to replace our process. As I said in last week's episode, we need to own our creative process. We can't outsource that, but we can use AI as a thinking partner to help us get to a place where we're evaluating more options, where we're absorbing more stimulus, where we're exploring more permutations of our thought process.

 

Todd Henry [00:35:56]:

If we see AI as an alien brain to be leveraged, to be used to help us get to places we never could have gone otherwise, that is when it becomes immensely valuable in creative work. Hey, thanks so much for listening. If you'd like full interviews with all of our guests today, it's absolutely free. Just go to DailyCreativePlus.com, enter your name and email address, and we'll send you a feed where you can listen to those interviews absolutely free. That's DailyCreativePlus.com. My name is Todd Henry. You can find me, my books, my speaking events, and all my other work at ToddHenry.com. Until next time: may you be brave, focused, and brilliant. We'll see you then.

Christopher Mims Profile Photo

WSJ columnist and author

Christopher Mims is a Wall Street Journal technology columnist whose work makes complex tech trends accessible and actionable. Mims is the host of the WSJ Podcast “Bold Names” and the author of Arriving Today: From Factory to Front Door -- Why Everything Has Changed About How and What We Buy. He has won a SABEW award for commentary and previously worked at Quartz, Scientific American and Wired.

Aden Bahadori (CEO) and Brett Granstaff (COO) Profile Photo

TACHI AI

TACHI AI is a media technology company dedicated to building the next generation of tools for filmmakers and visual storytellers. Founded by a team of veteran filmmakers, data scientists and software engineers, the company is focused on utilitarian AI platforms that streamline post-production workflows while preserving artistic integrity. Its flagship product - TACHI AI editing platform - is engineered to reduce technical overhead and unlock creative potential for professionals across film, television, commercial and digital media. See more at www.tachi-ai.com.

Vasant Dhar Profile Photo

NYU Stern professor, host of the acclaimed Brave New World podcast and author of the new book, Thinking With Machines

Vasant Dhar, Ph.D., is a Professor at NYU's Stern School of Business and the Center for Data Science, and one of the world's leading authorities on prediction, data science, and trust in AI. He is also the creator and host of the “Brave New World” podcast and newsletter, where he interviews Nobel laureates, CEOs, and thought leaders about technology, ethics, and humanity. His research has been featured in The New York Times, The Wall Street Journal, Financial Times, Wired, and MIT Technology Review.