Tips for the AI Executive: Don't Study, Play.
At my core, I’m a doer, a hands-on builder. But like most technologists who move into leadership, I spend more time in Excel, PowerPoint, and email than in a code terminal.
Tech is moving fast, and I need a quick way to get educated on just about anything. Coming into a new team, there was a lot of tech I didn’t know, so I developed a process for learning quickly. I don’t study. I play.
As part of a new app, our team was capturing voice and transcribing it to text using OpenAI’s Whisper model, available on Hugging Face.
1. What is this thing?
Asking Gemini simple questions is the obvious move, but it still surprises me how many people ask questions in meetings that a single prompt could clear up. I don’t just ask what a technology is; I go a bit deeper into alternatives and risks.
Whisper’s tradeoffs turned out to be genuinely interesting, and for our use case the choice makes sense: it suits a batch-oriented, on-device transcription flow.
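One reason Whisper fits a batch flow is that it transcribes audio in fixed 30-second windows, so a longer recording gets sliced into chunks first. A minimal sketch of that slicing step (the helper name and numbers are my own illustration, not our production code):

```python
def chunk_samples(samples, sample_rate, window_s=30):
    """Split raw audio samples into fixed-length windows.

    Whisper processes 30-second segments at a time, so a batch
    pipeline typically slices a long recording like this before
    handing each chunk to the model.
    """
    window = window_s * sample_rate  # samples per chunk
    return [samples[i:i + window] for i in range(0, len(samples), window)]

# Example: 65 seconds of (fake) 16 kHz audio -> three chunks,
# two full 30-second windows plus a 5-second remainder.
fake_audio = [0.0] * (65 * 16_000)
chunks = chunk_samples(fake_audio, 16_000)
print(len(chunks))  # 3
```

Each chunk then becomes one model call, which is exactly the shape of work that batches well.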
2. Dig a little deeper
There are so many amazing tools at our fingertips, but one of my favorites is Google’s NotebookLM. I simply ask it for information about using Whisper for voice transcription, and NotebookLM finds and imports sources. Then the magic happens: I ask for an AI podcast.
The podcast usually runs 15–20 minutes, and the hosts are sometimes funny, sarcastic, and amazingly real-sounding. They walk through the topic, call out important points from specific articles, and make the knowledge sticky. It also makes the most of my commute time.
3. Play with it
Finally, I use Google Colab to create a demo. There are a hundred ways to do this now, but I like using Colab as an old-fashioned code editor. I prompt Gemini inside Colab to install the latest versions of everything, explain what I am trying to learn, and ask it to be very verbose with comments.
The process doesn’t always go smoothly, but that is the play: even the frustration of things not working is part of learning. Getting API keys to work, understanding how the model installs and updates, and finally getting the model to transcribe samples is all part of it.
Once I see how things work at a code level, I start playing with vibe-coding tools to build out more complete examples. For Whisper, I vibe-coded a silly app where I could speak into my computer about a meeting and then scan the transcription for contacts and action items.
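The scanning half of that toy app needs nothing beyond the standard library. A simplistic sketch, with patterns of my own invention (real extraction would want something smarter than regex):

```python
import re

def scan_transcript(text):
    """Naively scan a meeting transcript for contacts and action items."""
    # Email addresses are the easiest "contact" to pull out reliably.
    contacts = re.findall(r"[\w.+-]+@[\w-]+\.\w+", text)
    # Treat any sentence containing a follow-up verb as an action item.
    actions = [
        s.strip()
        for s in re.split(r"(?<=[.!?])\s+", text)
        if re.search(r"\b(follow up|send|schedule|review)\b", s, re.I)
    ]
    return {"contacts": contacts, "actions": actions}

transcript = (
    "Great sync today. Please send the deck to maria@example.com. "
    "I will schedule a demo for Friday."
)
print(scan_transcript(transcript))
```

Running it on the sample transcript above pulls out the one email address and the two sentences with follow-up verbs, which was all the toy app needed.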
The point of all this is to go deeper. The tools available now make learning so much easier. Playing can be frustrating, but that friction is part of the learning.