It’s not every day that you get to chat with someone whose work has influenced countless software developers worldwide—but today was different. I sat down with Chris Butler, a chaotic good product manager, writer, and speaker responsible for managing GitHub’s AI and Productivity team.

Chris Butler

“Remaining customer-obsessed is still the key to product management. If you’re not creating a solution for someone’s problem, you’re likely just building something for the sake of it, which is not a good approach.”

With previous stints at prominent tech firms including Microsoft, Waze, KAYAK, Facebook Reality Labs, Cognizant, and Google, Chris shares invaluable insights from his extensive background in both product leadership and operations.

Throughout our AImpactful conversation, we delved into Chris’ views on harnessing chaos to drive positive change, building high-performing teams, and nurturing growth across industries.

Q & A

Q. Throughout your career, you’ve worked on numerous projects involving AI and machine learning. Can you share with us some behind-the-scenes stories or interesting projects that had a significant impact on your journey? Our audience would love to hear about the challenges, successes, and lessons learned from your real-world experiences.

A. During my time at Facebook Reality Labs, I worked on the Portal device, a video calling device available both for TVs and as a standalone unit. One of the projects I was involved with explored the potential of using facial recognition to personalize the device’s functionality. This presented interesting challenges and required us to be highly technical in our approach to model development.

For instance, we could specify parameters such as the minimum number of lumens for room brightness or the minimum number of pixels required for accurate face recognition. However, what we truly wanted to focus on were the acceptable failure cases and their boundaries. This led me to a valuable lesson I learned during my time at Google, which is the use of design policies. These policies guide us in seeking nondeterministic behavior that aligns with our goals while avoiding unacceptable failures.
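A design policy of this kind can be made concrete as a pre-inference gate: the product declines to run the model outside an acceptable envelope rather than risk an unacceptable failure. Here is a minimal sketch; the threshold names and values are illustrative assumptions, not Portal’s actual parameters:

```python
from dataclasses import dataclass

@dataclass
class CapturePolicy:
    """Illustrative acceptance criteria for running face recognition."""
    min_lumens: float = 50.0   # assumed minimum room brightness
    min_face_pixels: int = 64  # assumed minimum face size in the frame

    def should_run(self, lumens: float, face_pixels: int) -> bool:
        # Outside this envelope, skip recognition entirely rather than
        # return a low-confidence (and potentially wrong) identification.
        return lumens >= self.min_lumens and face_pixels >= self.min_face_pixels

policy = CapturePolicy()
print(policy.should_run(lumens=120.0, face_pixels=96))  # True: within bounds
print(policy.should_run(lumens=10.0, face_pixels=96))   # False: room too dark
```

The point of the pattern is that the nondeterministic component only operates where its failure modes have been deemed acceptable; everything outside the envelope degrades to a deterministic fallback.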

Another project I’d like to mention is from my time at a company called Philosophie, where we had a client project with Google and PwC. We aimed to understand field service operations, specifically for gas stations, but the concept could be applied to any on-site service scenario. We experimented with various machine learning systems, including image analysis, summarization, problem prediction based on textual input, and conversational agents within group chats. We conducted user research to gauge the potential benefits for different stakeholders, including dispatchers, field service engineers, and warehouse workers.

Additionally, I’d like to share experiences from my time at Google’s Core Machine Learning Group and my work with a company called IPsoft, which had Amelia, a conversational agent platform tailored for enterprise use cases. I also founded a startup focused on restaurants, which we would likely call ‘AI for Restaurants’ today. We explored aspects like optimizing seating arrangements, auto-generating text, and enhancing the host and manager experience. These projects offered valuable lessons on effective interface design and information display for different machine learning systems.

Lastly, I’d like to emphasize the importance of patterns in the ‘People + AI Guidebook’. I’ll be providing resources and references to around five or six additional pattern libraries. It’s crucial to consider how we create interfaces and present information within a product to align with different machine learning systems. With generative AI, there’s the default chatbox approach, but there are numerous alternative ways to apply it, ranging from a fully autonomous agent to a tool that obscures non-determinism. Exploring this spectrum and the associated patterns will be a significant aspect of the course.

Q. What are the key principles and considerations for effective product management, especially when incorporating AI and machine learning models? How do product managers navigate the balance between customer needs, data privacy, and resource allocation?

A. Remaining customer-obsessed is still the key to product management. If you’re not creating a solution for someone’s problem, you’re likely just building something for the sake of it, which is not a good approach. Being a great product manager is a valuable skill, and there are a lot of nuances involved. I recall my early days working at KAYAK, specifically focusing on mobile at the time when mobile usage was surpassing desktop usage.

It was an interesting period because we didn’t fully understand how people would use Kayak or similar websites on mobile devices. This was around the time when the iPhone had been released, and we were still figuring things out. One of the revelations was that while people were searching more on mobile, they weren’t buying as much. This indicated that mobile provided earlier access, allowing users to research trips and costs anytime, anywhere. However, when it came to making a $1,000 flight purchase, users were less likely to commit on a small screen without all the tabs and information readily available.

From this, we learned the importance of providing availability and continuity across devices. Users might start their journey on mobile but complete the transaction on desktop. This is similar to the current situation with AI, where we’re still learning how to incorporate machine learning models. Product managers have to pay close attention to the data used for training, considering privacy and consent laws, which will be ubiquitous and vary across regions and states. They don’t need to be data science experts, but they should ask the right questions about the data and consider the need for data labeling and annotation tools.

Another aspect is the cost of training models, which can be substantial. Product managers should be involved in discussions about resource allocation and the trade-offs between maximizing GPU usage and model training frequency. It’s not their role to decide on model architecture or training methodology, but they facilitate conversations with experts to ensure efficient and sustainable use of resources.

Lastly, building trust and interpretability are crucial. Product managers need to collaborate closely with user experience experts and designers to create a seamless experience. At Facebook, for example, we had content strategists as part of every team, ensuring consistent and appropriate wording and terminology. It’s not just about building an app, but also considering how models will evolve over time. AI doesn’t just update itself; retraining and new data are essential. Product managers should be part of conversations about machine learning operations and keeping models up-to-date, fresh, and aligned with the current state of the world.

Q. How do you utilize AI in your daily work at GitHub? Can you share any specific projects or use cases that you’ve found particularly interesting or impactful?

A. I do use generative systems, like Copilot, for code completions. I experiment with these tools to understand their capabilities. Honestly, I don’t write much code daily, but I use generative systems to spark new ideas. For instance, I wrote an article about getting just enough confusion, where I used card decks to create random prompts for myself, ensuring I explore diverse thoughts.

I often ask AI to give me 20 ideas in a specific domain, knowing that most will be boring or familiar, but expecting one or two fresh concepts. This approach expands my thinking. Similarly, teammates can act as provocateurs, offering novel perspectives. I also use AI when writing science fiction, generating ideas for story elements or world-building.
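The “20 ideas” practice can be scripted against any chat-completion API. In this sketch, `divergent_prompt` is a hypothetical helper of my own naming, and the commented-out `complete()` call stands in for whichever LLM client you actually use:

```python
def divergent_prompt(domain: str, n: int = 20) -> str:
    """Build a deliberately high-volume ideation prompt."""
    # Ask for volume on purpose: most results will be boring or familiar,
    # but a large batch raises the odds of one or two fresh concepts.
    return (
        f"Give me {n} distinct ideas in the domain of {domain}. "
        "Favor unusual angles over safe, obvious suggestions."
    )

prompt = divergent_prompt("restaurant seating optimization")
print(prompt)
# response = complete(prompt)  # hypothetical LLM client call
```

The value is in the expectation setting: you are scanning the output for the one or two provocations worth keeping, not grading all twenty.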

Another application I’ve explored is brain dumping thoughts on a topic into a document and then using AI for summarization. Sometimes, the distilled key points reveal a workable draft, indicating potential for long-term writing simplification. We often create lengthy documents as product managers, which few read. AI could help produce shorter, more targeted content.

Envisioning the future of technical and non-technical collaboration, I see potential for AI to streamline current processes. Currently, a product person creates a specification, a designer reviews and creates mockups, an engineer pilots the architecture, and then there’s a technical design document. These components are often disconnected and out of sync.

What if the PRD could auto-generate mockups, which the designer updates based on the desired experience? This, in turn, updates the PRD, and the mockup auto-generates boilerplate code, providing engineers with a starting point for the architecture. The engineer can then output technical design documents for code reviews, further updating the mockups and PRD.

This approach ensures all elements remain connected and consistent. When something clashes or doesn’t make sense, it triggers conversations and prompts actions. AI can also translate technical jargon from code reviews into non-technical language, enhancing understanding. Ultimately, I see AI as a tool to facilitate better conversations between technical and non-technical stakeholders, helping identify common problems and their inherent tensions.
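One way to picture that linked-artifact workflow is as a dependency graph in which editing any artifact marks everything derived from it as stale, prompting a review conversation instead of silent drift. This is a hypothetical sketch of the idea, not a description of any GitHub implementation:

```python
class Artifact:
    """A product artifact (PRD, mockup, code, design doc) in a dependency chain."""

    def __init__(self, name: str):
        self.name = name
        self.stale = False
        self.dependents = []

    def links_to(self, other: "Artifact") -> None:
        self.dependents.append(other)

    def update(self) -> None:
        # Editing this artifact invalidates everything derived from it.
        self.stale = False
        for dep in self.dependents:
            dep.mark_stale()

    def mark_stale(self) -> None:
        if not self.stale:  # guard stops infinite loops if links form a cycle
            self.stale = True
            for dep in self.dependents:
                dep.mark_stale()

prd, mockup, code, tdd = (Artifact(n) for n in ("PRD", "mockup", "code", "tech design doc"))
prd.links_to(mockup)
mockup.links_to(code)
code.links_to(tdd)

prd.update()
print([a.name for a in (mockup, code, tdd) if a.stale])  # downstream artifacts flagged for review
```

Each stale flag is a trigger for exactly the kind of conversation described above: the tooling surfaces the clash, and the people resolve it.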

Q. You will share some of these insights in your upcoming course, “AI Product Design Patterns.”

A. This course aims to equip product managers, designers, and engineering managers with the skills to integrate AI and machine learning models into products effectively. AI product design patterns are essentially about how we integrate models and systems into products: not just the technical aspects of machine learning, but also data handling, building trust with customers, and interpreting system behaviors. I’ve been thinking about improving conversations between technical and non-technical people, and this course will delve into these topics. It consists of lectures, Q&A sessions, and workshops where participants can apply their learnings to their own projects. It’s a 4-week course designed to help participants apply the concepts to their work immediately, especially in conversations with engineering and research counterparts. The goal is to build the best AI- or machine learning-powered products possible.

“Product managers have to pay close attention to the data used for training, considering privacy and consent laws, which will be ubiquitous and vary across regions and states.”

“I do use generative systems, like Copilot, for code completions. I experiment with these tools to understand their capabilities… I often ask AI to give me 20 ideas in a specific domain, knowing that most will be boring or familiar, but expecting one or two fresh concepts. This approach expands my thinking.”

Q. Why is it essential to learn about AI product design patterns?

A. Gartner reported that fewer than 50% of AI models made it into production in 2020, and with generative AI, the situation has worsened. The reason is that these models often don’t fit well with the products.

This course is for product leaders, designers, and anyone interested in improving their AI/ML product development skills. Participants will learn how to create value for their customers by successfully integrating AI/ML into their products. They’ll understand and apply common design patterns, including the latest generative interactions, and learn to apply ethical, bias, fairness, and other guardrails to AI/ML products. Additionally, they’ll practice techniques for effective conversations with AI engineers and researchers.

Q. We’ve touched upon ethical implications several times, which is a crucial aspect when discussing AI implementation in everyday life, especially in product management. We’re curious about your current projects. Could you share any plans or insights?

A. While I can’t divulge much beyond what’s already public, like Copilot, I am working on a project called the “Employee Manual of the Future.” It explores a workplace where AI agents, both synthetic and modeled after people as digital twins, coexist with employees. It delves into the concept of living documents and raises intriguing questions.

For instance, if you can’t attend a meeting, could your digital twin stand in for you? What capabilities and limitations should it have? Should everyone’s digital twin adhere to the same moral compass or values, or should there be room for variation? These questions lead to discussions about organizational values and decision-making frameworks.

During my time at Google, I learned about the importance of moral imagination in technology development. There’s a team called Moral Imaginations, and I can share their latest paper on this topic. They emphasize understanding our values to make more ethical decisions.

As part of my course, I include a moral imaginations workshop where we role-play futuristic scenarios to analyze our decision-making processes. It’s fascinating to consider how our digital twins might reflect different aspects of our personalities, such as execution versus exploration modes. These agents could be tuned to align with our evolving preferences, but the question of long-term accuracy remains intriguing. Humans are constantly evolving, so capturing every aspect of our personalities is a challenging task.

Q. I have one more question regarding the implementation of AI. There are challenges, especially for small-budget or small-team companies. Conversely, we have large companies that are dissatisfied with the impact of their AI initiatives. So, what is your message for both groups? What advice would you give to those starting out or contemplating AI integration, and what lessons can be learned from missteps already taken?

A. In today’s interconnected world, it’s essential to view our solutions as part of broader ecosystems. Startups can benefit from focusing on specific niches and solving problems effectively with AI, gradually expanding their customer base. They might not have the resources for extensive model training, so off-the-shelf or fine-tuned open-source models can be a practical approach.

Large companies often face challenges with experimentation and tend to overly focus on ROI. It’s crucial to encourage rapid experimentation within teams, providing the space and budget to explore without immediate ROI pressure. This shift in mindset empowers teams to learn and maximize impact rather than solely focusing on ROI.

Additionally, large companies should reconsider the dynamics of their R&D teams. Instead of operating in isolation, R&D teams should be integrated with product teams. Product teams have firsthand experience with the problems and can provide valuable insights for experimentation and innovation. By fostering an environment that encourages experimentation and continuous learning, both small and large companies can effectively navigate the challenges of AI integration.

For those starting out, my advice is to embrace the ecosystem mindset and focus on solving specific problems. Explore open-source models and tools to get started without requiring extensive resources. For those who have already taken missteps, reflect on the lessons learned and create space for experimentation. Learn from both your successes and failures, and continuously seek to improve your AI initiatives.

Lastly, remember that AI is an ever-evolving field. Stay informed about the latest advancements, and don’t be afraid to adapt and experiment. By staying agile and open to new ideas, you’ll be well-equipped to navigate the challenges and opportunities that AI presents.

About The Author

Branislava Lovre

Branislava is a Media Expert, Journalist, and AI Ethicist who leverages her expansive knowledge and experience across various media outlets and digital landscapes.
