
February 26, 2026, 1:00 pm - 2:00 pm ET
Companies are investing millions in AI tools for software engineering, betting on a new wave of productivity. Yet, many leaders struggle to prove these tools deliver real value, facing the risk that the massive spend is "all hype."
Join AI expert and technical instructor Manu Mulaveesala for a deep dive into a data-driven framework built on the latest cross-industry research.
After attending this session, you will understand how to measure the true impact of AI on engineering performance.
Your AI tools might be increasing pull requests, but a Stanford study of 120,000 developers shows they could also be tanking your code quality and multiplying your rework 2.5x. Are you measuring what truly matters?
What You'll Experience
In this complimentary 60-minute webinar, Manu will walk you through:
- The Widening Productivity Gap: Data shows a median 10% productivity gain from AI, but also a widening performance gap between successful early adopters and strugglers. This is creating a "rich get richer" effect where top performers compound their gains while others fall further behind. Learn why identifying which cohort your organization falls into is the first step toward closing the gap.
- Quality Over Quantity in AI Usage: Deconstruct the finding that high token usage correlates only loosely with productivity. We will explore the "Death Valley" effect (whereby teams with moderate token usage sometimes perform worse than teams using less) and explain the critical conclusion: the quality and context of AI assistance matter far more than sheer volume.
- The 'Clean Code' Amplifier: Discover why the quality of AI usage matters so much. Research reveals a strong correlation between codebase hygiene (modularity, documentation, tests) and significant productivity gains from AI. You will learn how unchecked AI usage accelerates codebase entropy and why training engineers on when, and when not, to use AI is critical.
- A New Framework for Measuring True ROI: Move beyond misleading vanity metrics like pull request counts. Measuring direct business outcomes is the ideal, but it is fraught with noise from confounding variables such as sales execution and macroeconomic conditions, making a focus on engineering outcomes essential.
- AI Maturity Benchmark: Discover the AI Engineering Practices Benchmark, a new open-source methodology for scanning codebases to detect "AI fingerprints" and gauge adoption maturity. We will outline the levels of adoption, from personal use to agentic orchestration, and explain why leaders must understand how their teams are using AI, not just whether they are using it.
At the conclusion of the presentation, Manu will answer your questions during a live Q&A session.
Who Should Attend
- VPs of Engineering & CTOs
- Engineering Managers and Directors
- Technical Leads and Principal Engineers
- Heads of Developer Productivity & Platform Engineering
- Leaders evaluating or managing AI tool adoption and spend
- Software Developers, Technical Architects, and Engineers
Browse our Generative AI Training for Developers for public courses and private, customized training.
For a related free webinar, view Manu's Context Engineering session.
About the presenter:
Manu Mulaveesala is a veteran technical instructor with more than a decade of consulting, development, and teaching experience in Artificial Intelligence, Data Science, and Machine Learning. In that time, he has taught more than 5,000 students across more than 100 organizations, 20 countries, and a wide range of technical backgrounds. Manu provides a dynamic, adaptive learning environment and enjoys tailoring training to clients' real-world projects and goals.