Artificial intelligence is advancing faster than our ability to measure it. Consider the puzzle: predictions about AI's progress have been wrong in both directions. Most benchmarks showed slower advancement than expected, yet scores on one math test, FrontierMath, jumped from 2% to 24% when OpenAI announced its o3 model. Measuring AI has become like trying to photograph a cheetah with a disposable camera.
The marketing world faces particularly tricky measurement gaps. Privacy regulations, signal loss, and scattered data make connecting ads to sales nearly impossible. AI promises to fix attribution and modeling problems, but ironically, these measurement tools themselves need better measurement before companies trust them. It’s a bit like needing glasses to find your glasses.
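To make the attribution problem concrete, here is a minimal sketch of one naive multi-touch model, where each conversion's value is split evenly across the channels that touched it. The channel names and journey data are entirely hypothetical, and real attribution tools are far more sophisticated; this only illustrates the kind of modeling being discussed.

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Split each conversion's value evenly across the channels
    that touched it (one simple multi-touch attribution model)."""
    credit = defaultdict(float)
    for touchpoints, value in journeys:
        share = value / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

# Hypothetical customer journeys: (channels touched, conversion value)
journeys = [
    (["search", "social", "email"], 90.0),
    (["search", "email"], 60.0),
    (["social"], 30.0),
]
print(linear_attribution(journeys))
# Each journey's value is divided evenly, so search, social, and
# email each end up with 60.0 in credit here.
```

The catch the paragraph describes is exactly this: with privacy regulations and signal loss, the `journeys` input itself is incomplete, so even a perfect model attributes over partial data.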
Meanwhile, AI initiatives inside companies keep hitting the same wall: nobody can prove they’re worth the investment. Without clear links to business outcomes, funding dries up and executive sponsors lose interest. The winners solve this by tying AI projects to concrete goals like cutting processing time or sharpening forecast accuracy. They speak in financial terms that CFOs understand rather than tech jargon that makes eyes glaze over.
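What "speaking in financial terms" can look like in practice: a rough sketch that translates a forecast-accuracy improvement into dollars. The demand figures and the cost-per-error-point assumption are invented for illustration, not drawn from any real deployment.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error: lower is better."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical demand figures before and after an AI forecasting pilot.
actuals       = [100, 120, 80, 150]
baseline_fcst = [90, 140, 70, 170]
ai_fcst       = [98, 125, 78, 155]

baseline_err = mape(actuals, baseline_fcst)
ai_err = mape(actuals, ai_fcst)

# Assumed: each percentage point of forecast error costs $50k in
# excess inventory -- the kind of translation a CFO can act on.
cost_per_point = 50_000
savings = (baseline_err - ai_err) * 100 * cost_per_point
print(f"MAPE {baseline_err:.1%} -> {ai_err:.1%}, est. savings ${savings:,.0f}")
```

The point is the final line: "MAPE dropped from 13% to 3%" glazes eyes, while "roughly half a million dollars in avoided inventory cost" keeps the executive sponsor in the room.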
Data quality creates another headache. Generic AI models trained on internet content can’t capture the weird specifics of individual businesses without access to internal data showing actual customer behavior and edge cases. More data doesn’t help if it’s the wrong data. Smart organizations focus on enriching datasets that reflect real operations rather than just piling up information.
Governance adds yet another layer of complexity. Forty percent of organizations cite security and regulatory risks as their biggest scaling obstacles. New regulations like the EU AI Act take effect in August 2025, demanding compliance frameworks that most companies haven’t built. Leading firms embed audit controls directly into their systems, cutting delays by 60 percent.
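"Embedding audit controls directly into their systems" can be as simple as making sure no model call happens without leaving a record. A minimal sketch, with a hypothetical model name and an in-memory log standing in for a real log store:

```python
import functools
import json
import time

AUDIT_LOG = []

def audited(model_name):
    """Wrap a model call so every invocation leaves an audit record --
    a control embedded in the system rather than bolted on afterward."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"model": model_name, "ts": time.time(),
                      "input": repr(args)}
            result = fn(*args, **kwargs)
            record["output"] = repr(result)
            AUDIT_LOG.append(json.dumps(record))  # or ship to a log store
            return result
        return wrapper
    return decorator

@audited("risk-scorer-v1")  # hypothetical model name
def score(customer_spend):
    """Toy risk score: scales spend into the range [0, 1]."""
    return min(customer_spend / 1000, 1.0)

score(250)
print(len(AUDIT_LOG))  # one audit record exists per call
```

Because the record is written in the same code path as the prediction, an auditor gets a complete trail by default instead of a reconstruction after the fact, which is where the claimed reduction in compliance delays comes from.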
Research measurement faces similar chaos. AI now generates hypotheses and accelerates breakthroughs in climate modeling and materials design, but tracking that acceleration requires constantly inventing new benchmarks. The tools we use to measure progress can't keep pace with the progress itself.