Around the World of AI with Panoply's Head of R&D

Last month, VentureBeat published an article by our own Head of R&D, Dima Dobrinsky. In this piece, Dima discusses the AI tool boom and how businesses can accurately measure and improve success with now-ubiquitous AI tools like ChatGPT and Bard.

Thanks to the prevalence of AI, plus Dima’s expertise and thought leadership, the VentureBeat article has been featured in 17 online publications and three languages (well, two plus the original English)! Our French- and German-speaking users and followers can now get Dima’s insights on leveraging AI successfully.

Here are a few takeaways from the article:

  1. How do we distinguish AI from traditional software? By its non-deterministic nature: different runs can produce different results, even with the same input. This makes AI exciting, but it can also pose challenges, especially when we try to measure the effectiveness of AI apps.

  2. What challenges do we see when measuring success? AI apps rely on statistical models and employ algorithms designed to handle uncertainty, which makes efficacy much harder to measure than in traditional software. AI can “think” like a human, but how do we know if what it’s thinking is right?

  3. Quality and diversity of training data - The data that AI models “learn” from needs to be high-quality and diverse to produce optimal outcomes. Evaluating that training data is key to determining the success of your AI tool.

  4. How do we overcome these challenges? 

    1. Define metrics for probabilistic success. Given the inherent uncertainty in an AI app’s results, assessing its success means adopting new metrics tailored to capture probabilistic outcomes, such as calibration rather than simple pass/fail checks (a minimal sketch appears after this list). Success models that made sense for traditional software systems are incompatible with AI tools.

    2. Robust validation and evaluation - To measure success, you’ll have to establish a strict validation and evaluation framework. This includes comprehensive testing, benchmarking against relevant sample datasets, and conducting sensitivity analyses to assess the system’s performance under varying conditions (see the second sketch below). Regularly updating and retraining models to adapt to evolving data patterns helps maintain accuracy and reliability.

    3. User-centric evaluation - AI success does not live solely in the algorithm; how effective the outputs are for the people who receive them matters just as much. Therefore, it’s critical to incorporate user feedback and subjective assessments when measuring the success of AI applications, especially for consumer-facing tools. Gathering insights through surveys, user studies, and qualitative assessments can provide valuable information about user satisfaction, trust, and perceived utility. Balancing objective performance metrics with user-centric evaluations will yield a more holistic view of success (see the third sketch below).
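To make the first point concrete, here’s a minimal sketch in Python of two metrics suited to probabilistic outputs: the Brier score (mean squared error between predicted probabilities and actual outcomes) and expected calibration error (how closely the model’s confidence tracks its accuracy). The predictions and outcomes below are invented purely for illustration.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; a perfect, fully confident model scores 0."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

def expected_calibration_error(probs, outcomes, n_bins=10):
    """Bucket predictions by confidence, then compare each bucket's average
    confidence to its actual accuracy; a well-calibrated model has a small gap."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(probs[mask].mean() - outcomes[mask].mean())
            ece += (mask.sum() / len(probs)) * gap
    return float(ece)

# Hypothetical confidence scores from an AI app, and whether each answer was right
predicted = [0.9, 0.8, 0.65, 0.3, 0.95, 0.4, 0.7, 0.85]
actual    = [1,   1,   0,    0,   1,    1,   1,   0]
print(f"Brier score: {brier_score(predicted, actual):.3f}")
print(f"Expected calibration error: {expected_calibration_error(predicted, actual):.3f}")
```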
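For the validation framework in the second point, the sketch below shows one way a sensitivity analysis might look: re-running an evaluation under perturbed conditions and comparing scores. The “model,” benchmark, and perturbations here are toy stand-ins, not a real evaluation suite.

```python
import random

def evaluate(model, dataset):
    """Fraction of examples the model answers correctly."""
    correct = sum(model(x) == y for x, y in dataset)
    return correct / len(dataset)

def sensitivity_analysis(model, dataset, perturbations, trials=5, seed=0):
    """Re-evaluate the model under each perturbed condition several times
    and report the mean score, so one lucky run can't hide fragility."""
    rng = random.Random(seed)
    report = {}
    for name, perturb in perturbations.items():
        scores = [evaluate(model, [(perturb(x, rng), y) for x, y in dataset])
                  for _ in range(trials)]
        report[name] = sum(scores) / len(scores)
    return report

# Hypothetical stand-ins: a "model" that classifies text length, plus noise perturbations
toy_model = lambda text: "long" if len(text) > 20 else "short"
benchmark = [("a short prompt", "short"), ("a considerably longer prompt here", "long")]
perturbations = {
    "baseline":   lambda x, rng: x,
    "whitespace": lambda x, rng: x + " " * rng.randint(0, 5),
    "truncated":  lambda x, rng: x[: max(1, len(x) - rng.randint(0, 3))],
}
print(sensitivity_analysis(toy_model, benchmark, perturbations))
```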
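And for the third point, balancing objective metrics with user feedback can be as simple as a weighted composite score. The 60/40 split below is an arbitrary assumption for illustration; the right weighting depends on your product.

```python
def composite_success_score(objective, user_scores, weight_objective=0.6):
    """Blend an objective metric (0-1) with average user feedback (0-1).
    The 60/40 weighting is an arbitrary choice for illustration."""
    user_avg = sum(user_scores) / len(user_scores)
    return weight_objective * objective + (1 - weight_objective) * user_avg

# Hypothetical inputs: offline accuracy plus survey ratings rescaled to 0-1
accuracy = 0.91                      # from benchmark evaluation
survey = [0.8, 0.6, 1.0, 0.7, 0.9]  # e.g., 1-5 star ratings mapped to 0-1
print(f"Composite success score: {composite_success_score(accuracy, survey):.2f}")
```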

Strategize for success

Measuring the success of any AI tool or app requires a nuanced approach that recognizes the probabilistic nature of its outputs. If you’re involved in developing AI in any capacity, particularly from an R&D perspective, you must acknowledge the challenges posed by this uncertainty.

Only by defining appropriate probabilistic metrics, performing rigorous validation, and incorporating user-centric evaluations can the industry effectively navigate the thrilling, uncharted waters of artificial intelligence.

Get a free consultation with a data architect to see how to build a data warehouse in minutes.
Request Demo