To AI or Not to AI: Must-Ask Questions for Life Sciences Leaders

By Nick Kovacs, PhD, Talent Consultant at Mix Talent

Over the last couple of months, the potential use cases for Artificial Intelligence (AI) have continued to expand. Tens of thousands of technology firms are working hard to develop new solutions and applications for use across business functions, including talent management and HR, where AI is already being used in hiring, candidate targeting, and even interviewing.

With so much buzz around AI and related technologies, it’s hard not to ask questions like “Is this too good to be true?” or “Should I be worried this is going to replace my job?” I discuss these questions in my recent overview of AI in the life sciences, “The AI Balancing Act: Use Cases & Watch-Outs For Life Sciences Leaders.” 

The short answers: 

  1. Yes, many AI applications in talent management are too good to be true – they can be helpful but, as yet, are far from “game-changing”
  2. No, AI is not in a position to replace HR and talent management jobs, just as self-driving cars are not ready to replace human drivers (and may never be)

With these hefty questions out of the way, what remains are practical questions about practical applications. To be sure, there are reasonable, safe, and innovative use cases for AI that are worth pursuing. At Mix, we are carefully testing how AI can help us improve efficiencies in our work, and we are cautiously optimistic about its ability to help us do so over time.

However, when considering AI applications, the key words are “carefully” and “cautiously.” Weighing the pros and cons of AI systems is absolutely critical: implementation is simply too risky and too expensive to skip due diligence just to jump on the bandwagon.

Here, I want to help with that careful consideration, which starts with asking the right questions that can help guide your decision-making around AI implementation. These questions fall into three categories: Return on Investment (ROI), Risk, and Perceptions. Let’s start with ROI. 

Return on Investment (ROI): Making the business case

As with any new tool or technology, the first, second, and last question should be about ROI – that is, is the totality of what is needed to implement and use the technology worth the investment and upkeep?

Here are some questions to ask to help you evaluate this:

Question: Does this AI tool provide a better ROI than alternative solutions?

Currently, non-AI alternatives may deliver just as much ROI as AI at a similar or lower cost.

  • Consider whether the benefits of integrating the AI outweigh the costs (and unknowns associated with future costs).
  • Consider how you will measure the ROI of integrating AI. For example, how much cost is saved by needing less time to generate content if it still needs to be carefully reviewed before publishing?
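One way to approach the content-generation example above is with simple break-even arithmetic. The sketch below is illustrative only – every number (drafting times, review time, hourly rate, tool cost) is a hypothetical assumption you would replace with your own figures:

```python
# Hypothetical back-of-the-envelope ROI check for an AI content tool.
# All numbers are illustrative assumptions, not benchmarks.

def net_hours_saved(manual_hours, ai_draft_hours, review_hours, items_per_month):
    """Hours saved per month after accounting for human review of AI output."""
    saved_per_item = manual_hours - (ai_draft_hours + review_hours)
    return saved_per_item * items_per_month

def monthly_roi(hours_saved, hourly_rate, tool_cost_per_month):
    """Dollar value of time saved, minus the tool's monthly cost."""
    return hours_saved * hourly_rate - tool_cost_per_month

# Assumed: 3h to draft manually, 0.5h with AI plus 1h of careful review,
# 20 items per month, $60/hour labor, $500/month tool subscription.
hours = net_hours_saved(3.0, 0.5, 1.0, 20)   # 30.0 hours/month
roi = monthly_roi(hours, 60.0, 500.0)        # 1300.0 dollars/month
```

Note that review time is subtracted from the savings – if careful review takes as long as drafting from scratch, the net savings (and the ROI) can disappear entirely.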

Question: Do you have good data to use with the AI and is it specific enough for your needs?

For AI to provide predictive power and effectiveness above and beyond non-AI interventions, the input data must be accurate and relevant to your needs.

  • Consider whether the data you provide is accurate enough to ensure a “good” output from the AI – remember, garbage in, garbage out!
  • Consider whether the AI will provide results specific to your exact needs, including tasks, industry, roles, etc.

Question: How often is the AI model updated?

AI models are only accurate up to the time they were last updated. For example, the publicly available ChatGPT (GPT-3.5) has a training cutoff of January 2022 and is unaware of anything since then. This means any recent changes must be fed into the model before they can be reflected in its output.

  • Consider how frequently the AI will need to account for recent changes.
  • Consider how often large changes occur and how they might impact the bias or accuracy of the models.

Question: How much will integration and training cost?

As with any new tool or process, training and integration into the current structure will be important to evaluate.

  • Consider what training might look like for all employees using the AI, as well as the associated costs for building the training programs and employee time engaging in them.
  • Consider how you will measure the success of training, especially given all the potential risks to account for when using AI.

Risks: Avoiding bias and protecting sensitive data

Though the intricacies of how AI works are not always easily explained or understood, the decisions it informs must be accurate, and it must avoid bias across demographic groups. To ensure leaders are aware of the potential risks inherent in AI tools, we recommend asking the following questions:

Question: How does the AI system account for potential bias?

AI models are inherently open to bias without correction. In a world of compliance and risk management, understanding how bias is accounted for and reduced is important for any AI tool.

  • Consider the data that suggests biases have been accounted for and adjusted within the AI models.
  • Consider how your organization already accounts for bias in decisions and how easily these processes can apply to AI.
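One widely used compliance check in hiring is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if one group’s selection rate falls below 80% of the highest group’s rate, that is a common red flag for adverse impact. The sketch below uses hypothetical selection counts; in practice you would run this against your real applicant data:

```python
# Four-fifths (80%) rule check for adverse impact in selection decisions.
# Selection counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(group_rate, highest_rate):
    """Ratio of a group's selection rate to the highest group's rate.
    Values below 0.8 are a common red flag for adverse impact."""
    return group_rate / highest_rate

rate_a = selection_rate(48, 100)        # 0.48
rate_b = selection_rate(30, 100)        # 0.30
ratio = impact_ratio(rate_b, rate_a)    # 0.625 -> below 0.8, warrants review
```

The same check your organization already applies to human-made selection decisions can be applied to an AI tool’s recommendations before and after deployment.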

Question: What data is required to feed into the AI and how secure is it?

AI often requires organizational data in order to be relevant and applicable. Ensuring the security of this data – especially when AI is incorporated into systems holding employee or candidate data – is crucial.

  • Consider what proprietary or sensitive data might be exposed to the AI that should not leave the organization.
  • When providing data to the system, consider how secure the data is, and who actually owns the data once it’s provided.

Question: How has the model’s accuracy been measured, and who will be accountable for inaccuracies?

Though AI can be more accurate than humans, that doesn’t mean it always is.

  • Consider what methods have been used to suggest the AI is accurate, and what output exists to test this accuracy.
  • Consider the consequences of the AI being wrong, including who would be accountable and what backups might be necessary to mitigate any risks.
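One simple way to test a vendor’s accuracy claims is a backtest: compare the AI’s past recommendations against outcomes you already know. The data below is invented for illustration, and plain accuracy is only a starting point – in real evaluations you would also look at metrics like precision and recall:

```python
# Simple backtest: compare an AI screening tool's recommendations
# against known outcomes. All data here is illustrative.

def accuracy(predictions, outcomes):
    """Fraction of cases where the AI's call matched the real outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

ai_recommend = [1, 1, 0, 1, 0, 0, 1, 1]   # 1 = AI recommended advancing
actual_good  = [1, 0, 0, 1, 0, 1, 1, 1]   # 1 = candidate worked out
score = accuracy(ai_recommend, actual_good)   # 0.75
```

A backtest like this also surfaces the consequences of errors: each mismatch is either a good candidate screened out or a poor fit advanced, and someone must be accountable for both.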

Perceptions: Safeguarding the candidate and employee experience

Lastly, perceptions and reputation are always important to keep in mind when making changes – in fact, being seen as a company that uses AI is part of why adopting it has become so popular! To ensure candidates’, employees’, and others’ perceptions remain positive, we suggest asking the following:

Question: How will others view the experience of working with the AI system?

Whether candidates are seeking to join the organization or employees are experiencing the impacts of the AI, user experience is an important factor that can shape an organization’s reputation and future.

  • If using AI for hiring or marketing, consider how candidates or consumers might respond to the AI versus a personal touch with a human on the other end.
  • If using AI for internal purposes, consider how employees might view the AI – especially in situations when they might disagree with it.

Question: How much might employees trust the AI now, and in the long run?

Though trust in other people starts low and grows stronger over time, research suggests trust in AI starts high and then decreases – especially if mistakes occur, or if it appears AI may replace people.

  • Consider how you might build and maintain employee trust in AI, especially if inaccuracies arise.
  • Consider how to build trust that the AI will be used in collaboration with people rather than as a replacement for them.

Question: Can you explain how the AI model works?

When making decisions informed by AI, it is important to explain the reasoning behind the decisions so you can gain buy-in from others, improve upon your processes, and give feedback.

  • Consider what insights from the AI system can be shared with others, including how the AI arrived at those insights.
  • Consider what happens when people disagree with the AI, and to what extent its output can be challenged with conflicting data.


Bringing any new tool into an organization can be time- and energy-consuming, and with AI the level of evaluation needed to safely start using it can feel daunting. However, such decisions can have a major impact on your company now and in the future and should be made thoughtfully and methodically. 

If a deliberate, planned approach is taken, AI can change our future for the better. If these important questions are missed, the downsides of AI implementation may far outweigh the benefits.

As always, if you’d like to continue the conversation, reach out to me anytime on LinkedIn – I’d love to hear from you!

About the Author

Nicholas Kovacs, PhD

Talent Consultant

Nick is an industrial-organizational psychologist by training and a talent consultant at Mix Talent. His research expertise spans artificial intelligence, resilience, occupational health, analytics, and selection. In practice, he builds and administers selection and development assessments, consults on best practices for interviewing, compliance, and other selection-related processes, and conducts analyses using Excel, R programming, and machine learning. Nick began his career in healthcare research at the National Institutes of Health and has since shifted to supporting Fortune 100 companies and start-ups alike in the life science/biotech industry.

