
Artificial Intelligence, Human Bias and Sticking the Landing in Federal Sales

  • Writer: Dale Luddeke
  • Aug 15
  • 3 min read

There is a common theme related to artificial intelligence (AI) that we hear almost every day: there must be a human in the loop. Based on what we know and what we have seen from AI to date, few would dispute the importance of having a “human in the loop.” We have also found that it is equally important to understand the biases of the human(s) in that loop.

 

The list below includes the seven generally accepted types of artificial intelligence. They are defined and classified by how the AI learns, how broadly it can apply its knowledge, and its ability to process a corpus of data and dynamically interact with its environment. We start with this as a framework.


[Image: table of the seven generally accepted types of AI]

 

When it comes to the Federal sales process for developing and capturing business, we operate primarily within the Narrow AI (predictive, generative, agentic, conversational, decision-support) and Reactive Machine types of AI. This is the tip of the AI iceberg, and we are still learning. We must also consider that we are actively responding to Federal requirements that may well extend into the AGI, Limited Memory, and perhaps Theory of Mind types of AI.


Then there are the human biases involved in selecting and using AI. Of course, the definition and classification of human bias is as broad as the number of humans (compounded by the education and experiences informing each bias) engaged in the Federal sales process. Because AI primarily performs an enabling function, automating repetitive tasks and improving human performance, it is important that we better understand and manage our biases so that we do not unnecessarily dismiss AI capability, content, and value. We need to adjust.


We spoke with an AI company whose platform has generated over $7B in awards for its clients in the past two years. We were told that the early adopters and most successful clients relied upon a single leader to make the selection decision. Here are examples we have encountered of human bias meeting definitive AI functionality and capability:


(1)   AI-generated pipelines were developed for a multi-billion-dollar company and a mid-tier company. The large company gathered feedback from multiple account executives; the mid-tier company had a single executive evaluate the pipeline. The large company responded that the output was nearly the same as the results it had spent several months identifying. The mid-tier executive noted that the AI pipeline was similar and that it identified key opportunities the market-leading tool did not. The large firm did not proceed. The mid-tier company advanced its bid profile.


(2)   Similar AI-generated competitive intelligence reports were presented to a team of several executives at one large company and to a single executive at a second large company. The reports contained deep client, mission, and opportunity insight, win themes, relevant justifications, and incumbent contractor weaknesses. The first executive team only questioned whether the information was correct. The executive at the second company stated, “I am not aware of half this information, and the other information took me nine months to gather.” The first company did not proceed. The second company did, and it added larger captures.


Why did one company see the value when the other did not? Is AI in the eye of the beholder, or in the bias of the beholder? Is your company seeing the benefits of AI in the Federal sales process? Have you questioned whether your company made the right decision about the AI platform it chose, or declined, to use? Was a single individual or a team of people empowered to make that decision?


AI transformation of the Federal sales process is just beginning, and AI enablement will evolve rapidly relative to prior technological disruptions. Historically, transformational change has been driven by a leader, and leadership with healthy skepticism and curiosity is paramount. Question everything in AI, but not on bias alone. Individuals have biases, and more people bring more bias to the process. Going forward, consider how biases impact decisions, including decisions made on AI-enabled opportunity pursuits that will introduce new content for consideration. Understand the bias; don't ignore it.


You will most likely evaluate more AI platform functions and capabilities, followed by more informed decisions made with AI-enabled content. Here are some thoughts on how to leverage knowledge and bias proactively with AI:


  • Consider what is possible, even audacious, in addition to what is in front of you.

  • Make sure isolated bad experiences do not dominate the decision process.

  • Request multiple AI-enabled options and be willing to rethink the decision criteria.

  • Make decisions that will scale AI and not depend on a single AI type or solution.

  • Challenge conventions and create new standards to measure the AI outcomes.

  • Encourage others to use AI-enablement for themselves to see what they derive.

  • Institute collaboration events, share perspectives, appreciate and address bias.


© 2024 BY CATAPULT GROWTH PARTNERS

Washington, DC
