Machine Learning and Leadership

Treating humans like data, but in a good way

Photo by Chris Liverani on Unsplash

At Siamo, we firmly believe in a human-first, people-first approach. People are not resources; they can’t be told what to do like a computer, and they are more complex than any technology.

That being said, developing people as a leader is a lot like developing a machine learning model. Hear me out.

Core Concepts of Machine Learning

This part is not technical, but it is nerdy. You can skip this section if you want to cut to the meat of how this applies to leadership, but if you’d like a quick understanding of machine learning beforehand, read on. 

For machine learning and artificial intelligence, the goal is generally the same: train code to recognize patterns and build a model that can “make a decision” about what to do in a future, similar situation.

There are many examples of these models already running in popular use:

  1. Snapchat filters - human faces are unique enough that we can use Face ID to secure our phones, but not so unique that we couldn’t train a computer to know where the mouth is and whether our tongue is sticking out, and thus know exactly where to add bunny ears and sparkles. 

  2. Monitoring agricultural health - computers have been trained on hundreds of pictures of fruits, vegetables, and crops to catch diseases, adjust watering and fertilizer schedules automatically, or determine which produce can be sold on a supermarket shelf versus chopped for use in food production. 

  3. Object recognition in photos - determining what is in a picture without being told (think of the Hotdog, Not-Hotdog app from the show Silicon Valley).


All of these can now happen with no human intervention, dramatically improving processing speed for a variety of use cases.

As humans, we do this kind of categorization all the time without thinking. Having seen enough birds, we can tell you whether we are looking at a bird, even if we’ve never seen the species before. The more we train specific recognition, the more granular we can get, like an ornithologist who can immediately differentiate between similar-looking finches that others would miss. The same is true for machine learning models.
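To make the bird analogy concrete, here is a toy sketch in Python (assuming scikit-learn is available; all measurements and labels are made up for illustration) of a model that labels a new sighting by comparing it to sightings it has already seen:

    # A toy "bird watcher": a nearest-neighbor model labels a new example
    # by comparing it to the labeled examples it has already seen.
    # All measurements below are invented for illustration.
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical [wingspan_cm, weight_g] measurements of past sightings
    sightings = [[22, 14], [24, 15], [25, 17], [200, 4500], [210, 5000]]
    labels = ["finch", "finch", "finch", "eagle", "eagle"]

    birder = KNeighborsClassifier(n_neighbors=3).fit(sightings, labels)

    # A bird we have never seen before is still recognizably finch-like
    print(birder.predict([[23, 16]]))  # -> ['finch']

The more labeled sightings the model has seen, the finer the distinctions it can make, just like the ornithologist.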

Awesome. What does this have to do with leadership?

As leaders, we have to help teams accomplish goals. In the corporate sense, it may be a project, a business, or any other initiative that requires collaboration and planning. As the leader, we want to help our people accomplish what needs to be done on their own.

From the leadership perspective, any job is more successful - and significantly less work - when we can trust our team to autonomously solve as many problems as they can. The more we help develop our teams to understand situations for themselves, the more likely they are to make good decisions without help in the future, which frees up some valuable time for everyone and avoids a crisis of confidence when the leader is unavailable to help (like, say, wanting to take a vacation).

As with leading humans, the process of training computers (which ultimately “understand” the world through zeros and ones) to assess complex situations is, to say the least, a bit complicated. There are dozens of techniques and tools, all with various trade-offs, that can be used separately or in tandem, all with the goal of creating accurate prediction models that operate without our continued instruction.

An Applied Example

One common problem in machine learning is creating a “decision boundary”: given a set of new inputs, can a machine “decide” what to do next, whether that means classifying the inputs or predicting what will happen next? 

For example, given the height and trunk width of many ponderosa pines and lodgepole pines, could you accurately predict the species of a new pine tree from its height and width alone? 

Plotted on a graph, the measurements for the two species would cluster in separate regions, with a dotted line between them representing a decision boundary: inputs on one side are classified one way, and inputs on the other side another way, as in the sketch below.
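Here is a minimal sketch of learning such a boundary (assuming scikit-learn; the tree measurements are invented for illustration):

    # Learn a linear decision boundary between two pine species,
    # then classify a newly measured tree. Numbers are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical [height_m, trunk_width_cm] for trees of known species
    X = np.array([
        [30, 60], [35, 70], [40, 80],   # ponderosa pines (label 1)
        [20, 30], [24, 35], [27, 40],   # lodgepole pines (label 0)
    ])
    y = np.array([1, 1, 1, 0, 0, 0])

    model = LogisticRegression().fit(X, y)  # fits the "dotted line"

    # Predict the species of a new tree from its measurements
    new_tree = np.array([[26, 38]])
    print("ponderosa" if model.predict(new_tree)[0] == 1 else "lodgepole")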

Translating this to a human/business context, imagine someone on your team has discovered a new tool they would like to purchase to make their job more efficient. They want to know whether the executive team will be willing to make this purchase. 

As the leader, you have learned through experience that the executive team will want to know how much time and money will be saved so they can make an accurate assessment of whether the tool is worth it. 

On an X/Y plot, we can assign X to be the price of the tool and Y to be the amount of time/money saved. As the leader, we are being asked to give the over/under on whether this particular purchase will be endorsed by the executives, and we have a set of choices for how we would like to proceed. 

From your experience, you’ve seen the executive team make many of these decisions and mentally you have the decisions plotted out like so:

[Figure: past executive decisions plotted by tool price (X) and time/money saved (Y), with approvals and rejections falling in distinct regions]

You know that the executives run a very tight budget and won’t even consider a tool until there are significant time savings, and even then, beyond a certain price they will have sticker shock and are unlikely to move forward. 

What is the most useful way to help your team understand how they might make these decisions in the future?

The One-Off Answer

One option is to tell your team member, flat out, what you think the answer will be: yes or no. Maybe you’ll base this on predictions from your past experience, or maybe you just know the answer will be “No” regardless because the budget is extremely tight this year. Either way, you don’t explain any of these factors to your teammate. 

In this situation, you have answered their question but given them no ability to make similar decisions in the future. They may not understand that this decision is an outlier, or they may assume they know why it was a yes or a no and form their own, potentially very skewed, prediction. 

From one data point (or even only a few data points) the potential decision boundaries are nearly infinite:

[Figure: a single data point with many different possible decision boundaries drawn through it]

In machine learning, this would be considered an “underfitting” model, which is to say it does not do a good job of capturing the available data and is unlikely to accurately predict the outcome we are looking for.
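A toy illustration of the problem (all numbers invented): with only one observation, wildly different boundaries all “fit” the data perfectly, yet disagree about the next request:

    # One data point: a $500 tool saving 10 hrs/month was rejected (0).
    import numpy as np

    known_request, known_answer = np.array([500, 10]), 0

    # Several arbitrary linear boundaries; each predicts "yes" (1) when
    # w . x > 0. Every one of them classifies the known point correctly...
    boundaries = [(1, -50), (1, -60), (-1, 10), (0.01, -1)]

    # ...but they disagree about a cheap tool with large savings.
    new_request = np.array([100, 40])
    for w in boundaries:
        fits = (np.dot(w, known_request) > 0) == bool(known_answer)
        verdict = "yes" if np.dot(w, new_request) > 0 else "no"
        print(f"w={w}: fits known data: {fits}, new request: {verdict}")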

As leaders, we have either created a situation where the person must return to us for every future answer, with no learning or growth on their part, turning us into a bottleneck for such decisions, or we have not provided enough guidance and the team member will make a likely unhelpful (and potentially costly) guess in the future.

Giving Too Much Context

On the other end of the spectrum, we could choose to tell the team member every decision we have seen the executives make. If we provide them with all of our context, surely they will have the information to come up with their own model and make their own predictions in the future, right?

Without testing and explanation of context, we may be giving our teammate a prediction model that looks something like this:

[Figure: a decision boundary contorted to pass around every individual past decision]

Now it may look like the executives almost follow a trend, yet at certain combinations of cost and efficiency they seem to accept purchases that clearly don’t fit the trend, or reject ones that do.

In machine learning, this is called “overfitting”: the model we have is highly accurate at predicting the known data, but is so tuned to the nuances of that existing data that it is likely to fail at accurately predicting a new scenario. The more mistakes we find, the less likely we are to trust the model and the more likely we are to avoid using it entirely.
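Here is a hedged sketch of overfitting in action (assuming scikit-learn, with synthetic “executive decisions” generated for illustration): an unconstrained decision tree memorizes every quirk in the past data but predicts new decisions worse than a simpler, depth-limited tree:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic decisions: approve (1) roughly when savings outweigh price,
    # with ~10% quirky exceptions like the ones in the plot above.
    X = rng.uniform([0, 0], [1000, 100], size=(200, 2))  # [price, hrs saved]
    y = (X[:, 1] * 20 > X[:, 0]).astype(int)
    quirks = rng.random(200) < 0.1
    y[quirks] = 1 - y[quirks]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for depth in [None, 3]:  # None lets the tree memorize every point
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X_train, y_train)
        print(f"max_depth={depth}: "
              f"train={tree.score(X_train, y_train):.2f}, "
              f"test={tree.score(X_test, y_test):.2f}")

The fully grown tree scores perfectly on the decisions it has already seen and noticeably worse on the ones it hasn’t; the depth-limited tree gives up some accuracy on the past to predict the future better.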

If we do this as a leader, our teams will likely find us unhelpful (or micromanaging) and avoid going to us in the future, which can quickly create resentment from both sides as well as a communication gap.

Finding the Balance

In leadership and machine learning, we will find the greatest benefits when we provide a model with a good level of “fit.” We can impart wisdom that allows our teams to understand and make good decisions in the future, knowing that the wisdom will not fit every situation and we should have grace for ourselves and our teammates when future outliers occur. 
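In machine learning, the standard way to find that balance is to hold some data out and pick the level of model complexity that predicts best on decisions it has not seen. A sketch, reusing the same synthetic-data idea as above (assuming scikit-learn):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform([0, 0], [1000, 100], size=(200, 2))  # [price, hrs saved]
    y = (X[:, 1] * 20 > X[:, 0]).astype(int)
    quirks = rng.random(200) < 0.1
    y[quirks] = 1 - y[quirks]  # ~10% quirky exceptions

    # Cross-validation scores each complexity on data it was not trained
    # on, so the winner is the "good fit", not the best memorizer.
    for depth in [1, 3, 5, None]:
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        score = cross_val_score(tree, X, y, cv=5).mean()
        print(f"max_depth={depth}: mean held-out accuracy = {score:.2f}")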

We want to provide enough information and guiding questions to help our team become self-sufficient. 

This autonomy prevents the team from burning out due to micromanagement, and helps inspire trust in the leader and the team. 
