How Feedback Loops Lead to Rapid Transformation in AI and Life

Hey!

A feedback loop is a powerful mental model that can lead to rapid transformation when it’s exploited. In this newsletter, let’s break down a fundamental concept in AI and machine learning – gradient descent. In doing so, we’ll explore how feedback loops are vital to machine learning and how they can be applied to our everyday lives.

What is Gradient Descent?

Gradient descent is an algorithm that allows ML models to “learn”. Much like when I try to learn something new, an ML model makes a lot of mistakes when it’s first created. To help it improve quickly, we first need a way to quantify those mistakes. Enter the “cost function”. This function returns a value representing how much “error” the model has, and it’s essential for gradient descent.

  • For example, mean absolute error (MAE) is a cost function that takes the average absolute difference between the model’s predictions and the correct values (ex: the model predicts 5 for a data point but the correct answer is 7, so the error is |5-7|=2. Do that for every data point and take the average.).
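The MAE calculation above can be sketched in a few lines of Python (the function name and sample values are just for illustration):

```python
# Mean absolute error (MAE): average absolute difference between
# the model's predictions and the correct values.
def mean_absolute_error(predictions, targets):
    errors = [abs(p - t) for p, t in zip(predictions, targets)]
    return sum(errors) / len(errors)

# The model predicts 5 but the correct answer is 7 -> error |5 - 7| = 2.
print(mean_absolute_error([5, 4, 9], [7, 4, 6]))  # (2 + 0 + 3) / 3 ≈ 1.67
```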


Gradient descent can leverage this cost function to tell the model how to start reducing its error (make fewer mistakes). Imagine graphing the cost function and seeing that it looks like a parabola (U-shape). Since we want the smallest error possible, we want to get to the bottom of the U. Consider a scenario where the model’s error is currently to the right of the bottom of the U. Gradient descent will then calculate the gradient (slope) at that point on the graph and tell the model to take a step down the slope to the left. (“Telling the model” actually means it updates the model’s parameters.) After taking that step, the model is one step closer to the bottom of the U, and therefore it’s a little smarter. It learned!
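A single “step down the slope” can be sketched like this, using a made-up U-shaped cost (w - 3)² whose bottom sits at w = 3 (the cost, starting point, and learning rate are all illustrative assumptions):

```python
# One gradient descent step on a hypothetical U-shaped cost:
# cost(w) = (w - 3) ** 2, which is minimized at w = 3.
def gradient(w):
    return 2 * (w - 3)  # slope of the cost at w

w = 5.0                 # current parameter, to the right of the bottom
learning_rate = 0.1     # how big a step to take
w = w - learning_rate * gradient(w)  # step down the slope, to the left
print(w)  # 4.6 -- one step closer to the minimum at 3
```

The update rule is always the same: new parameter = old parameter minus the learning rate times the slope.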

How Is Gradient Descent a Feedback Loop?

Calculating the cost function, determining the gradient, and updating the model’s parameters is a form of feedback. Gradient descent gets feedback about the model’s mistakes and then helps the model take a step in the correct direction. But why stop there? Gradient descent repeatedly calculates the cost function and uses the feedback to adjust the model’s parameters until the model stops improving from additional feedback. And that’s the loop. This allows the model to rapidly “learn” the optimal parameters.
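Wrapping the single step in a loop gives the full feedback cycle, again on the same made-up cost (w - 3)² (the stopping threshold and learning rate are illustrative assumptions):

```python
# The feedback loop: measure the slope, update the parameter, repeat
# until additional feedback barely changes anything.
def gradient(w):
    return 2 * (w - 3)  # hypothetical cost (w - 3)**2, minimum at w = 3

w = 5.0
learning_rate = 0.1
for step in range(1000):
    grad = gradient(w)       # feedback: how wrong, and in which direction
    if abs(grad) < 1e-6:     # the model has stopped improving
        break
    w -= learning_rate * grad  # act on the feedback
print(round(w, 4))  # converges to ~3.0, the bottom of the U
```

Each pass through the loop is one round of feedback, and the parameter ratchets closer to the optimum every time.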

Examples of Feedback Loops in Everyday Life

  • OODA (Observe, Orient, Decide, Act) Loop: A decision-making process developed by fighter pilot John Boyd to outmaneuver the enemy. Feedback gained during Observation is then used to Orient, Decide, and Act. Whoever can iterate through the loop the fastest wins.
  • Fail Fast: This common advice flips the word “failure” on its head by making it something good to strive for. The faster you fail, the faster you learn how to improve.
  • “If you are not embarrassed by the first version of your product, you’ve launched too late”: This is a quote from venture capitalist Reid Hoffman that encourages people to release products early so they can get feedback as fast as possible.
  • Deliberate Practice: This is a technique coined by Anders Ericsson in his book Peak that outlines how to develop expertise quickly. A critical component of deliberate practice is getting feedback.
  • Growth Mindset: This is a mindset that values effort over outcomes. People with a growth mindset will often outperform people with a fixed mindset because they are more willing to fail and learn from their mistakes. A person with a fixed mindset will be too worried about looking dumb so they will never try and therefore never learn.

My New Content


Michael Hammer

Read all my newsletters here: https://michaelphammer.com/newsletter/

