Aiphabet

The Movie Night Decision-Maker Example

Let's explore a complete example of deciding whether to watch a movie based on two factors:

  • Review Score (how well-rated the movie is)
  • Friend Interest (how excited your friends are to see it)

Step 1: Set Up the Problem

Let's say we have data from past movie nights:

Movie   Review Score   Friend Interest   Decision
A       0.8            0.4               1 (Watched)
B       0.1            0.7               1 (Watched)
C       -0.5           -0.3              -1 (Skipped)
D       0.6            -0.8              -1 (Skipped)
  • Review Score ranges from -1 to 1
  • Friend Interest ranges from -1 to 1
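As a quick sketch, this dataset can be written out in Python (the `movies` structure and its layout are my own choice, not part of the lesson):

```python
# Each entry: (review_score, friend_interest), decision (+1 = watched, -1 = skipped)
movies = {
    "A": ((0.8, 0.4), 1),
    "B": ((0.1, 0.7), 1),
    "C": ((-0.5, -0.3), -1),
    "D": ((0.6, -0.8), -1),
}
```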

Step 2: Start with Zero Weights and Bias

Let's start with:

  • Weight for Review Score (w₁) = 0.0
  • Weight for Friend Interest (w₂) = 0.0
  • Bias (b) = 0.0
  • Learning rate = 1.0

Step 3: Training Iterations

First training example (Movie A):

  1. Calculate weighted sum: (0.0 × 0.8) + (0.0 × 0.4) + 0.0 = 0.0
  2. Since 0.0 is not greater than 0, our prediction is -1 (Skip)
  3. Actual decision was 1 (Watch)
  4. Prediction was wrong! We update each weight using the perceptron rule $$w_{new} = w_{old} + \alpha y x$$ where α is the learning rate, y is the actual label, and x is the input value:
  • New w₁ = 0.0 + (1.0 × 1 × 0.8) = 0.0 + 0.8 = 0.8
  • New w₂ = 0.0 + (1.0 × 1 × 0.4) = 0.0 + 0.4 = 0.4
  5. Updating the bias is very similar; we just always use 1 in place of x:
  • New bias = 0.0 + (1.0 × 1 × 1) = 1.0
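This first update can be sketched in Python; `predict` and `update` are hypothetical helper names, assuming the step rule (output +1 when the weighted sum is positive, -1 otherwise):

```python
def predict(w, b, x):
    # Step activation: +1 (Watch) if the weighted sum is positive, else -1 (Skip)
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else -1

def update(w, b, x, y, lr=1.0):
    # Perceptron rule: w_new = w_old + lr * y * x; the bias uses 1 in place of x
    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    b = b + lr * y * 1
    return w, b

# Movie A: weights start at zero, prediction is -1 (Skip) but the label is +1
w, b = [0.0, 0.0], 0.0
x, y = (0.8, 0.4), 1
if predict(w, b, x) != y:
    w, b = update(w, b, x, y)  # -> w = [0.8, 0.4], b = 1.0
```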

Let's see what our decision boundary looks like so far!

  1. w₁x + w₂y + b = 0
  2. 0.8x + 0.4y + 1 = 0
  3. y = -2x - 2.5
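Solving w₁x + w₂y + b = 0 for y gives slope -w₁/w₂ and intercept -b/w₂; a small helper (my own, for illustration) confirms the numbers:

```python
def boundary_line(w, b):
    # Rewrite w1*x + w2*y + b = 0 as y = slope * x + intercept (requires w2 != 0)
    slope = -w[0] / w[1]
    intercept = -b / w[1]
    return slope, intercept

slope, intercept = boundary_line([0.8, 0.4], 1.0)  # -> (-2.0, -2.5)
```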


Second training example (Movie B):

  1. Calculate weighted sum: (0.8 × 0.1) + (0.4 × 0.7) + 1.0 = 0.08 + 0.28 + 1.0 = 1.36
  2. Since 1.36 > 0, our prediction is 1 (Watch)
  3. Actual decision was 1 (Watch)
  4. Prediction was correct, so no weight updates needed

Third training example (Movie C):

  1. Calculate weighted sum: (0.8 × -0.5) + (0.4 × -0.3) + 1.0 = -0.4 - 0.12 + 1.0 = 0.48
  2. Since 0.48 > 0, our prediction is 1 (Watch)
  3. Actual decision was -1 (Skip)
  4. Prediction was wrong! We need to update the weights
  • New w₁ = 0.8 + (1.0 × -1 × -0.5) = 0.8 + 0.5 = 1.3
  • New w₂ = 0.4 + (1.0 × -1 × -0.3) = 0.4 + 0.3 = 0.7
  • New bias = 1.0 + (1.0 × -1 × 1) = 0.0

Let's see what our updated decision boundary looks like so far!

  1. w₁x + w₂y + b = 0
  2. 1.3x + 0.7y + 0 = 0
  3. y ≈ -1.86x


Fourth training example (Movie D):

  1. Calculate weighted sum: (1.3 × 0.6) + (0.7 × -0.8) + 0.0 = 0.78 - 0.56 + 0.0 = 0.22
  2. Since 0.22 > 0, our prediction is 1 (Watch)
  3. Actual decision was -1 (Skip)
  4. Prediction was wrong! We need to update the weights
  • New w₁ = 1.3 + (1.0 × -1 × 0.6) = 1.3 - 0.6 = 0.7
  • New w₂ = 0.7 + (1.0 × -1 × -0.8) = 0.7 + 0.8 = 1.5
  • New bias = 0.0 + (1.0 × -1 × 1) = -1.0

Step 4: Continue Training

We would continue this process, going through all examples again with our updated weights (0.7, 1.5, -1.0) and repeating until predictions stabilize.
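The whole pass can be condensed into a short loop (a minimal sketch; `train_epoch` is my own name, assuming a learning rate of 1.0 and the same ±1 step rule used above):

```python
def train_epoch(data, w, b, lr=1.0):
    # One pass of the perceptron rule: update only when the prediction is wrong
    for x, y in data:
        s = sum(wi * xi for wi, xi in zip(w, x)) + b
        pred = 1 if s > 0 else -1
        if pred != y:
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b

data = [((0.8, 0.4), 1), ((0.1, 0.7), 1),
        ((-0.5, -0.3), -1), ((0.6, -0.8), -1)]
w, b = train_epoch(data, [0.0, 0.0], 0.0)
# w is approximately [0.7, 1.5] and b is -1.0, matching the hand calculation
```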

Let's check if our current weights correctly classify all examples:

  • Movie A: (0.7 × 0.8) + (1.5 × 0.4) + (-1.0) = 0.56 + 0.6 - 1.0 = 0.16 > 0 → Watch ✓
  • Movie B: (0.7 × 0.1) + (1.5 × 0.7) + (-1.0) = 0.07 + 1.05 - 1.0 = 0.12 > 0 → Watch ✓
  • Movie C: (0.7 × -0.5) + (1.5 × -0.3) + (-1.0) = -0.35 - 0.45 - 1.0 = -1.8 < 0 → Skip ✓
  • Movie D: (0.7 × 0.6) + (1.5 × -0.8) + (-1.0) = 0.42 - 1.2 - 1.0 = -1.78 < 0 → Skip ✓

Great! After just one pass through all examples, our perceptron has already learned to correctly classify all our movie data!
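The same check can be run in code; `predict` here is a hypothetical helper applying the learned weights:

```python
w, b = [0.7, 1.5], -1.0  # final weights and bias from the pass above

def predict(x):
    # +1 (Watch) if the weighted sum is positive, else -1 (Skip)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

examples = [((0.8, 0.4), 1), ((0.1, 0.7), 1),
            ((-0.5, -0.3), -1), ((0.6, -0.8), -1)]
all_correct = all(predict(x) == y for x, y in examples)  # -> True
```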

Step 5: The Final Decision Boundary

When a perceptron learns, it creates what's called a "decision boundary" - a line (or plane in higher dimensions) that separates the two categories.


With our final weights, our decision rule becomes:

  • Calculate: 0.7 × (Review Score) + 1.5 × (Friend Interest) - 1.0
  • If result > 0, watch the movie; otherwise, skip it

The final decision boundary is:

  1. 0.7x + 1.5y - 1 = 0
  2. y ≈ -0.47x + 0.67
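Put together, the learned rule fits in one function (a sketch; `should_watch` is my own name, not from the lesson):

```python
def should_watch(review_score, friend_interest):
    # Watch when 0.7 * review + 1.5 * interest - 1.0 is positive
    return 0.7 * review_score + 1.5 * friend_interest - 1.0 > 0

should_watch(0.9, 0.9)    # -> True  (well reviewed and friends are keen)
should_watch(-0.5, -0.5)  # -> False (poorly reviewed and no interest)
```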