Let's explore a complete example of deciding whether to watch a movie based on two factors:
- Review Score (how well-rated the movie is)
- Friend Interest (how excited your friends are to see it)
Step 1: Set Up the Problem
Let's say we have data from past movie nights:
| Movie | Review Score | Friend Interest | Decision |
|-------|--------------|-----------------|----------|
| A | 0.8 | 0.4 | 1 (Watched) |
| B | 0.1 | 0.7 | 1 (Watched) |
| C | -0.5 | -0.3 | -1 (Skipped) |
| D | 0.6 | -0.8 | -1 (Skipped) |
- Review Score ranges from -1 to 1
- Friend Interest ranges from -1 to 1
Step 2: Start with Zero Weights and Bias
Let's start with:
- Weight for Review Score (w₁) = 0.0
- Weight for Friend Interest (w₂) = 0.0
- Bias (b) = 0.0
- Learning rate = 1.0
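The setup above can be sketched in Python (a minimal sketch; the variable names are my own, not from the text):

```python
# Training data: (review_score, friend_interest, decision)
# Decision is +1 for Watched, -1 for Skipped
data = [
    ( 0.8,  0.4,  1),  # Movie A
    ( 0.1,  0.7,  1),  # Movie B
    (-0.5, -0.3, -1),  # Movie C
    ( 0.6, -0.8, -1),  # Movie D
]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias start at zero
learning_rate = 1.0
```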
Step 3: Training Iterations
First training example (Movie A):
- Calculate weighted sum: (0.0×0.8)+(0.0×0.4)+0.0=0.0
- Since 0 is not greater than 0, our prediction is -1 (Skip)
- Actual decision was 1 (Watch)
- Prediction was wrong! We update each weight using the perceptron rule
$$w_{new} = w_{old} + \alpha y x$$
  where α is the learning rate, y is the true label, and x is the corresponding input value.
- New w1=0.0+(1.0×1×0.8)=0.0+0.8=0.8
- New w2=0.0+(1.0×1×0.4)=0.0+0.4=0.4
- Updating the bias is very similar; we just use 1 in place of the input value x:
- New bias=0.0+(1.0×1×1)=1.0
Let's see what our decision boundary looks like so far!
- w1x+w2y+1=0
- 0.8x+0.4y+1=0
- y=−2x−2.5
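The first update, and the boundary it produces, can be checked with a short Python sketch (the variable names are my own):

```python
learning_rate = 1.0
w1, w2, b = 0.0, 0.0, 0.0
x1, x2, y = 0.8, 0.4, 1                     # Movie A: inputs and true label (Watch)

weighted_sum = w1 * x1 + w2 * x2 + b        # 0.0
prediction = 1 if weighted_sum > 0 else -1  # -1 (Skip): wrong

if prediction != y:
    # Perceptron rule: w_new = w_old + alpha * y * x (the bias uses x = 1)
    w1 += learning_rate * y * x1
    w2 += learning_rate * y * x2
    b  += learning_rate * y

# Rearranging w1*x + w2*y + b = 0 gives the boundary line y = -(w1/w2)x - b/w2
slope = -w1 / w2      # -2.0
intercept = -b / w2   # -2.5
```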

Second training example (Movie B):
- Calculate weighted sum: (0.8×0.1)+(0.4×0.7)+1.0=0.08+0.28+1.0=1.36
- Since 1.36 > 0, our prediction is 1 (Watch)
- Actual decision was 1 (Watch)
- Prediction was correct, so no weight updates needed
Third training example (Movie C):
- Calculate weighted sum: (0.8×−0.5)+(0.4×−0.3)+1.0=−0.4−0.12+1.0=0.48
- Since 0.48 > 0, our prediction is 1 (Watch)
- Actual decision was -1 (Skip)
- Prediction was wrong! We need to update weights
- New w1=0.8+(1.0×−1×−0.5)=0.8+0.5=1.3
- New w2=0.4+(1.0×−1×−0.3)=0.4+0.3=0.7
- New bias=1.0+(1.0×−1×1)=0.0
Let's see what our updated decision boundary looks like so far!
- w1x+w2y+0=0
- 1.3x+0.7y+0=0
- y≈−1.86x

Fourth training example (Movie D):
- Calculate weighted sum: (1.3×0.6)+(0.7×−0.8)+0.0=0.78−0.56+0.0=0.22
- Since 0.22 > 0, our prediction is 1 (Watch)
- Actual decision was -1 (Skip)
- Prediction was wrong! We need to update weights
- New w1=1.3+(1.0×−1×0.6)=1.3−0.6=0.7
- New w2=0.7+(1.0×−1×−0.8)=0.7+0.8=1.5
- New bias=0.0+(1.0×−1×1)=−1.0
Step 4: Continue Training
We would continue this process, cycling through all examples again with our updated weights (0.7, 1.5, -1.0) until the perceptron makes no mistakes on a full pass.
Let's check if our current weights correctly classify all examples:
- Movie A: (0.7×0.8)+(1.5×0.4)+(−1.0)=0.56+0.6−1.0=0.16>0 → Watch ✓
- Movie B: (0.7×0.1)+(1.5×0.7)+(−1.0)=0.07+1.05−1.0=0.12>0 → Watch ✓
- Movie C: (0.7×−0.5)+(1.5×−0.3)+(−1.0)=−0.35−0.45−1.0=−1.8<0 → Skip ✓
- Movie D: (0.7×0.6)+(1.5×−0.8)+(−1.0)=0.42−1.2−1.0=−1.78<0 → Skip ✓
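The whole pass can be reproduced in a few lines of Python (a sketch; `predict` is an illustrative helper name, not from the text):

```python
def predict(w1, w2, b, x1, x2):
    # Watch (+1) if the weighted sum is positive, otherwise Skip (-1)
    return 1 if w1 * x1 + w2 * x2 + b > 0 else -1

# (review_score, friend_interest, decision) for Movies A-D
data = [(0.8, 0.4, 1), (0.1, 0.7, 1), (-0.5, -0.3, -1), (0.6, -0.8, -1)]
w1 = w2 = b = 0.0
lr = 1.0

for x1, x2, y in data:                    # one pass over the data
    if predict(w1, w2, b, x1, x2) != y:   # update only on mistakes
        w1 += lr * y * x1
        w2 += lr * y * x2
        b  += lr * y

# After one pass: w1 = 0.7, w2 = 1.5, b = -1.0, and every movie is
# classified correctly, so a second pass would change nothing.
```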
Great! After just one pass through all examples, our perceptron has already learned to correctly classify all our movie data!
Step 5: The Final Decision Boundary
When a perceptron learns, it creates what's called a "decision boundary" - a line (or a hyperplane in higher dimensions) that separates the two categories.

With our final weights, our decision rule becomes:
- Calculate: 0.7 × (Review Score) + 1.5 × (Friend Interest) - 1.0
- If result > 0, watch the movie; otherwise, skip it
The final decision boundary is:
- 0.7x+1.5y−1=0
- y≈−0.47x+0.67
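The final rule can be put into code (a sketch; `should_watch` is an illustrative name):

```python
w1, w2, b = 0.7, 1.5, -1.0  # final learned weights and bias

def should_watch(review_score, friend_interest):
    # Watch when 0.7*review + 1.5*interest - 1.0 > 0
    return w1 * review_score + w2 * friend_interest + b > 0

# Boundary line: 0.7x + 1.5y - 1 = 0  ->  y = -(w1/w2)x - b/w2
slope = -w1 / w2      # about -0.47
intercept = -b / w2   # about 0.67
```

Note how much more the rule weighs Friend Interest (1.5) than Review Score (0.7): the training data rewarded enthusiasm from friends more than good reviews.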