Chapter 1: The Neuron
Lesson 1/5 — What is a Neuron?
Step 1/5

Playground

[Interactive sliders for adjusting the neuron's inputs, weights, and bias]

A neuron is the fundamental building block of every neural network. It takes multiple inputs, multiplies each by a weight, sums them up, adds a bias, and passes the result through an activation function to produce an output.
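That whole pipeline fits in a few lines of Python. This is a minimal sketch; the input, weight, and bias values are illustrative, not taken from the playground above:

```python
import math

def sigmoid(z):
    # squish any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    # weighted sum of inputs, plus bias, passed through the activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

print(neuron([1.0, 0.5, -0.3], [0.7, -0.4, 0.9], 0.1))  # → about 0.58
```

Every concept in the sections below (inputs, weights, bias, weighted sum, activation, output) maps directly onto one line of this function.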

Think of weights as "importance knobs" — they control how much each input matters to the final decision.

Biological Inspiration: Artificial neurons are loosely inspired by biological neurons in your brain. A biological neuron receives electrical signals through dendrites, processes them in the cell body, and fires an output through its axon if the combined signal is strong enough.

Inputs: A neuron receives multiple numerical inputs. These could be raw data (like pixel values in an image) or outputs from other neurons in a previous layer. Each input is just a number.

Weights: Every input has an associated weight — a number that determines how important that input is. A large positive weight means the input strongly influences the output. A negative weight means the input pushes the output in the opposite direction.

Bias: The bias is an extra number added after all the weighted inputs are summed. Think of it as a threshold control — it shifts the activation point, making the neuron more or less likely to fire regardless of its inputs.
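One way to see the bias acting as a threshold control is to hold the weighted sum fixed and vary only the bias. The numbers here are arbitrary:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

weighted_sum = 0.5  # fixed, arbitrary value

for bias in (-2.0, 0.0, 2.0):
    out = sigmoid(weighted_sum + bias)
    print(f"bias={bias:+.1f} -> output={out:.3f}")
# a large negative bias suppresses the output toward 0,
# a large positive bias pushes it toward 1
```

Same inputs, same weights, very different outputs: the bias alone shifts how easily the neuron "fires."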

Weighted Sum: The neuron multiplies each input by its corresponding weight, then adds all the products together. This is a dot product: x₁×w₁ + x₂×w₂ + x₃×w₃. The result tells us the raw 'strength' of the combined signal.
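With concrete (made-up) numbers, the weighted sum is exactly that dot product:

```python
x = [1.0, 0.5, -0.3]   # inputs (illustrative)
w = [0.7, -0.4, 0.9]   # weights (illustrative)

# x1*w1 + x2*w2 + x3*w3
weighted_sum = sum(xi * wi for xi, wi in zip(x, w))
print(weighted_sum)  # 0.7 - 0.2 - 0.27 = 0.23 (up to float rounding)
```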

Activation Function: The weighted sum plus bias is passed through an activation function (like sigmoid). This squishes the output into a specific range and introduces non-linearity — without it, stacking neurons would be pointless because linear functions of linear functions are still linear.
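To see why the non-linearity matters, note what happens when you compose two purely linear "neurons" (no activation): the result collapses to a single linear function. The coefficients below are arbitrary:

```python
# two linear functions, standing in for neurons with no activation
f = lambda x: 2 * x + 1    # f(x) = 2x + 1
g = lambda x: -3 * x + 4   # g(x) = -3x + 4

# their composition is still linear: g(f(x)) = -3(2x + 1) + 4 = -6x + 1
composed = lambda x: g(f(x))
print(composed(0.0), composed(1.0), composed(2.0))  # → 1.0 -5.0 -11.0
```

No matter how many such layers you stack, you only ever get another straight line; the activation function is what breaks this collapse.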

Output: The final output of the neuron is a single number. This output can be fed as input to neurons in the next layer, creating a chain of computations that builds increasingly complex representations.
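Chaining neurons can be sketched by feeding the outputs of one layer in as the inputs of the next. All weight and bias values here are illustrative:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)

x = [0.5, -1.0]                           # raw inputs
h1 = neuron(x, [0.8, 0.2], 0.0)           # first-layer neuron 1
h2 = neuron(x, [-0.5, 0.6], 0.1)          # first-layer neuron 2
y = neuron([h1, h2], [1.2, -0.7], 0.3)    # second layer consumes h1, h2
print(y)
```

Each neuron's single output number becomes just another input further down the chain.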

Parameters vs Hyperparameters: Weights and biases are parameters — they are learned during training. The number of inputs, the choice of activation function, and the learning rate are hyperparameters — they are set by the designer before training begins.

Single Neuron Limitations: A single neuron can only learn linear decision boundaries — it can separate data with a straight line (a hyperplane, in higher dimensions). It cannot learn XOR or other non-linear patterns. That's why we need networks of many neurons.
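You can check the XOR claim by brute force: with a step activation, no combination of two weights and a bias from a coarse grid classifies all four XOR cases correctly. This is a sketch rather than a proof, though the result does hold for any linear boundary:

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def step_neuron(x1, x2, w1, w2, b):
    # single neuron with a step activation
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

grid = [i / 2 for i in range(-8, 9)]  # -4.0 .. 4.0 in steps of 0.5
found = any(
    all(step_neuron(x1, x2, w1, w2, b) == y for (x1, x2), y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(found)  # False: no single linear boundary solves XOR
```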

Example
Can a Single Neuron Read a Digit?
Watch a single neuron try to recognize handwritten digits — and see why one neuron isn't enough.

The Perceptron: The simplest neuron model, the perceptron (1958), used a step function as activation — output 1 if the sum exceeds a threshold, 0 otherwise. Modern neurons use smoother activation functions like sigmoid or ReLU.
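A minimal perceptron in this spirit, with the threshold folded into the bias (the numeric values are illustrative):

```python
def perceptron(inputs, weights, bias):
    # classic step activation: fire (1) only if the sum clears zero
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

print(perceptron([1.0, 0.5], [0.7, -0.4], -0.2))  # 0.7 - 0.2 - 0.2 = 0.3 > 0 → 1
```

Swap the step function for sigmoid and you get the smooth, modern neuron described above.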

output = σ(Σ(xᵢ × wᵢ) + b)

Try adjusting the sliders above to build intuition. Notice how changing a weight amplifies or dampens its corresponding input, and how the bias shifts the overall output. This is the foundation everything else builds on.