What is a Neural Network?
The brain-inspired architecture behind modern AI. How layers of simple math create something that feels intelligent.
5 min read
When people hear "neural network," they imagine something like a digital brain. Neurons firing. Synapses connecting. Thoughts emerging.
The reality is both simpler and stranger.
Not actually a brain
Let's get this out of the way: neural networks are not brains.
They're called "neural" because they were loosely inspired by how neurons connect in biological brains. But the similarity ends there.
A neural network is just math. Specifically: lots of numbers, multiplied together, passed through simple functions, layer after layer.
The building blocks
A neural network is made of layers. Each layer contains nodes (sometimes called neurons).
[Diagram: INPUT LAYER (image pixels) → HIDDEN LAYERS (finding patterns → more patterns: "the magic") → OUTPUT LAYER (final answer: "cat" or "dog")]
Input layer: Where data enters. For an image, each pixel's brightness becomes a number.
Hidden layers: The middle layers where the real work happens. "Hidden" because we don't directly see what they're doing.
Output layer: The answer. For a cat-vs-dog classifier, it might output two numbers: probability of cat, probability of dog.
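As a rough sketch of what those numbers look like in code (the image size and the output values here are made up for illustration):

```python
import numpy as np

# A tiny 8x8 grayscale image: each pixel's brightness is a number between 0 and 1.
image = np.random.rand(8, 8)

# Input layer: flatten the pixels into one long vector of 64 numbers.
input_layer = image.flatten()          # shape: (64,)

# Output layer: two numbers, e.g. probability of "cat" and of "dog".
# (These values are placeholders; a real network computes them.)
output_layer = np.array([0.85, 0.15])  # P(cat) = 0.85, P(dog) = 0.15
```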
What each node does
Every node does the same simple thing:
- Take inputs from the previous layer
- Multiply each input by a "weight" (a number)
- Add them all up
- Apply a simple function (to add non-linearity)
- Send the result to the next layer
That's it. Really.
[Diagram: what one "neuron" does — input₁ ×0.7, input₂ ×0.2, input₃ ×0.5 → SUM → function → output. Example: (0.8 × 0.7) + (0.3 × 0.2) + (0.6 × 0.5) = 0.92; apply function → 0.71]
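Here is the same neuron as a few lines of Python. The weights (0.7, 0.2, 0.5) are the made-up values from the diagram, and the sigmoid is just one common choice of "simple function"; real networks learn their weights and may use other activations.

```python
import numpy as np

def sigmoid(x):
    """Squash any number into the range (0, 1) — one common 'simple function'."""
    return 1 / (1 + np.exp(-x))

inputs  = np.array([0.8, 0.3, 0.6])   # values coming from the previous layer
weights = np.array([0.7, 0.2, 0.5])   # learned numbers (here, made up)

weighted_sum = np.dot(inputs, weights)  # (0.8*0.7) + (0.3*0.2) + (0.6*0.5) = 0.92
output = sigmoid(weighted_sum)          # ≈ 0.71, sent on to the next layer
print(weighted_sum, output)
```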
The magic is in the weights. A large neural network might have billions of weights. Training is the process of finding the right values for all of them.
Why layers matter
A single layer can only learn simple patterns. But stack layers together, and something interesting happens.
Layer 1 might learn to detect edges in an image. Layer 2 might combine edges into shapes. Layer 3 might recognize shapes as features (ears, eyes, noses). Layer 4 might recognize combinations of features as objects (cat face, dog face).
Each layer builds on the previous one, learning more abstract concepts.
[Diagram: image recognition, what each layer "sees" — Layer 1: edges and lines; Layer 2: simple shapes; Layer 3: facial features (eyes, noses, ears); Layer 4: whole faces (cat, dog). Each layer = a more abstract understanding]
This is why "deep" learning is deep. More layers = more abstraction = more capability.
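A minimal sketch of that stacking, with made-up layer sizes: each layer is just the neuron computation from earlier repeated for many nodes at once, and the output of one layer becomes the input to the next.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)

# Made-up layer sizes: 64 input pixels -> 32 -> 16 -> 2 outputs.
sizes = [64, 32, 16, 2]

# One weight matrix (and bias vector) per layer, randomly initialised — untrained.
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases  = [np.zeros(n) for n in sizes[1:]]

x = rng.random(64)  # a flattened 8x8 "image"

# Forward pass: each layer transforms the previous layer's output.
for W, b in zip(weights, biases):
    x = sigmoid(x @ W + b)

print(x)  # two numbers — "cat-ness" and "dog-ness" — meaningless until training
```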
How training works
Training a neural network means finding the right weights. Here's the process:
1. Forward pass: Feed an example through the network, get a prediction.
2. Calculate loss: How wrong was the prediction? This is a number called "loss."
3. Backward pass: Figure out which weights contributed most to the error. (This uses calculus: backpropagation applies the chain rule to get a gradient for every weight.)
4. Update weights: Nudge each weight slightly in the direction that reduces error. (This step is "gradient descent.")
5. Repeat: Do this millions of times with millions of examples.
Slowly, the weights converge to values that make good predictions.
[Diagram: the training loop — Training Example → Make Prediction → Calculate Error → Adjust Weights → back to the next example. Repeat millions of times]
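Here is the whole loop as a sketch, on a toy problem small enough to follow by hand. The single weight, the data, and the learning rate are all made up for illustration; real training does the same thing with billions of weights.

```python
# Toy training loop: learn y = 2 * x with a single weight, using gradient descent.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer)
weight = 0.0             # start with a wrong guess
learning_rate = 0.05

for epoch in range(200):                          # "repeat" (here 200 passes, not millions)
    for x, target in examples:
        prediction = weight * x                   # 1. forward pass
        loss = (prediction - target) ** 2         # 2. calculate loss (squared error)
        gradient = 2 * (prediction - target) * x  # 3. backward pass: d(loss)/d(weight)
        weight -= learning_rate * gradient        # 4. update weight (gradient descent)

print(weight)  # converges to roughly 2.0
```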
The numbers are staggering
GPT-3 has 175 billion parameters (weights). GPT-4 likely has over a trillion.
Training GPT-3 required processing 300 billion words. Training modern image models requires millions of images.
This is why AI progress required massive compute. These models need to see a lot of data and adjust a lot of weights.
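To get a feel for how parameter counts climb, here is a quick count for a small, fully connected network with arbitrary layer sizes (the sizes are an assumption for illustration, not any real model's):

```python
# Count weights and biases in a fully connected network with made-up layer sizes.
sizes = [784, 512, 512, 2]   # e.g. a 28x28 image in, two classes out

params = sum(m * n + n for m, n in zip(sizes[:-1], sizes[1:]))
print(params)  # 665,602 — and this is a tiny network by modern standards
```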
What makes it work
A few key innovations made modern neural networks possible:
Activation functions add non-linearity. Without them, stacking layers would be pointless (multiple linear layers collapse into one).
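A quick way to see that "collapse" claim: without an activation function in between, two weight matrices multiply into one, so the stacked layers can't do anything a single layer couldn't. A small numerical check, with random matrices standing in for learned weights:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))
x = rng.standard_normal(4)

two_linear_layers = (x @ W1) @ W2   # layer 1 then layer 2, no activation in between
one_linear_layer  = x @ (W1 @ W2)   # a single layer with the combined weights

print(np.allclose(two_linear_layers, one_linear_layer))  # True — the layers collapsed
```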
Backpropagation lets us efficiently calculate which weights to adjust. Invented in the 1980s, it made deep networks trainable.
Transformers (2017) changed how layers connect for language tasks. This architecture powers ChatGPT, Claude, and basically all modern LLMs.
Scale turned out to matter a lot. Bigger models, more data, more compute = better results.
The weird part
Here's what's strange: we design the architecture, but we don't program the behavior.
The weights are found by the training process. We can look at them, but they're just billions of numbers. We can't easily understand why the network makes specific decisions.
It's a learned black box. It works. We're not entirely sure how.
Neural networks are the engine. Training is the fuel. But how do you actually talk to these models effectively? Next: What is Prompt Engineering?