### Batch normalization overview

Batch normalization is a technique that stabilizes the input distribution of intermediate layers of a deep network during training. This reduces the effect of information morphing and thus helps speed up training, especially during the first few steps. Batch norm is also claimed to reduce the need for dropout thanks to its regularizing effect. As the name suggests, you simply normalize the output of the layer to zero mean and unit variance:

$$\hat{x} = Cov[x]^{-1/2} \left( x - E[x] \right)$$

This requires expensive computation of $Cov[x]$ and its inverse square root $Cov[x]^{-1/2}$, so an approximation over each mini-batch during training is used instead. For a layer with mini-batch input $\mathbb{B} = \{x_{1..m}\}$, besides its original parameters, batch normalization introduces two new learnable parameters: $\gamma$ and $\beta$. The forward pass proceeds as follows:

$$\mu_{\mathbb{B}} = \frac{1}{m} \sum_{i=1}^{m} x_i, \qquad \sigma^2_{\mathbb{B}} = \frac{1}{m} \sum_{i=1}^{m} \left( x_i - \mu_{\mathbb{B}} \right)^2,$$

$$\hat{x}_i = \frac{x_i - \mu_{\mathbb{B}}}{\sqrt{\sigma^2_{\mathbb{B}} + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta.$$
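
In numpy, these four steps amount to a couple of lines. Here is a minimal sketch over a toy mini-batch (the shapes and values are illustrative, not from the actual layer code; the small $\epsilon$ inside the square root guards against division by zero):

```python
import numpy as np

# Toy mini-batch: 4 examples, 3 features (shapes and values illustrative).
x = np.random.randn(4, 3)
gamma, beta, eps = np.ones(3), np.zeros(3), 1e-8

mu = x.mean(axis=0)                    # mini-batch mean, one per feature
var = x.var(axis=0)                    # mini-batch variance, one per feature
x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
y = gamma * x_hat + beta               # scale and shift
```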

#### Goal

The equations above describe how BN handles scalar inputs. Real implementations, however, deal with much higher-dimensional vectors and matrices. For example, the activation volume flowing through a convolutional layer is 4-dimensional, and the normalization is taken over each feature map separately.

Let the volume of interest be $k$-dimensional, with normalization taken over the first $k-1$ dimensions. I will derive a sequence of vectorized operations on $\nabla_y Loss$, the gradient propagated back to this layer, that produces $\nabla_x Loss$ and $\nabla_{\gamma} Loss$. $\nabla_{\beta} Loss$ will not be considered, as it can be cast as a bias-adding operation and does not belong to the atomic view of batch normalization.

#### Notation

For ease of notation, I denote $\nabla_x Loss$ as $\delta x$ and consider the case of one-dimensional feature vectors: $x_{ij}$ is the $j$-th feature of the $i$-th training example in the batch. The design matrix is then $x$ of size $n \times f$, where $n$ is the batch size and $f$ is the number of features. Although we are limiting the analysis to one-dimensional vectors, the code later on applies to an arbitrarily larger number of dimensions.

For example, a 4-dimensional volume of shape $(n, h, w, f)$ can be treated as $(n \cdot h \cdot w, f)$ after reshaping. Thanks to numpy's broadcasting, the reshape is not even necessary.
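
To convince yourself of this, here is a quick numpy check (shapes are arbitrary): normalizing a $(n, h, w, f)$ volume over its first three axes agrees with reshaping it to $(n \cdot h \cdot w, f)$ and normalizing over axis 0.

```python
import numpy as np

n, h, w, f = 2, 3, 3, 5
x = np.random.randn(n, h, w, f)

# Normalize over the first three axes: one statistic per feature map.
m1 = (x - x.mean(axis=(0, 1, 2))) / np.sqrt(x.var(axis=(0, 1, 2)) + 1e-8)

# Same thing after flattening everything but the feature axis.
xr = x.reshape(-1, f)
m2 = (xr - xr.mean(axis=0)) / np.sqrt(xr.var(axis=0) + 1e-8)

assert np.allclose(m1.reshape(-1, f), m2)
```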

### A computational graph view of Batch Normalization

First we break the process into simpler parts. Namely:

$$m_j = \frac{1}{n} \sum_i x_{ij}, \qquad \overline{x}_{ij} = x_{ij} - m_j, \qquad x^2_{ij} = \overline{x}_{ij}^2,$$

$$v_j = \frac{1}{n} \sum_i x^2_{ij}, \qquad x^*_{ij} = \overline{x}_{ij}\, v_j^{-1/2}, \qquad x^{BN}_{ij} = \gamma_j\, x^*_{ij}.$$

Here $x^2$ denotes the element-wise square of the centered input; the small $\epsilon$ added to $v$ for numerical stability in the code is omitted for clarity.

In our setting $\delta x^{BN}$ is available. Gradients flow backwards, so we first consider $\delta x^*$. Each entry $x^*_{ij}$ contributes to the loss only through $x^{BN}_{ij}$, so according to the chain rule:

$$\delta x^*_{ij} = \delta x^{BN}_{ij}\, \gamma_j.$$

By the same token, $\gamma_j$ contributes to the loss through the whole $j$-th column of $x^{BN}$, which settles one of our two goals: $\delta \gamma_j = \sum_i \delta x^{BN}_{ij}\, x^*_{ij}$.

Next, we can do either $v$ or $\overline{x}$. $v$ is simpler, since it contributes to the loss only through $x^*$ as shown in the graph (while $\overline{x}$ also contributes to the loss through $v$). Consider a single entry $v_j$: it contributes to the loss through $x^*_{ij}$ for all $i$, so according to the chain rule:

$$\delta v_j = \sum_i \delta x^*_{ij}\, \overline{x}_{ij} \left( -\frac{1}{2} \right) v_j^{-3/2}, \qquad \text{or matrix-wide:} \qquad \delta v = -\frac{1}{2}\, v^{-3/2} \odot \sum_i \left( \delta x^* \odot \overline{x} \right)_{i,:}$$

Where $\odot$ denotes element-wise multiplication and the power of $-3/2$ is applied element-wise. Moving on to $x^2$, with $v$ being its batch-wise mean, the gradient is easily shown to spread out uniformly from $v$ as follows:

$$\delta x^2_{ij} = \frac{1}{n}\, \delta v_j.$$

We are now ready to calculate $\delta \overline{x}$. Since $\overline{x}$ contributes to the loss through both $x^*$ and $x^2$, its gradient consists of two parts, one coming from $x^*$ and the other from $x^2$. Let's do the $x^2$ part first: since the square is applied element-wise, there is no summing in the derivative chain:

$$\delta_{x^2} \overline{x}_{ij} = 2\, \overline{x}_{ij}\, \delta x^2_{ij}, \qquad \text{or matrix-wide:} \qquad \delta_{x^2} \overline{x} = 2\, \overline{x} \odot \delta x^2.$$

For $x^*$, $\overline{x}_{ij}$ contributes to the loss only through $x^*_{ij}$, so there is also no summing in the chain:

$$\delta_{x^*} \overline{x}_{ij} = \delta x^*_{ij}\, v_j^{-1/2}.$$

There is no matrix-wide equation this time; however, if we extend the definition of $\odot$ from element-wise to broadcasted multiplication, then:

$$\delta_{x^*} \overline{x} = \delta x^* \odot v^{-1/2}.$$

Taking the sum of $\delta_{x^2} \overline{x}$ and $\delta_{x^*} \overline{x}$, we have:

$$\delta \overline{x} = \delta x^* \odot v^{-1/2} + \frac{2}{n}\, \overline{x} \odot \delta v.$$

Now for $m$: each entry $m_j$ contributes to the loss through the whole $j$-th column of $\overline{x}$, so:

$$\delta m_j = \sum_i \delta \overline{x}_{ij}\, \frac{\partial \overline{x}_{ij}}{\partial m_j} = -\sum_i \delta \overline{x}_{ij}.$$

$x$ contributes to the loss through $m$ and $\overline{x}$, so its gradient is the sum of two parts. The part corresponding to $m$ is analogous to that of $x^2$ and $v$, in the sense that one is the batch-wise mean of the other. Therefore we can quickly derive that part to be $\delta_m x = \frac{1}{n}\, \delta m$.

The other part is also simple: as $\overline{x} = x - m$, there is no direct interaction between $x$ and $m$, hence $\delta_{\overline{x}} x = \delta \overline{x}$. So finally $\delta x = \delta \overline{x} + \frac{1}{n}\, \delta m$.
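
At this point the whole backward chain can already be transcribed into numpy, one graph node at a time. Below is a sketch of that literal translation; the function name `naive_backward`, the argument name `d_xbn` (for $\delta x^{BN}$), and the assumption of a 2-dimensional $(n, f)$ input are mine, and $\epsilon$ is omitted as in the derivation. The next section simplifies it.

```python
import numpy as np

def naive_backward(d_xbn, x, gamma):
    """Literal transcription of the chain above for a 2-D input x."""
    n = x.shape[0]
    # Recompute the forward intermediates of the computational graph.
    m = x.mean(axis=0)                  # batch-wise mean
    x_bar = x - m                       # centered input
    v = (x_bar ** 2).mean(axis=0)       # variance, i.e. mean of x^2
    x_star = x_bar * v ** -0.5          # normalized input

    d_gamma = (d_xbn * x_star).sum(axis=0)   # through x^BN = gamma * x*
    d_xstar = d_xbn * gamma                  # delta x*
    d_v = -0.5 * v ** -1.5 * (d_xstar * x_bar).sum(axis=0)
    d_x2 = d_v / n                           # spread uniformly from v
    d_xbar = d_xstar * v ** -0.5 + 2 * x_bar * d_x2
    d_m = -d_xbar.sum(axis=0)                # through x_bar = x - m
    d_x = d_xbar + d_m / n
    return d_x, d_gamma
```

Up to the $\epsilon$ term, this computes exactly the same gradients as the simplified code at the end of the post, just less efficiently.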

### Putting the pieces together, then simplifying

Remember that the goal is to derive $\delta x$; we'll do it now using the results derived above. Substituting $\delta v$ into $\delta \overline{x}$, then using $x^* = \overline{x} \odot v^{-1/2}$ and $\delta x^* = \delta x^{BN} \odot \gamma$:

$$\delta \overline{x} = \delta x^* \odot v^{-1/2} - \frac{1}{n}\, \overline{x} \odot v^{-3/2} \odot \sum_i \left( \delta x^* \odot \overline{x} \right)_{i,:} = \gamma \odot v^{-1/2} \odot \left( \delta x^{BN} - \frac{1}{n}\, x^* \odot \delta \gamma \right)$$

$$\delta x = \delta \overline{x} + \frac{1}{n}\, \delta m = \delta \overline{x} - \frac{1}{n} \sum_i \delta \overline{x}_{i,:}$$

Interestingly, the last step is just centering $\delta \overline{x}$ around zero, precisely what $\overline{x}$ did to $x$.

We are done here. For efficient computation, in the forward pass we will save the values of $v^{-1/2}$ and $x^*$; this adds nothing to the computational complexity of the forward pass.

### The code

#### Forward pass

```python
def forward(self, x, gamma, is_training):
    if is_training:
        # Normalize with the statistics of the current mini-batch
        # and update the moving averages for later inference.
        mean = x.mean(self._fd)
        var = x.var(self._fd)
        self._mv_mean.apply_update(mean)
        self._mv_var.apply_update(var)
    else:
        # At inference time, use the moving averages instead.
        mean = self._mv_mean.val
        var = self._mv_var.val

    # Save v^{-1/2} and x* for the backward pass.
    self._rstd = 1. / np.sqrt(var + 1e-8)
    self._normed = (x - mean) * self._rstd
    self._gamma = gamma
    return self._normed * gamma
```


#### Backward pass

```python
def backward(self, grad):
    # Number of entries normalized over: everything but the feature axis.
    N = np.prod(grad.shape[:-1])
    # delta gamma: sum of grad * x* over the normalized axes.
    g_gamma = np.multiply(grad, self._normed)
    g_gamma = g_gamma.sum(self._fd)
    # gamma * v^{-1/2} * (grad - x* * delta_gamma / N) ...
    x_ = grad - self._normed * g_gamma * 1. / N
    x_ = self._rstd * self._gamma * x_
    # ... then center around zero, precisely as x_bar did to x.
    return x_ - x_.mean(self._fd), g_gamma
```
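
As a sanity check, the derived backward pass can be verified against numerical differentiation. Here is a self-contained sketch: the standalone `forward`/`backward` functions below re-state the snippets above without the repo's class plumbing (`self._fd`, the moving averages), and the scalar loss `sum(y * dy)` is just a device of mine for checking.

```python
import numpy as np

# Standalone restatement of the pair above, normalizing over axis 0.
def forward(x, gamma, eps=1e-8):
    rstd = 1. / np.sqrt(x.var(axis=0) + eps)
    normed = (x - x.mean(axis=0)) * rstd
    return normed * gamma, (rstd, normed, gamma)

def backward(grad, cache):
    rstd, normed, gamma = cache
    N = grad.shape[0]
    g_gamma = (grad * normed).sum(axis=0)
    x_ = rstd * gamma * (grad - normed * g_gamma / N)
    return x_ - x_.mean(axis=0), g_gamma

x, gamma = np.random.randn(8, 4), np.random.randn(4)
dy = np.random.randn(8, 4)            # pretend upstream gradient
y, cache = forward(x, gamma)
dx, _ = backward(dy, cache)

# Finite-difference gradient of the scalar loss sum(y * dy) w.r.t. x.
num_dx, h = np.zeros_like(x), 1e-5
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += h
    xm[idx] -= h
    num_dx[idx] = ((forward(xp, gamma)[0] - forward(xm, gamma)[0]) * dy).sum() / (2 * h)

print(np.max(np.abs(dx - num_dx)))    # tiny, on the order of 1e-9
```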


The code is taken from a GitHub repo of mine where I am building something similar to an auto-diff computational graph. Visit it if you are interested. I conclude the post here.