
Why do we start from random weights in TensorflowJs?

TensorflowJs comes with some built-in functions that allow you to generate random values. For example, tf.randomUniform().

import * as tf from "@tensorflow/tfjs"

// Creating a tensor with 5 values sampled
// from a uniform distribution
const randTensor = tf.randomUniform([5]);

// Printing the tensor
randTensor.print();

// will output something like this
// [0.0003758, 0.1491586, 0.2266536, 0.0614096, 0.1920560]

Actually, TensorflowJs has a full utility package for random values.
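Besides randomUniform, this package includes, among others, randomNormal and truncatedNormal. A minimal sketch (the shapes and distribution parameters below are just example values):

// Uniform values in [0, 1)
tf.randomUniform([2, 3]).print();

// Values from a normal distribution with mean 0 and stdDev 1
tf.randomNormal([2, 3], 0, 1).print();

// Like randomNormal, but values further than 2 standard
// deviations from the mean are re-drawn
tf.truncatedNormal([2, 3], 0, 1).print();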

But why? Why would we need to generate these random values?

Well, because these random values will later become the weights between the neurons of a network.

But still ... why? Why do we need to start with random weights?

For example, why can't we just start with the weights set to zero?

Yeah ... setting all the weights to zero is not an option.

The total input of a neuron is calculated as the weighted sum of all the individual inputs:

// assuming parallel arrays of weights and inputs
const totalInput = inputs.reduce((sum, x, i) => sum + weights[i] * x, 0);

Given that the weights are zero, the total input will be zero every single time. Our network would just stay in a constant state of zero inputs for all the neurons, and it would not learn.
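You can check this with plain JavaScript (the input values below are made-up examples):

const weights = [0, 0, 0];
const inputs = [0.7, -1.2, 3.4]; // any inputs at all

// With all-zero weights, every product is zero
const totalInput = inputs.reduce((sum, x, i) => sum + weights[i] * x, 0);
console.log(totalInput); // 0, no matter what the inputs are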

Something similar happens for any constant initialization value: every neuron in the network computes the same output, which results in the same weight/parameter update for each of them. This defeats the purpose of having multiple neurons.
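Here is a quick sketch of that symmetry problem, assuming two neurons that share the same inputs and the same constant initial weights:

const inputs = [0.5, -0.3];

// Both neurons start with the same constant weights
const neuronA = [0.1, 0.1];
const neuronB = [0.1, 0.1];

const output = (weights) =>
  inputs.reduce((sum, x, i) => sum + weights[i] * x, 0);

// Identical outputs => identical gradients => identical updates,
// so the two neurons stay clones of each other forever
console.log(output(neuronA) === output(neuronB)); // true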

The reason we want random initialization values for the TensorflowJs weights is to break this symmetry, so that different neurons can change in different ways.

Chance drives learning. If a neural network keeps its state constant across all the epochs, it does not change, and therefore it does not learn.

The process of training a neural network so that it makes "reasonable" predictions involves tweaking the weights of the neurons multiple times so that the rate of success becomes better and better.
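In rough terms, each training step nudges every weight a little in the direction that reduces the error. The names and values below are illustrative, not a specific TensorflowJs API:

// One gradient-descent update for a single weight
const learningRate = 0.01;
const gradient = 0.3; // d(error)/d(weight), computed by backpropagation

let weight = 0.42;
weight = weight - learningRate * gradient; // tweaked slightly toward lower error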

What are good initialization values for weights in neural networks?

The weights need to have initial values. If they cannot be a constant value, how should you choose them?

The best approach is to initialize the weights with small uniform random numbers, somewhere between 0 and 1. Hence the need for random functions in TensorflowJs such as randomUniform().
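randomUniform takes optional minval and maxval arguments, so you can request that range explicitly (the shape below is just an example):

// 3x2 weight matrix sampled uniformly from [0, 1)
const initialWeights = tf.randomUniform([3, 2], 0, 1);
initialWeights.print();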

The reason the values need to be uniform (close to each other, but not equal) is to make sure each input has the chance to be meaningful. Given the weighted sum of the inputs, if the weight of one input is far too big or too small, that input will either not matter at all or will drown out all the other inputs.

Also, when the weights are initialized with large values, the weighted sum grows large. An activation such as the sigmoid function then maps that value very close to 1, a region where its slope is almost flat. The gradient becomes tiny, gradient descent barely changes the weights, and the performance of the learning process is greatly affected.
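A plain JavaScript sketch of this saturation effect (the input values are arbitrary examples):

const sigmoid = (z) => 1 / (1 + Math.exp(-z));

// Derivative of the sigmoid: sigmoid(z) * (1 - sigmoid(z))
const sigmoidGrad = (z) => sigmoid(z) * (1 - sigmoid(z));

console.log(sigmoid(0.5), sigmoidGrad(0.5)); // ~0.62, gradient ~0.235
console.log(sigmoid(10), sigmoidGrad(10));   // ~0.99995, gradient ~0.000045
// With large weighted sums the gradient is almost zero,
// so gradient descent barely moves the weights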

πŸ“– 50 Javascript, React and NextJs Projects

Learn by doing with this FREE ebook! Not sure what to build? Dive in with 50 projects with project briefs and wireframes! Choose from 8 project categories and get started right away.

πŸ“– 50 Javascript, React and NextJs Projects

Learn by doing with this FREE ebook! Not sure what to build? Dive in with 50 projects with project briefs and wireframes! Choose from 8 project categories and get started right away.


Leave a Reply

Your email address will not be published. Required fields are marked *