Wednesday, 20 June 2018
One question might be raised at this point: why do we need to perform the upsampling with a fractionally
strided convolution? Why can't we just use some library to do it for us? The answer is that we
need to define the upsampling operation as a layer in the network.
And why do we need it as a layer? Because we will train the network on pairs of an image and
its respective segmentation ground truth, and that training is done with backpropagation.
As is well known, each layer in the network has to be able to perform three operations:
forward propagation, backward propagation, and update, which adjusts
the weights of the layer during training. By doing the upsampling with a transposed convolution,
we get all three of these operations defined and are therefore able to train the layer along with the rest of the network.
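To make the three operations concrete, here is a minimal sketch of a 1-D transposed (fractionally strided) convolution layer in NumPy. The class name, parameter names, and the plain gradient-descent update are illustrative assumptions, not the post's actual implementation; real frameworks handle the 2-D, batched, multi-channel case.

```python
import numpy as np

class TransposedConv1D:
    """Sketch of a 1-D transposed convolution layer with the three
    operations described in the text: forward, backward, update.
    Hypothetical minimal implementation for illustration only."""

    def __init__(self, kernel_size=4, stride=2, seed=0):
        rng = np.random.default_rng(seed)
        self.stride = stride
        self.w = rng.standard_normal(kernel_size) * 0.1  # learnable upsampling kernel
        self.grad_w = np.zeros_like(self.w)
        self.x = None  # cached input for the backward pass

    def forward(self, x):
        # Output length: (len(x) - 1) * stride + kernel_size  (an upsampled signal)
        self.x = x
        out = np.zeros((len(x) - 1) * self.stride + len(self.w))
        for i, xi in enumerate(x):
            # Each input value "paints" a scaled copy of the kernel into the output
            out[i * self.stride : i * self.stride + len(self.w)] += xi * self.w
        return out

    def backward(self, grad_out):
        # Gradients w.r.t. the input (passed to the previous layer)
        # and w.r.t. the kernel weights (used by update)
        self.grad_w[:] = 0.0
        grad_x = np.zeros_like(self.x)
        for i in range(len(self.x)):
            window = grad_out[i * self.stride : i * self.stride + len(self.w)]
            grad_x[i] = window @ self.w
            self.grad_w += self.x[i] * window
        return grad_x

    def update(self, lr=0.01):
        # Plain gradient-descent step on the kernel weights
        self.w -= lr * self.grad_w
```

A 3-element input with stride 2 and kernel size 4 is upsampled to length (3 − 1) · 2 + 4 = 8, and because both gradients are defined, the kernel can be learned by backpropagation instead of being fixed (e.g. to bilinear weights).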