MATRIX EQUATIONS IN DEEP LEARNING: RESOLUTION FOR M DATA WITH N PARAMETERS

Authors

  • TSHIBENGABU TSHIMANGA Yannick, University of Kinshasa

DOI:

https://doi.org/10.46565/jreas.202383580-583

Keywords:

Machine learning, cost function, gradient descent, perceptron, vectorization

Abstract

This article on the vectorization of neural-network learning equations aims to give the matrix equations [1-3] for: first, the model Z [8, 9] of the perceptron [6], which combines the inputs X, the weights W, and the bias; second, the quantification function [10, 11], also called the loss function [6, 7, 8]; and finally, the gradient-descent algorithm for maximizing the likelihood and minimizing the errors of Z [4, 5], which can be applied to the classification of emotions by facial recognition.
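As a rough illustration of the three ingredients the abstract names (the article's own equations are not reproduced here), the vectorized perceptron model Z, a log-loss, and the gradient-descent update can be sketched in NumPy. The sigmoid activation, the variable names, and the hyperparameters below are all assumptions for the sketch, not the article's notation:

```python
import numpy as np

def sigmoid(z):
    # assumed activation for the sketch
    return 1.0 / (1.0 + np.exp(-z))

def train_perceptron(X, y, lr=0.5, epochs=5000):
    # X: (m, n) matrix of m data points with n parameters; y: (m,) labels in {0, 1}
    m, n = X.shape
    W = np.zeros(n)   # weights
    b = 0.0           # bias
    for _ in range(epochs):
        Z = X @ W + b              # vectorized model over all m examples at once
        A = sigmoid(Z)             # predicted probabilities
        # log-loss (negative log-likelihood), averaged over the m examples
        loss = -np.mean(y * np.log(A + 1e-12) + (1 - y) * np.log(1 - A + 1e-12))
        # gradients of the loss with respect to W and b
        dW = X.T @ (A - y) / m
        db = np.mean(A - y)
        # gradient-descent update: step against the gradient
        W -= lr * dW
        b -= lr * db
    return W, b, loss

# toy usage: learn the (linearly separable) AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])
W, b, loss = train_perceptron(X, y)
```

Because Z, A, and the gradients are computed for all m examples in a single matrix operation, no explicit loop over the data is needed, which is the point of vectorizing the learning equations.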

Published

2023-08-22

Section

Articles