A pre-trained neural network model is a machine-learning model whose lower layers are first trained with unsupervised learning, reducing the risk that the system becomes stuck in a poor local optimum during its later training period. These models are used to train machines to respond to a wide variety of data.
By allowing the lower layers of a neural network to settle into a stable state before the full training process is started, the programmer reduces the risk that the machine stalls at a certain level of correctness, treating it as the best it can achieve. Because the pre-training leaves each of the machine's neurons in a different local state, the chance of the overall system settling on a non-optimal solution is reduced, since any solution is approached from a number of directions.
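The idea above is often realized as greedy layer-wise pre-training: each layer is trained on its own, as a small autoencoder, before the stack is assembled. The sketch below is a minimal illustration under assumed details not in the text (tiny linear autoencoders, tanh activations, plain gradient descent, and made-up sizes and data), not a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_layer(data, hidden_size, lr=0.01, epochs=200):
    """Train one layer as an autoencoder (encode -> decode -> reconstruct)
    and return its encoder weights plus the encoded data."""
    n_features = data.shape[1]
    W_enc = rng.normal(0, 0.1, (n_features, hidden_size))
    W_dec = rng.normal(0, 0.1, (hidden_size, n_features))
    for _ in range(epochs):
        hidden = np.tanh(data @ W_enc)     # encode
        recon = hidden @ W_dec             # decode
        err = recon - data                 # reconstruction error
        # Gradient descent through the decoder and the tanh encoder.
        grad_dec = hidden.T @ err / len(data)
        grad_hidden = (err @ W_dec.T) * (1 - hidden ** 2)
        grad_enc = data.T @ grad_hidden / len(data)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, np.tanh(data @ W_enc)

# Pre-train two stacked layers on unlabeled data, one layer at a time;
# each layer learns from the previous layer's codes, not the raw input.
X = rng.normal(size=(100, 8))
W1, H1 = pretrain_layer(X, hidden_size=4)
W2, H2 = pretrain_layer(H1, hidden_size=2)

# W1 and W2 would then initialize the lower layers of the full network
# before supervised training begins, instead of random weights.
print(H2.shape)
```

Each call to `pretrain_layer` only ever sees unlabeled data, which is why this step can run before any supervised objective is defined.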
Artificial neurons fire only when a certain level of activation is reached. When the preset conditions for a given neuron are met, it adds its signal to the network's overall feed. By giving neurons differing presets, multiple conditions are specified, and the full signal is triggered only when that broader range is satisfied, rather than when only a subset is met. Because the system depends on inputs to trigger its outputs, it is best used where those inputs have a correlational or causal relationship with the desired output.
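The firing behavior described above can be sketched with simple threshold units. Everything concrete here is an illustrative assumption (the step activation, the specific weights and thresholds, and the rule that the full signal requires every neuron to fire); real networks typically use smoother activations.

```python
def neuron_fires(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches this
    neuron's preset activation threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return int(activation >= threshold)

# Three neurons share the same inputs but have differing presets,
# so each one responds to a different condition in the data.
inputs = [0.9, 0.4, 0.7]
neurons = [
    ([1.0, 0.0, 0.0], 0.5),  # fires on a strong first input
    ([0.0, 1.0, 1.0], 1.0),  # fires when the last two inputs add up
    ([1.0, 1.0, 1.0], 2.5),  # stricter preset: stays silent here
]
signals = [neuron_fires(inputs, w, t) for w, t in neurons]
print(signals)   # [1, 1, 0] — only a subset of presets is met

# The full signal is triggered only when the broader range of
# conditions is satisfied, i.e. every neuron contributes.
full_signal = int(all(signals))
print(full_signal)   # 0
```

Varying the thresholds per neuron is what makes the combined output selective: a single strong input is not enough to trigger the full signal on its own.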