Function Index

Get the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.
Get the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.
Get the number of bias neurons in each layer of the network.
The number of fail bits, i.e. the number of output neurons that differ from the desired output by more than the bit fail limit (see get_bit_fail_limit, set_bit_fail_limit).
Returns the bit fail limit used during training.
The cascade activation functions array is an array of the different activation functions used by the candidates.
The number of activation functions in the get_cascade_activation_functions array.
The cascade activation steepnesses array is an array of the different activation steepnesses used by the candidates.
The number of activation steepnesses in the get_cascade_activation_steepnesses array.
The cascade candidate change fraction is a number between 0 and 1 determining how large a fraction the get_MSE value should change within get_cascade_candidate_stagnation_epochs during training of the candidate neurons, in order for the training not to stagnate.
The candidate limit is a limit on how much the candidate neuron may be trained.
The number of cascade candidate stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of get_cascade_candidate_change_fraction.
The maximum candidate epochs determines the maximum number of epochs the input connections to the candidates may be trained before adding a new candidate neuron.
The maximum out epochs determines the maximum number of epochs the output connections may be trained after adding a new candidate neuron.
The number of candidate groups is the number of groups of identical candidates which will be used during training.
The number of candidates used during training (calculated by multiplying get_cascade_activation_functions_count, get_cascade_activation_steepnesses_count and get_cascade_num_candidate_groups).
The cascade output change fraction is a number between 0 and 1 determining how large a fraction the get_MSE value should change within get_cascade_output_stagnation_epochs during training of the output connections, in order for the training not to stagnate.
The number of cascade output stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of get_cascade_output_change_fraction.
The weight multiplier is a parameter which is used to multiply the weights from the candidate neuron before adding the neuron to the neural network.
Get the connections in the network.
Get the connection rate used when the network was created.
Returns the position of the decimal point in the ann (fixed-point version only).
Returns the last error number.
Returns the last error string.
A pointer to the array of input training data.
Get the number of neurons in each layer in the network.
Get the learning momentum.
Return the learning rate.
Reads the mean square error from the network.
Returns the multiplier that fixed-point data is multiplied with (fixed-point version only).
Get the type of neural network it was created as.
Get the number of input neurons.
Get the number of layers in the network.
Get the number of output neurons.
A pointer to the array of output training data.
The decay is a small negative-valued number which is the factor by which the weights should become smaller in each iteration of quickprop training.
The mu factor is used to increase and decrease the step-size during quickprop training.
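The getters above are methods of the FANN::neural_net class in the FANN C++ wrapper (fann_cpp.h). The sketch below is a minimal illustration of how a few of the cascade and per-neuron getters might be queried; it assumes the float build of the library (floatfann.h) and a small shortcut network created on the spot, so the printed values are simply the library defaults.

    #include "floatfann.h"
    #include "fann_cpp.h"
    #include <iostream>

    int main()
    {
        // Illustrative 2-layer shortcut network (2 inputs, 1 output), the usual
        // starting point for cascade training.
        FANN::neural_net net;
        net.create_shortcut(2, 2, 1);

        // Cascade-training parameters described in the entries above.
        std::cout << "candidates per round:     " << net.get_cascade_num_candidates() << "\n";
        std::cout << "candidate groups:         " << net.get_cascade_num_candidate_groups() << "\n";
        std::cout << "output change fraction:   " << net.get_cascade_output_change_fraction() << "\n";
        std::cout << "output stagnation epochs: " << net.get_cascade_output_stagnation_epochs() << "\n";
        std::cout << "weight multiplier:        " << net.get_cascade_weight_multiplier() << "\n";

        // Per-neuron getters: layer 1, neuron 0 (the single output neuron),
        // counting the input layer as layer 0; the activation function enum
        // prints as its numeric value.
        std::cout << "output activation function:  " << net.get_activation_function(1, 0) << "\n";
        std::cout << "output activation steepness: " << net.get_activation_steepness(1, 0) << "\n";

        return 0;
    }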
The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training.
The maximum step-size is a positive number determining how large the maximum step-size may be.
The minimum step-size is a small positive number determining how small the minimum step-size may be.
The increase factor is a value larger than 1, which is used to increase the step-size during RPROP training.
Get the total number of connections in the entire network.
Get the total number of neurons in the entire network.
Returns the error function used during training.
Returns the stop function used during training.
Return the training algorithm as described by FANN::training_algorithm_enum.
Initialize the weights using Widrow and Nguyen's algorithm.
Returns the number of training patterns in the training_data.
Merges the data into the data contained in the training_data.
Default constructor creates an empty neural net.
Provides automatic cleanup of data.
Returns the number of inputs in each of the training patterns in the training_data.
Returns the number of outputs in each of the training patterns in the struct fann_train_data.
Will print the connections of the ann in a compact matrix, for easy viewing of the internals of the ann.
Prints the last error to stderr.
Prints all of the parameters and options of the neural network.
Give each connection a random weight between min_weight and max_weight.
Reads a file that stores training data.
Resets the last error number.
Resets the last error string.
Resets the mean square error from the network.
Will run input through the neural network, returning an array of outputs whose length equals the number of neurons in the output layer.
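The training-data and run entries above belong to the FANN::training_data and FANN::neural_net classes. The following sketch shows how they fit together: load data, initialize weights, train, and run one pattern. It is a minimal illustration assuming the float build (floatfann.h, fann_cpp.h); the file name "xor.data", the network topology and the training parameters are all hypothetical.

    #include "floatfann.h"
    #include "fann_cpp.h"
    #include <iostream>

    int main()
    {
        // Illustrative 3-layer network: 2 inputs, 3 hidden neurons, 1 output.
        FANN::neural_net net;
        net.create_standard(3, 2, 3, 1);

        // Reads a file that stores training data ("xor.data" is an assumed path).
        FANN::training_data data;
        if (!data.read_train_from_file("xor.data"))
        {
            std::cerr << "could not read xor.data\n";
            return 1;
        }
        std::cout << "training patterns: " << data.length_train_data()
                  << ", inputs: "  << data.num_input_train_data()
                  << ", outputs: " << data.num_output_train_data() << "\n";

        // Initialize the weights using Widrow and Nguyen's algorithm
        // (randomize_weights(-0.1f, 0.1f) would be the plain random alternative).
        net.init_weights(data);

        // Train on the loaded data: illustrative max epochs, report interval
        // and desired MSE.
        net.train_on_data(data, 1000, 100, 0.001f);
        std::cout << "MSE: " << net.get_MSE()
                  << ", fail bits: " << net.get_bit_fail() << "\n";

        // Run one input pattern; the returned array has get_num_output() entries.
        fann_type input[2] = { -1.0f, 1.0f };
        fann_type *output = net.run(input);
        std::cout << "net(-1, 1) = " << output[0] << "\n";

        net.print_parameters();   // dump all parameters and options of the net
        return 0;
    }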