Function Index

- Trains on an entire dataset, for a period of time, using the Cascade2 training algorithm.
- Does the same as fann_cascadetrain_on_data, but reads the training data directly from a file.
- Clears scaling parameters.
- Constructs a backpropagation neural network from a configuration file that has been saved by fann_save.
- Creates a standard backpropagation neural network that is not fully connected and that also has shortcut connections.
- Just like fann_create_shortcut, but with an array of layer sizes instead of individual parameters.
- Creates a standard backpropagation neural network that is not fully connected.
- Just like fann_create_sparse, but with an array of layer sizes instead of individual parameters.
- Creates a standard fully connected backpropagation neural network.
- Just like fann_create_standard, but with an array of layer sizes instead of individual parameters.
- Creates the training data struct from a user-supplied function.
- Scales data in the input vector after getting it from the ann, based on previously calculated parameters.
- Scales data in the output vector after getting it from the ann, based on previously calculated parameters.
- Descales input and output data based on previously calculated parameters.
- Destroys the entire network, properly freeing all of the associated memory.
- Destroys the training data, properly deallocating all of the associated data.
- Returns an exact copy of a struct fann_train_data.
- Gets the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.
- Gets the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.
- Gets the number of bias neurons in each layer of the network.
- The number of fail bits, i.e. the number of output neurons that differ by more than the bit fail limit (see fann_get_bit_fail_limit, fann_set_bit_fail_limit).
- Returns the bit fail limit used during training.
- The cascade activation functions array is an array of the different activation functions used by the candidates.
- The number of activation functions in the fann_get_cascade_activation_functions array.
- The cascade activation steepnesses array is an array of the different activation steepnesses used by the candidates.
- The number of activation steepnesses in the fann_get_cascade_activation_steepnesses array.
- The cascade candidate change fraction is a number between 0 and 1 that determines how large a fraction the fann_get_MSE value should change within fann_get_cascade_candidate_stagnation_epochs during training of the candidate neurons, in order for the training not to stagnate.
- The candidate limit is a limit for how much the candidate neuron may be trained.
- The number of cascade candidate stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of fann_get_cascade_candidate_change_fraction.
- The maximum candidate epochs determines the maximum number of epochs the input connections to the candidates may be trained before adding a new candidate neuron.
- The maximum out epochs determines the maximum number of epochs the output connections may be trained after adding a new candidate neuron.
- The number of candidate groups is the number of groups of identical candidates that will be used during training.
- The number of candidates used during training (calculated by multiplying fann_get_cascade_activation_functions_count, fann_get_cascade_activation_steepnesses_count and fann_get_cascade_num_candidate_groups).
- The cascade output change fraction is a number between 0 and 1 that determines how large a fraction the fann_get_MSE value should change within fann_get_cascade_output_stagnation_epochs during training of the output connections, in order for the training not to stagnate.
- The number of cascade output stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of fann_get_cascade_output_change_fraction.
- The weight multiplier is a parameter used to multiply the weights from the candidate neuron before adding the neuron to the neural network.
- Gets the connections in the network.
- Gets the connection rate used when the network was created.
- Returns the position of the decimal point in the ann.
- Returns the last error number.
- Returns the last error string.
- Gets the number of neurons in each layer in the network.
- Gets the learning momentum.
- Returns the learning rate.
- Reads the mean square error from the network.
- Returns the multiplier that fixed-point data is multiplied with.
- Gets the type of neural network it was created as.
- Gets the number of input neurons.
- Gets the number of layers in the network.
- Gets the number of output neurons.
- The decay is a small negative number: the factor by which the weights should become smaller in each iteration during quickprop training.
- The mu factor is used to increase and decrease the step-size during quickprop training.
- The decrease factor is a value smaller than 1 that is used to decrease the step-size during RPROP training.
- The maximum step-size is a positive number determining how large the maximum step-size may be.
- The minimum step-size is a small positive number determining how small the minimum step-size may be.
- The increase factor is a value larger than 1 that is used to increase the step-size during RPROP training.
- Gets the total number of connections in the entire network.
- Gets the total number of neurons in the entire network.
- Returns the error function used during training.
- Returns the stop function used during training.
- Returns the training algorithm as described by fann_train_enum.
- Gets a pointer to user-defined data that was previously set with fann_set_user_data.
- Initializes the weights using Widrow and Nguyen's algorithm.
- Returns the number of training patterns in the struct fann_train_data.
- Merges the data from data1 and data2 into a new struct fann_train_data.
- Returns the number of inputs in each of the training patterns in the struct fann_train_data.
- Returns the number of outputs in each of the training patterns in the struct fann_train_data.
- Prints the connections of the ann in a compact matrix, for easy viewing of the internals of the ann.
- Prints the last error to stderr.
- Prints all of the parameters and options of the ANN.
- Gives each connection a random weight between min_weight and max_weight.
- Reads a file that stores training data.
- Resets the last error number.
- Resets the last error string.
- Resets the mean square error from the network.
- Runs input through the neural network, returning an array of outputs whose length equals the number of neurons in the output layer.
- Saves the entire network to a configuration file.
- Saves the entire network to a configuration file.
- Saves the training structure to a file, with the format specified in fann_read_train_from_file.
- Saves the training structure to a fixed-point data file.
- Scales data in the input vector before feeding it to the ann, based on previously calculated parameters.
- Scales the inputs in the training data to the specified range.
- Scales data in the output vector before feeding it to the ann, based on previously calculated parameters.
- Scales the outputs in the training data to the specified range.
- Scales input and output data based on previously calculated parameters.
- Scales the inputs and outputs in the training data to the specified range.
- Sets the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.
- Sets the activation function for all of the hidden layers.
- Sets the activation function for all the neurons in layer number layer, counting the input layer as layer 0.
- Sets the activation function for the output layer.
- Sets the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.
- Sets the activation steepness in all of the hidden layers.
- Sets the activation steepness for all of the neurons in layer number layer, counting the input layer as layer 0.
- Sets the activation steepness in the output layer.
- Sets the bit fail limit used during training.
- Sets the callback function for use during training.
- Sets the array of cascade candidate activation functions.
- Sets the array of cascade candidate activation steepnesses.
- Sets the cascade candidate change fraction.
- Sets the candidate limit.
- Sets the number of cascade candidate stagnation epochs.
- Sets the maximum candidate epochs.
- Sets the maximum out epochs.
- Sets the number of candidate groups.
- Sets the cascade output change fraction.
- Sets the number of cascade output stagnation epochs.
- Sets the weight multiplier.
- Changes where errors are logged to.
- Calculates input scaling parameters for future use, based on training data.
- Sets the learning momentum.
- Sets the learning rate.
- Calculates output scaling parameters for future use, based on training data.
- Sets the quickprop decay factor.
- Sets the quickprop mu factor.
- The decrease factor is a value smaller than 1 that is used to decrease the step-size during RPROP training.
- The maximum step-size is a positive number determining how large the maximum step-size may be.
- The minimum step-size is a small positive number determining how small the minimum step-size may be.
- The increase factor used during RPROP training.
- Calculates input and output scaling parameters for future use, based on training data.
- Sets the error function used during training.
- Sets the stop function used during training.
- Sets the training algorithm.
- Stores a pointer to user-defined data.
- Sets a connection in the network.
- Sets connections in the network.
- Shuffles training data, randomizing the order.
- Returns a copy of a subset of the struct fann_train_data, starting at position pos and containing length elements.
- Tests with a set of inputs and a set of desired outputs.
- Tests a set of training data and calculates the MSE for the training data.
- Trains one iteration with a set of inputs and a set of desired outputs.
- Trains one epoch with a set of training data.
- Trains on an entire dataset, for a period of time.
- Does the same as fann_train_on_data, but reads the training data directly from a file.