Index

The FANN namespace groups the C++ wrapper definitions.
Cascade training differs from ordinary training in that it starts with an empty neural network and then adds neurons one by one while it trains the neural network.
The FANN library is designed to be very easy to use. The two main datatypes used in the fann library are struct fann, which represents an artificial neural network, and struct fann_train_data, which represents training data.
Errors from the fann library are usually reported on stderr.
It is possible to save an entire ann to a file with fann_save for future loading with fann_create_from_file.
There are many different ways of training neural networks, and the FANN library supports a number of different approaches.
The FANN Wrapper for C++ provides two classes: neural_net and training_data.
The activation functions used for the neurons during training.
Constant array consisting of the names for the activation functions, so that the name of an activation function can be looked up from its enum value.
This callback function can be called during training when using fann_train_on_data, fann_train_on_file or fann_cascadetrain_on_data.
Trains on an entire dataset, for a period of time, using the Cascade2 training algorithm.
Does the same as fann_cascadetrain_on_data, but reads the training data directly from a file.
Clears scaling parameters.
Describes a connection between two neurons and its weight.
Periodical cosine activation function.
Constructs a backpropagation neural network from a configuration file which has been saved by fann_save.
Creates a standard backpropagation neural network which is not fully connected and which also has shortcut connections.
Just like fann_create_shortcut, but with an array of layer sizes instead of individual parameters.
Creates a standard backpropagation neural network which is not fully connected.
Just like fann_create_sparse, but with an array of layer sizes instead of individual parameters.
Creates a standard fully connected backpropagation neural network.
Just like fann_create_standard, but with an array of layer sizes instead of individual parameters.
Creates the training data struct from a user-supplied function.
Scales data in the input vector after getting it from the ann, based on previously calculated parameters.
Scales data in the output vector after getting it from the ann, based on previously calculated parameters.
Descales input and output data based on previously calculated parameters.
Destroys the entire network, properly freeing all the associated memory.
Destructs the training data, properly deallocating all of the associated data.
Returns an exact copy of a struct fann_train_data.
Unable to allocate memory.
Unable to open configuration file for reading.
Unable to open configuration file for writing.
Unable to open train data file for reading.
Unable to open train data file for writing.
Error reading info from configuration file.
Error reading connections from configuration file.
Error reading neuron info from configuration file.
Error reading training data from file.
Unable to train with the selected activation function.
Unable to use the selected activation function.
Unable to use the selected training algorithm.
Index is out of bounds.
No error.
Scaling parameters not present.
Irreconcilable differences between two struct fann_train_data structures.
Trying to take a subset which is not within the training set.
Wrong version of configuration file.
Number of connections not equal to the number expected.
Fast (sigmoid-like) activation function defined by David Elliott.
Fast (symmetric sigmoid-like) activation function defined by David Elliott.
Used to define error events on struct fann and struct fann_train_data.
Error function used during training.
Standard linear error function.
Constant array consisting of the names for the training error functions, so that the name of an error function can be looked up from its enum value.
Tanh error function; usually better, but can require a lower learning rate.
Gaussian activation function.
Symmetric gaussian activation function.
Get the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.
Get the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.
Get the number of biases in each layer in the network.
The number of fail bits, i.e. the number of output neurons which differ by more than the bit fail limit (see fann_get_bit_fail_limit, fann_set_bit_fail_limit).
Returns the bit fail limit used during training.
The cascade activation functions array is an array of the different activation functions used by the candidates.
The number of activation functions in the fann_get_cascade_activation_functions array.
The cascade activation steepnesses array is an array of the different activation steepnesses used by the candidates.
The number of activation steepnesses in the fann_get_cascade_activation_steepnesses array.
The cascade candidate change fraction is a number between 0 and 1 determining how large a fraction the fann_get_MSE value should change within fann_get_cascade_candidate_stagnation_epochs during training of the candidate neurons in order for the training not to stagnate.
The candidate limit is a limit on how much the candidate neuron may be trained.
The number of cascade candidate stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of fann_get_cascade_candidate_change_fraction.
The maximum candidate epochs determines the maximum number of epochs the input connections to the candidates may be trained before adding a new candidate neuron.
The maximum out epochs determines the maximum number of epochs the output connections may be trained after adding a new candidate neuron.
The number of candidate groups is the number of groups of identical candidates which will be used during training.
The number of candidates used during training (calculated by multiplying fann_get_cascade_activation_functions_count, fann_get_cascade_activation_steepnesses_count and fann_get_cascade_num_candidate_groups).
The cascade output change fraction is a number between 0 and 1 determining how large a fraction the fann_get_MSE value should change within fann_get_cascade_output_stagnation_epochs during training of the output connections in order for the training not to stagnate.
The number of cascade output stagnation epochs determines the number of epochs training is allowed to continue without changing the MSE by a fraction of fann_get_cascade_output_change_fraction.
The weight multiplier is a parameter which is used to multiply the weights from the candidate neuron before adding the neuron to the neural network.
Get the connections in the network.
Get the connection rate used when the network was created.
Returns the position of the decimal point in the ann.
Returns the last error number.
Returns the last errstr.
Get the number of neurons in each layer in the network.
Get the learning momentum.
Return the learning rate.
Reads the mean square error from the network.
Returns the multiplier that fixed-point data is multiplied with.
Get the type of neural network it was created as.
Get the number of input neurons.
Get the number of layers in the network.
Get the number of output neurons.
The decay is a small negative number which is the factor by which the weights should become smaller in each iteration during quickprop training.
The mu factor is used to increase and decrease the step-size during quickprop training.
The decrease factor is a value smaller than 1 which is used to decrease the step-size during RPROP training.
The maximum step-size is a positive number determining how large the maximum step-size may be.
The minimum step-size is a small positive number determining how small the minimum step-size may be.
The increase factor is a value larger than 1 which is used to increase the step-size during RPROP training.
Get the total number of connections in the entire network.
Get the total number of neurons in the entire network.
Returns the error function used during training.
Returns the stop function used during training.
Return the training algorithm as described by fann_train_enum.
Get a pointer to user-defined data that was previously set with fann_set_user_data.
Initialize the weights using Widrow and Nguyen's algorithm.
Returns the number of training patterns in the struct fann_train_data.
Linear activation function.
Bounded linear activation function.
Bounded linear activation function.
Merges the data from data1 and data2 into a new struct fann_train_data.
Each layer only has connections to the next layer.
Each layer has connections to all following layers.
Definition of network types used by fann_get_network_type.
Constant array consisting of the names for the network types, so that the name of a network type can be looked up from its enum value.
Returns the number of inputs in each of the training patterns in the struct fann_train_data.
Returns the number of outputs in each of the training patterns in the struct fann_train_data.
Will print the connections of the ann in a compact matrix, for easy viewing of the internals of the ann.
Prints the last error to stderr.
Prints all of the parameters and options of the ANN.
Give each connection a random weight between min_weight and max_weight.
Reads a file that stores training data.
Resets the last error number.
Resets the last error string.
Resets the mean square error from the network.
Will run input through the neural network, returning an array of outputs equal in number to the neurons in the output layer.
Save the entire network to a configuration file.
Saves the entire network to a configuration file.
Save the training structure to a file, with the format as specified in fann_read_train_from_file.
Saves the training structure to a fixed-point data file.
Scales data in the input vector before feeding it to the ann, based on previously calculated parameters.
Scales the inputs in the training data to the specified range.
Scales data in the output vector before feeding it to the ann, based on previously calculated parameters.
Scales the outputs in the training data to the specified range.
Scales input and output data based on previously calculated parameters.
Scales the inputs and outputs in the training data to the specified range.
Set the activation function for neuron number neuron in layer number layer, counting the input layer as layer 0.
Set the activation function for all of the hidden layers.
Set the activation function for all the neurons in layer number layer, counting the input layer as layer 0.
Set the activation function for the output layer.
Set the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0.
Set the activation steepness in all of the hidden layers.
Set the activation steepness for all of the neurons in layer number layer, counting the input layer as layer 0.
Set the activation steepness in the output layer.
Set the bit fail limit used during training.
Sets the callback function for use during training.
Sets the array of cascade candidate activation functions.
Sets the array of cascade candidate activation steepnesses.
Sets the cascade candidate change fraction.
Sets the candidate limit.
Sets the number of cascade candidate stagnation epochs.
Sets the max candidate epochs.
Sets the maximum out epochs.
Sets the number of candidate groups.
Sets the cascade output change fraction.
Sets the number of cascade output stagnation epochs.
Sets the weight multiplier.
Change where errors are logged to.
Calculate input scaling parameters for future use based on training data.
Set the learning momentum.
Set the learning rate.
Calculate output scaling parameters for future use based on training data.
Sets the quickprop decay factor.
Sets the quickprop mu factor.
The decrease factor is a value smaller than 1 which is used to decrease the step-size during RPROP training.
The maximum step-size is a positive number determining how large the maximum step-size may be.
The minimum step-size is a small positive number determining how small the minimum step-size may be.
The increase factor used during RPROP training.
Calculate input and output scaling parameters for future use based on training data.
Set the error function used during training.
Set the stop function used during training.
Set the training algorithm.
Store a pointer to user-defined data.
Set a connection in the network.
Set connections in the network.
Shuffles training data, randomizing the order.
Sigmoid activation function.
Stepwise linear approximation to sigmoid.
Symmetric sigmoid activation function, aka. tanh.
Periodical sine activation function.
Stop criterion is the number of bits that fail.
Stop criteria used during training.
Stop criterion is the Mean Square Error (MSE) value.
Constant array consisting of the names for the training stop functions, so that the name of a stop function can be looked up from its enum value.
Returns a copy of a subset of the struct fann_train_data, starting at position pos and running length elements forward.
Test with a set of inputs and a set of desired outputs.
Tests a set of training data and calculates the MSE for the training data.
Threshold activation function.
Threshold activation function.
Train one iteration with a set of inputs and a set of desired outputs.
Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set.
The training algorithms used when training on struct fann_train_data with functions like fann_train_on_data or fann_train_on_file.
Train one epoch with a set of training data.
Standard backpropagation algorithm, where the weights are updated after each training pattern.
Constant array consisting of the names for the training algorithms, so that the name of a training algorithm can be looked up from its enum value.
Trains on an entire dataset for a period of time.
Does the same as fann_train_on_data, but reads the training data directly from a file.
A more advanced batch training algorithm which achieves good results for many problems.
A more advanced batch training algorithm which achieves good results for many problems.
fann_type is the type used for the weights, inputs and outputs of the neural network.