FANN Wrapper for C++

Overview
The FANN Wrapper for C++ provides two classes: neural_net and training_data. To use the wrapper, include doublefann.h, floatfann.h or fixedfann.h before the fann_cpp.h header file. To get started, see xor_sample.cpp in the examples directory. The license is LGPL. Copyright © 2004-2006 created by freegoldbar@yahoo.com.

Summary
The FANN namespace groups the C++ wrapper definitions: enums for the error function, stop criteria, training algorithms and activation functions used during training, the connection struct describing a connection between two neurons and its weight, the callback_type typedef, and the two classes training_data (encapsulating struct fann_train_data) and neural_net (encapsulating struct fann). Each is described in its own section below.
Notes and differences from the C API
- The FANN Wrapper for C++ is a minimal wrapper without templates or exception handling, for efficient use in any environment. Benefits include stricter type checking, simpler memory management and possibly code completion in a program editor.
- Method names are the same as the function names in the C API, except that the fann_ prefix has been removed. Enums in the namespace are similarly defined without the FANN_ prefix.
- The arguments to the methods are the same as in the C API, except that the struct fann *ann/struct fann_train_data *data arguments are encapsulated, so they are either not present in the method signatures or are translated into class references.
- The various create methods return a boolean: true if the neural network was created, false otherwise. The same goes for the read_train_from_file method.
- The neural network and training data are automatically cleaned up in the destructors and in the create/read methods.
- To make the destructors virtual, define USE_VIRTUAL_DESTRUCTOR before including the header file.
- Additional methods are available on the training_data class to give access to the underlying training data: get_input, get_output and set_train_data. Finally, fann_duplicate_train_data has been replaced by a copy constructor.
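A short sketch of these additions (the file name xor.train is illustrative, and the snippet assumes FANN >= 2.1 with doublefann.h on the include path; it is not compiled against a particular version):

```cpp
#include "doublefann.h"
#include "fann_cpp.h"

int main()
{
    FANN::training_data data;
    if (!data.read_train_from_file("xor.train")) // returns false on failure
        return 1;

    // get_input()/get_output() expose the underlying fann_type arrays.
    fann_type **inputs = data.get_input();
    fann_type **outputs = data.get_output();
    (void)inputs;
    (void)outputs;

    // The copy constructor replaces fann_duplicate_train_data; the copy
    // can be modified without touching the original.
    FANN::training_data copy(data);
    copy.shuffle_train_data();
    return 0;
}
```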
Changes
Version 2.1.0
- General update to the fann C library 2.1.0 with support for new functionality
- Due to changes in the C API, the C++ API is not fully backward compatible: the create methods have changed names and parameters, the training callback function has different parameters and is set with set_callback, some training_data methods have updated names, and the activation function and steepness are now read per neuron instead of per layer.
- Extensions are now part of fann, so there is no fann_extensions.h
Version 1.2.0
- Changed char pointers to const std::string references
- Added const_casts where the C API required it
- Initialized enums from the C enums instead of numeric constants
- Added a method set_train_data that copies and allocates training data in a way that is compatible with the way the C API deallocates the data, thus making it possible to change training data
- Fixed a bug where the get_rprop_increase_factor method did not return its value
Version 1.0.0
FANN
The FANN namespace groups the C++ wrapper definitions.
error_function_enum
Error function used during training.
ERRORFUNC_LINEAR | Standard linear error function. |
ERRORFUNC_TANH | Tanh error function, usually better but can require a lower learning rate. This error function aggressively targets outputs that differ much from the desired output, while not targeting outputs that only differ a little. This error function is not recommended for cascade training and incremental training. |
See also neural_net::set_train_error_function, neural_net::get_train_error_function
training_algorithm_enum
The training algorithms used when training on training_data with functions like neural_net::train_on_data or neural_net::train_on_file. Incremental training alters the weights after each input pattern has been presented, while batch training only alters the weights once after all the patterns have been presented.
TRAIN_INCREMENTAL | Standard backpropagation algorithm, where the weights are updated after each training pattern. This means that the weights are updated many times during a single epoch. For this reason some problems will train very fast with this algorithm, while other more advanced problems will not train very well. |
TRAIN_BATCH | Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set. This means that the weights are only updated once during an epoch. For this reason some problems will train slower with this algorithm. But since the mean square error is calculated more correctly than in incremental training, some problems will reach better solutions with this algorithm. |
TRAIN_RPROP | A more advanced batch training algorithm which achieves good results for many problems. The RPROP training algorithm is adaptive and therefore does not use the learning_rate. Some other parameters can however be set to change the way the RPROP algorithm works, but it is only recommended for users with insight into how the RPROP training algorithm works. The RPROP training algorithm is described by [Riedmiller and Braun, 1993], but the actual learning algorithm used here is the iRPROP- training algorithm described by [Igel and Husken, 2000], which is a variant of the standard RPROP training algorithm. |
TRAIN_QUICKPROP | A more advanced batch training algorithm which achieves good results for many problems. The quickprop training algorithm uses the learning_rate parameter along with other more advanced parameters, but it is only recommended to change these advanced parameters for users with insight into how the quickprop training algorithm works. The quickprop training algorithm is described by [Fahlman, 1988]. |
See also neural_net::set_training_algorithm, neural_net::get_training_algorithm
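As a minimal sketch of selecting a training algorithm (assumes FANN >= 2.0; not compiled against a particular version):

```cpp
#include "doublefann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1); // 2 inputs, 3 hidden, 1 output

    // RPROP is FANN's default; switch to quickprop as an illustration.
    net.set_training_algorithm(FANN::TRAIN_QUICKPROP);

    if (net.get_training_algorithm() == FANN::TRAIN_QUICKPROP)
    {
        // ... train with net.train_on_data(...) or net.train_on_file(...)
    }
    return 0;
}
```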
activation_function_enum
The activation functions used for the neurons during training. The activation functions can either be defined for a group of neurons by neural_net::set_activation_function_hidden and neural_net::set_activation_function_output, or for a single neuron by neural_net::set_activation_function. The steepness of an activation function is defined in the same way by neural_net::set_activation_steepness_hidden, neural_net::set_activation_steepness_output and neural_net::set_activation_steepness. The descriptions below use the following notation:
- x is the input to the activation function,
- y is the output,
- s is the steepness and
- d is the derivative.
FANN_LINEAR | Linear activation function. |
- span: -inf < y < inf
- y = x*s, d = 1*s
- Can NOT be used in fixed point.
FANN_THRESHOLD | Threshold activation function. |
- x < 0 -> y = 0, x >= 0 -> y = 1
- Can NOT be used during training.
FANN_THRESHOLD_SYMMETRIC | Symmetric threshold activation function. |
- x < 0 -> y = -1, x >= 0 -> y = 1
- Can NOT be used during training.
FANN_SIGMOID | Sigmoid activation function. |
- One of the most used activation functions.
- span: 0 < y < 1
- y = 1/(1 + exp(-2*s*x))
- d = 2*s*y*(1 - y)
FANN_SIGMOID_STEPWISE | Stepwise linear approximation to sigmoid. |
- Faster than sigmoid but a bit less precise.
FANN_SIGMOID_SYMMETRIC | Symmetric sigmoid activation function, aka. tanh. |
- One of the most used activation functions.
- span: -1 < y < 1
- y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1
- d = s*(1-(y*y))
FANN_SIGMOID_SYMMETRIC_STEPWISE | Stepwise linear approximation to symmetric sigmoid. |
- Faster than symmetric sigmoid but a bit less precise.
FANN_GAUSSIAN | Gaussian activation function. |
- 0 when x = -inf, 1 when x = 0 and 0 when x = inf
- span: 0 < y < 1
- y = exp(-x*s*x*s)
- d = -2*x*s*y*s
FANN_GAUSSIAN_SYMMETRIC | Symmetric gaussian activation function. |
- -1 when x = -inf, 1 when x = 0 and -1 when x = inf
- span: -1 < y < 1
- y = exp(-x*s*x*s)*2-1
- d = -2*x*s*(y+1)*s
FANN_ELLIOT | Fast (sigmoid like) activation function defined by David Elliott |
- span: 0 < y < 1
- y = ((x*s) / 2) / (1 + |x*s|) + 0.5
- d = s*1/(2*(1+|x*s|)*(1+|x*s|))
FANN_ELLIOT_SYMMETRIC | Fast (symmetric sigmoid like) activation function defined by David Elliott |
- span: -1 < y < 1
- y = (x*s) / (1 + |x*s|)
- d = s*1/((1+|x*s|)*(1+|x*s|))
FANN_LINEAR_PIECE | Bounded linear activation function. |
- span: 0 < y < 1
- y = x*s, d = 1*s
FANN_LINEAR_PIECE_SYMMETRIC | Bounded linear activation function. |
- span: -1 < y < 1
- y = x*s, d = 1*s
FANN_SIN_SYMMETRIC | Periodic sine activation function. |
- span: -1 <= y <= 1
- y = sin(x*s)
- d = s*cos(x*s)
FANN_COS_SYMMETRIC | Periodic cosine activation function. |
- span: -1 <= y <= 1
- y = cos(x*s)
- d = s*-sin(x*s)
See also neural_net::set_activation_function_hidden, neural_net::set_activation_function_output
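A sketch of both granularities, for a layer group and for a single neuron (layer and neuron numbers count the input layer as layer 0; assumes FANN >= 2.1 and is not compiled against a particular version):

```cpp
#include "doublefann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1); // 2 inputs, 3 hidden, 1 output

    // Whole groups of neurons at once...
    net.set_activation_function_hidden(FANN::SIGMOID_SYMMETRIC);
    net.set_activation_function_output(FANN::LINEAR);

    // ...or a single neuron: neuron 0 in layer 1 (the first hidden layer).
    net.set_activation_function(FANN::GAUSSIAN, 1, 0);
    net.set_activation_steepness(0.25, 1, 0);
    return 0;
}
```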
connection
Describes a connection between two neurons and its weight.
from_neuron | Unique number used to identify the source neuron |
to_neuron | Unique number used to identify the destination neuron |
weight | The numerical value of the weight |
See also neural_net::get_connection_array, neural_net::set_weight_array. This structure appears in FANN >= 2.1.0
callback_type
This callback function can be called during training when using neural_net::train_on_data, neural_net::train_on_file or neural_net::cascadetrain_on_data.

typedef int (*callback_type) (neural_net &net, training_data &train,
    unsigned int max_epochs, unsigned int epochs_between_reports,
    float desired_error, unsigned int epochs, void *user_data);
The callback can be set by using neural_net::set_callback and is very useful for doing custom things during training. It is recommended to use this function when implementing custom training procedures, or when visualizing the training in a GUI etc. The parameters which the callback function takes are the parameters given to neural_net::train_on_data, plus an epochs parameter which tells how many epochs the training has taken so far. The callback function should return an integer; if it returns -1, the training will terminate.

Example of a callback function that prints information to cout:

int print_callback(FANN::neural_net &net, FANN::training_data &train,
    unsigned int max_epochs, unsigned int epochs_between_reports,
    float desired_error, unsigned int epochs, void *user_data)
{
    cout << "Epochs " << setw(8) << epochs << ". "
         << "Current Error: " << left << net.get_MSE() << right << endl;
    return 0;
}
See also neural_net::set_callback, fann_callback_type
training_data
Encapsulation of a training data set struct fann_train_data and associated C API functions.
training_data
training_data(const training_data &data)
Copy constructor constructs a copy of the training data. Corresponds to the C API fann_duplicate_train_data function.
~training_data
#ifdef USE_VIRTUAL_DESTRUCTOR
virtual
#endif
~training_data()
Provides automatic cleanup of data. Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual. See also destroy
destroy
Destructs the training data. Called automatically by the destructor. See also ~training_data
read_train_from_file
bool read_train_from_file(const std::string &filename)
Reads a file that stores training data. The file must be formatted like:

num_train_data num_input num_output
inputdata separated by space
outputdata separated by space
.
.
.
inputdata separated by space
outputdata separated by space
See also neural_net::train_on_data, save_train, fann_read_train_from_file. This function appears in FANN >= 1.0.0
save_train_to_fixed
bool save_train_to_fixed(const std::string &filename, unsigned int decimal_point)
Saves the training structure to a fixed point data file. This function is very useful for testing the quality of a fixed point network.

Return
The function returns true on success and false on failure.

See also save_train, fann_save_train_to_fixed. This function appears in FANN >= 1.0.0.
shuffle_train_data
void shuffle_train_data()
Shuffles training data, randomizing the order. This is recommended for incremental training, while it has no influence during batch training. This function appears in FANN >= 1.1.0.
merge_train_data
void merge_train_data(const training_data &data)
Merges the data into the data contained in the training_data. This function appears in FANN >= 1.1.0.
set_train_data
void set_train_data(unsigned int num_data, unsigned int num_input, fann_type **input, unsigned int num_output, fann_type **output)
Set the training data to the input and output data provided. A copy of the data is made, so there are no restrictions on the allocation of the input/output data, and the caller is responsible for the deallocation of the data pointed to by input and output.

Parameters
num_data | The number of training data |
num_input | The number of inputs per training data |
num_output | The number of outputs per training data |
input | The set of inputs (a pointer to an array of pointers to arrays of floating point data) |
output | The set of desired outputs (a pointer to an array of pointers to arrays of floating point data) |
See also get_input, get_output
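A minimal sketch of set_train_data with two XOR-style patterns (assumes FANN >= 2.1; not compiled against a particular version):

```cpp
#include "doublefann.h"
#include "fann_cpp.h"

int main()
{
    // set_train_data copies the arrays, so stack storage is fine here.
    fann_type in0[] = {0, 0}, in1[] = {0, 1};
    fann_type out0[] = {0}, out1[] = {1};
    fann_type *inputs[] = {in0, in1};
    fann_type *outputs[] = {out0, out1};

    FANN::training_data data;
    data.set_train_data(2, 2, inputs, 1, outputs);
    // data now owns its own copy; the local arrays may go out of scope.
    return 0;
}
```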
create_train_from_callback
void create_train_from_callback(unsigned int num_data, unsigned int num_input, unsigned int num_output, void (FANN_API *user_function)(unsigned int, unsigned int, unsigned int, fann_type *, fann_type *))
Creates the training data struct from a user supplied function. As the training data are numerable (data 1, data 2, ...), the user must write a function that receives the number of the training data set (input, output) and returns the set.

Parameters
num_data | The number of training data |
num_input | The number of inputs per training data |
num_output | The number of outputs per training data |
user_function | The user supplied function |

Parameters for the user function
num | The number of the training data set |
num_input | The number of inputs per training data |
num_output | The number of outputs per training data |
input | The set of inputs |
output | The set of desired outputs |
See also training_data::read_train_from_file, neural_net::train_on_data, fann_create_train_from_callback. This function appears in FANN >= 2.1.0
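A sketch of a generating callback that produces a four-pattern XOR set on demand (assumes FANN >= 2.1; FANN_API expands to nothing on non-Windows platforms):

```cpp
#include "doublefann.h"
#include "fann_cpp.h"

// Fills in pattern number 'num' of a four-pattern XOR data set.
void FANN_API make_xor(unsigned int num, unsigned int num_input,
                       unsigned int num_output,
                       fann_type *input, fann_type *output)
{
    input[0] = (fann_type)(num & 1);
    input[1] = (fann_type)((num >> 1) & 1);
    output[0] = (input[0] != input[1]) ? 1 : 0;
}

int main()
{
    FANN::training_data data;
    data.create_train_from_callback(4, 2, 1, make_xor);
    return 0;
}
```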
subset_train_data
void subset_train_data(unsigned int pos, unsigned int length)
Changes the training data to a subset, starting at position pos and length elements forward. Use the copy constructor to work on a new copy of the training data.

FANN::training_data full_data_set;
full_data_set.read_train_from_file("somefile.train");
FANN::training_data *small_data_set = new FANN::training_data(full_data_set);
small_data_set->subset_train_data(0, 2); // Only use first two
// Use small_data_set ...
delete small_data_set;
See also fann_subset_train_data. This function appears in FANN >= 2.0.0.
neural_net
Encapsulation of a neural network struct fann and associated C API functions.
~neural_net

#ifdef USE_VIRTUAL_DESTRUCTOR virtual #endif ~neural_net()

Provides automatic cleanup of data. Define USE_VIRTUAL_DESTRUCTOR if you need the destructor to be virtual. See also: destroy
destroy

void destroy()

Destructs the entire network. Called automatically by the destructor. See also: ~neural_net
create_standard

bool create_standard(unsigned int num_layers, ...)

Creates a standard fully connected backpropagation neural network. There will be a bias neuron in each layer (except the output layer), and this bias neuron will be connected to all neurons in the next layer. When running the network, the bias nodes always emit 1.

Parameters:
num_layers - The total number of layers, including the input and the output layer.
... - Integer values determining the number of neurons in each layer, starting with the input layer and ending with the output layer.
Returns: Boolean true if the network was created, false otherwise.

Example:
const unsigned int num_layers = 3;
const unsigned int num_input = 2;
const unsigned int num_hidden = 3;
const unsigned int num_output = 1;
FANN::neural_net net; net.create_standard(num_layers, num_input, num_hidden, num_output);
See also: create_standard_array, create_sparse, create_shortcut, fann_create_standard_array. This function appears in FANN >= 2.0.0.
create_sparse

bool create_sparse(float connection_rate, unsigned int num_layers, ...)

Creates a standard backpropagation neural network which is not fully connected.

Parameters:
connection_rate - The connection rate controls how many connections there will be in the network. If the connection rate is set to 1, the network will be fully connected; if it is set to 0.5, only half of the connections will be set. A connection rate of 1 yields the same result as fann_create_standard.
num_layers - The total number of layers, including the input and the output layer.
... - Integer values determining the number of neurons in each layer, starting with the input layer and ending with the output layer.
Returns: Boolean true if the network was created, false otherwise.

See also: create_standard, create_sparse_array, create_shortcut, fann_create_sparse. This function appears in FANN >= 2.0.0.
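A minimal sketch of creating a sparse network (the layer sizes and the 0.5 connection rate are illustrative, not prescribed by the library):

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    // 3 layers (2 inputs, 4 hidden, 1 output), with only half
    // of the possible connections actually created.
    if (!net.create_sparse(0.5f, 3, 2, 4, 1))
        return 1;
    return 0;
}
```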
create_shortcut

bool create_shortcut(unsigned int num_layers, ...)

Creates a standard backpropagation neural network which is not fully connected and which also has shortcut connections. Shortcut connections are connections that skip layers. A fully connected network with shortcut connections is a network where all neurons are connected to all neurons in later layers, including direct connections from the input layer to the output layer. See create_standard for a description of the parameters. See also: create_standard, create_sparse, create_shortcut_array, fann_create_shortcut. This function appears in FANN >= 2.0.0.
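A minimal sketch of creating a shortcut network (the 2-input, 1-output sizes are illustrative):

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    // A 2-input, 1-output shortcut network with no hidden layers,
    // the usual starting point for cascade training.
    net.create_shortcut(2, 2, 1);
    return 0;
}
```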
run

fann_type* run(fann_type *input)

Runs input through the neural network, returning an array of outputs whose length equals the number of neurons in the output layer. See also: test, fann_run. This function appears in FANN >= 1.0.0.
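A minimal sketch of executing a network (the layer sizes are illustrative; the returned pointer refers to an array managed by the network):

```cpp
#include "floatfann.h"
#include "fann_cpp.h"
#include <cstdio>

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    fann_type input[2] = {1.0f, 0.0f};
    // run returns a pointer to an array holding one value per
    // output neuron; here the output layer has a single neuron.
    fann_type *output = net.run(input);
    std::printf("output: %f\n", output[0]);
    return 0;
}
```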
randomize_weights

void randomize_weights(fann_type min_weight, fann_type max_weight)

Gives each connection a random weight between min_weight and max_weight. Initially, the weights are random values between -0.1 and 0.1. See also: init_weights, fann_randomize_weights. This function appears in FANN >= 1.0.0.
init_weights

void init_weights(const training_data &data)

Initializes the weights using Widrow and Nguyen's algorithm. This function behaves similarly to fann_randomize_weights. It uses the algorithm developed by Derrick Nguyen and Bernard Widrow to set the weights in such a way as to speed up training. This technique is not always successful, and in some cases can be less efficient than a purely random initialization. The algorithm requires access to the range of the input data (i.e. the largest and smallest input), and therefore accepts a second argument, data, which is the training data that will be used to train the network. See also: randomize_weights, training_data::read_train_from_file, fann_init_weights. This function appears in FANN >= 1.1.0.
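A minimal sketch of Widrow-Nguyen initialization; the training file name is a placeholder:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    FANN::training_data data;
    // "xor.data" stands in for any file in the FANN training data format.
    if (data.read_train_from_file("xor.data"))
        net.init_weights(data);  // uses the input range from data
    return 0;
}
```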
print_connections

void print_connections()

Prints the connections of the ANN in a compact matrix, for easy viewing of its internals. The output from fann_print_connections on a small (2 2 1) network trained on the xor problem:

Layer / Neuron 012345
L   1 / N    3 BBa...
L   1 / N    4 BBA...
L   1 / N    5 ......
L   2 / N    6 ...BBA
L   2 / N    7 ......

This network has five real neurons and two bias neurons, giving a total of seven neurons, named from 0 to 6. The connections between these neurons can be seen in the matrix. "." marks a place where there is no connection, while a character indicates how strong the connection is on a scale from a to z. The two real neurons in the hidden layer (neurons 3 and 4 in layer 1) have connections from the three neurons in the previous layer, as is visible in the first two lines. The output neuron (6) has connections from the three neurons in the hidden layer (3-5), as is visible in the fourth line. To simplify the matrix, output neurons are not shown as neurons that connections can come from, and input and bias neurons are not shown as neurons that connections can go to. This function appears in FANN >= 1.2.0.
create_from_file

bool create_from_file(const std::string &configuration_file)

Constructs a backpropagation neural network from a configuration file which has been saved by save. See also: save, save_to_fixed, fann_create_from_file. This function appears in FANN >= 1.0.0.
save

bool save(const std::string &configuration_file)

Saves the entire network to a configuration file. The configuration file contains all information about the neural network and enables create_from_file to create an exact copy of the neural network and all of the parameters associated with it. The two parameters set with set_callback and set_error_log are NOT saved to the file, because they cannot safely be ported to a different location. Temporary parameters generated during training, like get_MSE, are not saved either.

Returns: Boolean true on success, false on failure.

See also: create_from_file, save_to_fixed, fann_save. This function appears in FANN >= 1.0.0.
save_to_fixed

int save_to_fixed(const std::string &configuration_file)

Saves the entire network to a configuration file, but in fixed point format no matter which format it is currently in. This is useful for training a network in floating point and then later executing it in fixed point. The function returns the bit position of the fix point, which can be used to find out how accurate the fixed point network will be. A high value indicates high precision, and a low value indicates low precision. A negative value indicates very low precision and a very strong possibility of overflow (the actual fix point will be set to 0, since a negative fix point does not make sense). Generally, a fix point lower than 6 is bad and should be avoided. The best way to avoid this is to have fewer connections to each neuron, or just fewer neurons in each layer. The fixed point use of this network is only intended for machines that have no floating point processor, like an iPAQ. On normal computers the floating point version is actually faster. See also: create_from_file, save, fann_save_to_fixed. This function appears in FANN >= 1.0.0.
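A minimal sketch of saving in both formats; the file names are placeholders:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"
#include <cstdio>

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    net.save("net_float.net");  // floating point configuration file

    // save_to_fixed returns the bit position of the fix point;
    // values below 6 suggest the fixed point network will be too imprecise.
    int decimal_point = net.save_to_fixed("net_fixed.net");
    if (decimal_point < 6)
        std::printf("warning: low fixed point precision (%d)\n", decimal_point);
    return 0;
}
```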
train

void train(fann_type *input, fann_type *desired_output)

Trains one iteration with a set of inputs and a set of desired outputs. This training is always incremental training (see FANN::training_algorithm_enum), since only one pattern is presented.

Parameters:
input - An array of inputs. This array must be exactly get_num_input long.
desired_output - An array of desired outputs. This array must be exactly get_num_output long.
See also: train_on_data, train_epoch, fann_train. This function appears in FANN >= 1.0.0.
train_epoch

float train_epoch(const training_data &data)

Trains one epoch with the training data stored in data. One epoch is where all of the training data is considered exactly once. This function returns the MSE as it is calculated either before or during the actual training. This is not the actual MSE after the training epoch, but since calculating that would require going through the entire training set once more, this value is more than adequate to use during training. The training algorithm used by this function is chosen by set_training_algorithm. See also: train_on_data, test_data, fann_train_epoch. This function appears in FANN >= 1.2.0.
train_on_data

void train_on_data(const training_data &data, unsigned int max_epochs, unsigned int epochs_between_reports, float desired_error)

Trains on an entire dataset for a period of time. This training uses the training algorithm chosen by set_training_algorithm and the parameters set for these training algorithms.

Parameters:
data - The data which should be used during training.
max_epochs - The maximum number of epochs the training should continue.
epochs_between_reports - The number of epochs between printing a status report to stdout. A value of zero means no reports should be printed.
desired_error - The desired get_MSE or get_bit_fail, depending on which stop function is chosen by set_train_stop_function.
Instead of printing out reports every epochs_between_reports, a callback function can be called (see set_callback). See also: train_on_file, train_epoch, fann_train_on_data. This function appears in FANN >= 1.0.0.
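A minimal end-to-end training sketch; the training file name and all numeric parameters are illustrative:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    FANN::training_data data;
    if (!data.read_train_from_file("xor.data"))  // placeholder file name
        return 1;

    // Train for at most 1000 epochs, report every 100 epochs,
    // and stop early once the MSE drops below 0.001.
    net.train_on_data(data, 1000, 100, 0.001f);
    net.save("xor_trained.net");
    return 0;
}
```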
train_on_file

void train_on_file(const std::string &filename, unsigned int max_epochs, unsigned int epochs_between_reports, float desired_error)

Does the same as train_on_data, but reads the training data directly from a file. See also: train_on_data, fann_train_on_file. This function appears in FANN >= 1.0.0.
test

fann_type* test(fann_type *input, fann_type *desired_output)

Tests with a set of inputs and a set of desired outputs. This operation updates the mean square error, but does not change the network in any way. See also: test_data, train, fann_test. This function appears in FANN >= 1.0.0.
test_data

float test_data(const training_data &data)

Test a set of training data and calculates the MSE for the training data. This function updates the MSE and the bit fail values. See also: test, get_MSE, get_bit_fail, fann_test_data. This function appears in FANN >= 1.2.0.
get_MSE

float get_MSE()

Reads the mean square error from the network. This value is calculated during training or testing, and can therefore sometimes be a bit off if the weights have been changed since the last calculation of the value. See also: test_data, fann_get_MSE. This function appears in FANN >= 1.1.0.
reset_MSE

void reset_MSE()

Resets the mean square error from the network. This function also resets the number of bits that fail. See also: get_MSE, get_bit_fail_limit, fann_reset_MSE. This function appears in FANN >= 1.1.0.
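A minimal sketch of evaluating a trained network; both file names are placeholders:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"
#include <cstdio>

int main()
{
    FANN::neural_net net;
    if (!net.create_from_file("xor_trained.net"))  // placeholder file name
        return 1;

    FANN::training_data data;
    if (!data.read_train_from_file("xor.data"))    // placeholder file name
        return 1;

    net.reset_MSE();                 // clear MSE and bit fail counters
    float mse = net.test_data(data); // updates MSE without changing weights
    std::printf("MSE: %f, bit fails: %u\n", mse, net.get_bit_fail());
    return 0;
}
```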
set_callback

void set_callback(callback_type callback, void *user_data)

Sets the callback function for use during training. The user_data is passed to the callback; it can point to arbitrary data that the callback might require, and can be NULL if it is not used. See FANN::callback_type for more information about the callback function. The default callback function simply prints out some status information. This function appears in FANN >= 2.0.0.
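A minimal sketch of a training callback, assuming the FANN::callback_type signature from fann_cpp.h (check your header; returning -1 from the callback stops training):

```cpp
#include "floatfann.h"
#include "fann_cpp.h"
#include <cstdio>

// Intended to match FANN::callback_type.
int print_callback(FANN::neural_net &net, FANN::training_data &train,
                   unsigned int max_epochs, unsigned int epochs_between_reports,
                   float desired_error, unsigned int epochs, void *user_data)
{
    std::printf("Epoch %u of %u, MSE %f\n", epochs, max_epochs, net.get_MSE());
    return 0;  // continue training
}

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);
    net.set_callback(print_callback, NULL);
    return 0;
}
```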
print_parameters

void print_parameters()

Prints all of the parameters and options of the neural network. See also: fann_print_parameters. This function appears in FANN >= 1.2.0.
set_training_algorithm

void set_training_algorithm(training_algorithm_enum training_algorithm)

Set the training algorithm. More info is available in get_training_algorithm. This function appears in FANN >= 1.0.0.
get_learning_rate

float get_learning_rate()

Return the learning rate. The learning rate is used to determine how aggressive training should be for some of the training algorithms (FANN::TRAIN_INCREMENTAL, FANN::TRAIN_BATCH, FANN::TRAIN_QUICKPROP). Do however note that it is not used in FANN::TRAIN_RPROP. The default learning rate is 0.7. See also: set_learning_rate, set_training_algorithm, fann_get_learning_rate. This function appears in FANN >= 1.0.0.
set_learning_rate

void set_learning_rate(float learning_rate)

Set the learning rate. More info is available in get_learning_rate. This function appears in FANN >= 1.0.0.
get_activation_steepness

fann_type get_activation_steepness(int layer, int neuron)

Get the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0. It is not possible to get the activation steepness for the neurons in the input layer. The steepness of an activation function determines how fast the activation function goes from the minimum to the maximum. A high value also gives a more aggressive training. When training neural networks where the output values should be at the extremes (usually 0 and 1, depending on the activation function), a steep activation function can be used (e.g. 1.0). The default activation steepness is 0.5.

Returns: The activation steepness for the neuron, or -1 if the neuron is not defined in the neural network.

See also: set_activation_steepness_layer, set_activation_steepness_hidden, set_activation_steepness_output, set_activation_function, set_activation_steepness, fann_get_activation_steepness. This function appears in FANN >= 2.1.0.
set_activation_steepness

void set_activation_steepness(fann_type steepness, int layer, int neuron)

Set the activation steepness for neuron number neuron in layer number layer, counting the input layer as layer 0. It is not possible to set the activation steepness for the neurons in the input layer. The steepness of an activation function determines how fast the activation function goes from the minimum to the maximum. A high value also gives a more aggressive training. When training neural networks where the output values should be at the extremes (usually 0 and 1, depending on the activation function), a steep activation function can be used (e.g. 1.0). The default activation steepness is 0.5. See also: set_activation_steepness_layer, set_activation_steepness_hidden, set_activation_steepness_output, set_activation_function, get_activation_steepness, fann_set_activation_steepness. This function appears in FANN >= 2.0.0.
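A minimal sketch of adjusting steepness for one neuron (the layer/neuron indices refer to the illustrative 2-3-1 network created here):

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    // Make the single output neuron (layer 2, neuron 0) steeper,
    // pushing its output towards the extremes.
    net.set_activation_steepness(1.0, 2, 0);

    fann_type s = net.get_activation_steepness(2, 0);
    (void)s;
    return 0;
}
```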
get_quickprop_decay

float get_quickprop_decay()

The decay is a small negative number which is the factor by which the weights are made smaller in each iteration during quickprop training. It is used to make sure that the weights do not become too high during training. The default decay is -0.0001. See also: set_quickprop_decay, fann_get_quickprop_decay. This function appears in FANN >= 1.2.0.
get_quickprop_mu

float get_quickprop_mu()

The mu factor is used to increase and decrease the step-size during quickprop training. The mu factor should always be above 1, since it would otherwise decrease the step-size when it was supposed to increase it. The default mu factor is 1.75. See also: set_quickprop_mu, fann_get_quickprop_mu. This function appears in FANN >= 1.2.0.
get_rprop_increase_factor

float get_rprop_increase_factor()

The increase factor is a value larger than 1, which is used to increase the step-size during RPROP training. The default increase factor is 1.2. See also: set_rprop_increase_factor, fann_get_rprop_increase_factor. This function appears in FANN >= 1.2.0.
get_rprop_decrease_factor

float get_rprop_decrease_factor()

The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training. The default decrease factor is 0.5. See also: set_rprop_decrease_factor, fann_get_rprop_decrease_factor. This function appears in FANN >= 1.2.0.
set_rprop_decrease_factor

void set_rprop_decrease_factor(float rprop_decrease_factor)

The decrease factor is a value smaller than 1, which is used to decrease the step-size during RPROP training. See also: get_rprop_decrease_factor, fann_set_rprop_decrease_factor. This function appears in FANN >= 1.2.0.
get_rprop_delta_min

float get_rprop_delta_min()

The minimum step-size is a small positive number determining how small the minimum step-size may be. The default delta min is 0.0. See also: set_rprop_delta_min, fann_get_rprop_delta_min. This function appears in FANN >= 1.2.0.
set_rprop_delta_min

void set_rprop_delta_min(float rprop_delta_min)

The minimum step-size is a small positive number determining how small the minimum step-size may be. See also: get_rprop_delta_min, fann_set_rprop_delta_min. This function appears in FANN >= 1.2.0.
get_num_input

unsigned int get_num_input()

Get the number of input neurons. This function appears in FANN >= 1.0.0.
get_num_output

unsigned int get_num_output()

Get the number of output neurons. This function appears in FANN >= 1.0.0.
get_total_neurons

unsigned int get_total_neurons()

Get the total number of neurons in the entire network. This number also includes the bias neurons, so a 2-4-2 network has 2+4+2+2(bias) = 10 neurons. This function appears in FANN >= 1.0.0.
get_total_connections

unsigned int get_total_connections()

Get the total number of connections in the entire network. This function appears in FANN >= 1.0.0.
get_multiplier

unsigned int get_multiplier()

Returns the multiplier that fixed point data is multiplied with. This function is only available when the ANN is in fixed point mode. The multiplier is used to convert between floating point and fixed point notation: a floating point number is multiplied by the multiplier in order to get the fixed point number, and vice versa. The multiplier is described in greater detail in the tutorial <Fixed Point Usage>. See also: <Fixed Point Usage>, get_decimal_point, save_to_fixed, training_data::save_train_to_fixed, fann_get_multiplier. This function appears in FANN >= 1.0.0.
get_network_type

network_type_enum get_network_type()

Get the type of neural network it was created as.

Returns: The neural network type from enum FANN::network_type_enum.

See also: fann_get_network_type. This function appears in FANN >= 2.1.0.
get_connection_rate

float get_connection_rate()

Get the connection rate used when the network was created.

Returns: The connection rate.

See also: fann_get_connection_rate. This function appears in FANN >= 2.1.0.
get_num_layers

unsigned int get_num_layers()

Get the number of layers in the network.

Returns: The number of layers in the neural network.

See also: fann_get_num_layers. This function appears in FANN >= 2.1.0.
get_layer_array

void get_layer_array(unsigned int *layers)

Get the number of neurons in each layer in the network. Bias is not included, so the layers match the create methods. The layers array must be preallocated to at least sizeof(unsigned int) * get_num_layers() bytes. See also: fann_get_layer_array. This function appears in FANN >= 2.1.0.
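A minimal sketch of querying the layer sizes, using a std::vector to satisfy the preallocation requirement:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"
#include <cstdio>
#include <vector>

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    // Preallocate one unsigned int per layer, then fill it.
    std::vector<unsigned int> layers(net.get_num_layers());
    net.get_layer_array(&layers[0]);

    for (unsigned int i = 0; i < layers.size(); ++i)
        std::printf("layer %u: %u neurons\n", i, layers[i]);
    return 0;
}
```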
get_bias_array

void get_bias_array(unsigned int *bias)

Get the number of bias neurons in each layer in the network. The bias array must be preallocated to at least sizeof(unsigned int) * get_num_layers() bytes. See also: fann_get_bias_array. This function appears in FANN >= 2.1.0.
get_connection_array

void get_connection_array(connection *connections)

Get the connections in the network. The connections array must be preallocated to at least sizeof(struct fann_connection) * get_total_connections() bytes. See also: fann_get_connection_array. This function appears in FANN >= 2.1.0.
set_weight_array

void set_weight_array(connection *connections, unsigned int num_connections)

Set connections in the network. Only the weights can be changed; connections and weights are ignored if they do not already exist in the network. The array must have a size of sizeof(struct fann_connection) * num_connections. See also: fann_set_weight_array. This function appears in FANN >= 2.1.0.
set_weight

void set_weight(unsigned int from_neuron, unsigned int to_neuron, fann_type weight)

Set a connection in the network. Only the weight can be changed; the connection/weight is ignored if it does not already exist in the network. See also: fann_set_weight. This function appears in FANN >= 2.1.0.
get_learning_momentum

float get_learning_momentum()

Get the learning momentum. The learning momentum can be used to speed up FANN::TRAIN_INCREMENTAL training. Too high a momentum will, however, not benefit training. Setting momentum to 0 is the same as not using the momentum parameter. The recommended value of this parameter is between 0.0 and 1.0. The default momentum is 0. See also: set_learning_momentum, set_training_algorithm. This function appears in FANN >= 2.0.0.
set_learning_momentum

void set_learning_momentum(float learning_momentum)

Set the learning momentum. More info is available in get_learning_momentum. This function appears in FANN >= 2.0.0.
set_train_stop_function

void set_train_stop_function(stop_function_enum train_stop_function)

Set the stop function used during training. The stop function is described further in FANN::stop_function_enum. See also: get_train_stop_function. This function appears in FANN >= 2.0.0.
get_bit_fail_limit

fann_type get_bit_fail_limit()

Returns the bit fail limit used during training. The bit fail limit is used during training when the FANN::stop_function_enum is set to FANN_STOPFUNC_BIT. The limit is the maximum accepted difference between the desired output and the actual output during training. Each output that diverges more than this limit is counted as an error bit. This difference is divided by two when dealing with symmetric activation functions, so that symmetric and non-symmetric activation functions can use the same limit. The default bit fail limit is 0.35. See also: set_bit_fail_limit. This function appears in FANN >= 2.0.0.
set_bit_fail_limit

void set_bit_fail_limit(fann_type bit_fail_limit)

Set the bit fail limit used during training. See also: get_bit_fail_limit. This function appears in FANN >= 2.0.0.
get_bit_fail

unsigned int get_bit_fail()

The number of fail bits, i.e. the number of output neurons which differ by more than the bit fail limit (see get_bit_fail_limit, set_bit_fail_limit). The bits are counted over all of the training data, so this number can be higher than the number of training patterns. This value is reset by reset_MSE and updated by all the same functions which also update the MSE value (e.g. test_data, train_epoch). See also: FANN::stop_function_enum, get_MSE. This function appears in FANN >= 2.0.0.
cascadetrain_on_data

void cascadetrain_on_data(const training_data &data, unsigned int max_neurons, unsigned int neurons_between_reports, float desired_error)

Trains on an entire dataset for a period of time using the Cascade2 training algorithm. This algorithm adds neurons to the neural network while training, which means that it needs to start with an ANN without any hidden layers. The neural network should also use shortcut connections, so create_shortcut should be used to create the ANN, like this:

net.create_shortcut(2, train_data.num_input_train_data(), train_data.num_output_train_data());

This training uses the parameters set using the set_cascade_... functions, but it also uses another training algorithm as its internal training algorithm. This algorithm can be set to either FANN::TRAIN_RPROP or FANN::TRAIN_QUICKPROP by set_training_algorithm, and the parameters set for these training algorithms will also affect the cascade training.

Parameters:
data - The data which should be used during training.
max_neurons - The maximum number of neurons to be added to the neural network.
neurons_between_reports - The number of neurons between printing a status report to stdout. A value of zero means no reports should be printed.
desired_error - The desired get_MSE or get_bit_fail, depending on which stop function is chosen by set_train_stop_function.

Instead of printing out reports every neurons_between_reports, a callback function can be called (see set_callback). See also: train_on_data, cascadetrain_on_file, fann_cascadetrain_on_data. This function appears in FANN >= 2.0.0.
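A minimal cascade training sketch; the training file name and the numeric parameters are illustrative:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::training_data data;
    if (!data.read_train_from_file("cascade.data"))  // placeholder file name
        return 1;

    // Cascade training starts from a shortcut network without hidden layers.
    FANN::neural_net net;
    net.create_shortcut(2, data.num_input_train_data(),
                        data.num_output_train_data());

    // Add at most 30 neurons, report after each one,
    // and stop once the MSE drops below 0.001.
    net.cascadetrain_on_data(data, 30, 1, 0.001f);
    return 0;
}
```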
get_cascade_weight_multiplier

fann_type get_cascade_weight_multiplier()

The weight multiplier is a parameter which is used to multiply the weights from the candidate neuron before adding the neuron to the neural network. This parameter is usually between 0 and 1, and is used to make the training a bit less aggressive. The default weight multiplier is 0.4. See also: set_cascade_weight_multiplier, fann_get_cascade_weight_multiplier. This function appears in FANN >= 2.0.0.
get_cascade_candidate_limit

fann_type get_cascade_candidate_limit()

The candidate limit is a limit on how much the candidate neuron may be trained, i.e. a limit on the proportion between the MSE and the candidate score. Set this to a lower value to avoid overfitting, and to a higher value if overfitting is not a problem. The default candidate limit is 1000.0. See also: set_cascade_candidate_limit, fann_get_cascade_candidate_limit. This function appears in FANN >= 2.0.0.
get_cascade_max_out_epochs

unsigned int get_cascade_max_out_epochs()

The maximum out epochs determines the maximum number of epochs the output connections may be trained after adding a new candidate neuron. The default max out epochs is 150. See also: set_cascade_max_out_epochs, fann_get_cascade_max_out_epochs. This function appears in FANN >= 2.0.0.
get_cascade_max_cand_epochs

unsigned int get_cascade_max_cand_epochs()

The maximum candidate epochs determines the maximum number of epochs the input connections to the candidates may be trained before adding a new candidate neuron. The default max candidate epochs is 150. See also: set_cascade_max_cand_epochs, fann_get_cascade_max_cand_epochs. This function appears in FANN >= 2.0.0.
get_cascade_num_candidate_groups

unsigned int get_cascade_num_candidate_groups()

The number of candidate groups is the number of groups of identical candidates which will be used during training. This number can be used to have more candidates without having to define new parameters for the candidates. See get_cascade_num_candidates for a description of which candidate neurons will be generated by this parameter. The default number of candidate groups is 2. See also: set_cascade_num_candidate_groups, fann_get_cascade_num_candidate_groups. This function appears in FANN >= 2.0.0.
scale_train

void scale_train(training_data &data)

Scale input and output data based on previously calculated parameters. See also: descale_train, fann_scale_train. This function appears in FANN >= 2.1.0.
descale_train

void descale_train(training_data &data)

Descale input and output data based on previously calculated parameters. See also: scale_train, fann_descale_train. This function appears in FANN >= 2.1.0.
set_input_scaling_params

bool set_input_scaling_params(const training_data &data, float new_input_min, float new_input_max)

Calculate scaling parameters for future use based on training data. See also: set_output_scaling_params, fann_set_input_scaling_params. This function appears in FANN >= 2.1.0.
set_output_scaling_params

bool set_output_scaling_params(const training_data &data, float new_output_min, float new_output_max)

Calculate scaling parameters for future use based on training data. See also: set_input_scaling_params, fann_set_output_scaling_params. This function appears in FANN >= 2.1.0.
set_scaling_params

bool set_scaling_params(const training_data &data, float new_input_min, float new_input_max, float new_output_min, float new_output_max)

Calculate scaling parameters for future use based on training data. See also: clear_scaling_params, fann_set_scaling_params. This function appears in FANN >= 2.1.0.
scale_input

void scale_input(fann_type *input_vector)

Scale data in the input vector before feeding it to the ANN, based on previously calculated parameters. See also: descale_input, scale_output, fann_scale_input. This function appears in FANN >= 2.1.0.
scale_output

void scale_output(fann_type *output_vector)

Scale data in the output vector before feeding it to the ANN, based on previously calculated parameters. See also: descale_output, scale_input, fann_scale_output. This function appears in FANN >= 2.1.0.
descale_input

void descale_input(fann_type *input_vector)

Scale data in the input vector after getting it from the ANN, based on previously calculated parameters. See also: scale_input, descale_output, fann_descale_input. This function appears in FANN >= 2.1.0.
descale_output

void descale_output(fann_type *output_vector)

Scale data in the output vector after getting it from the ANN, based on previously calculated parameters. See also: scale_output, descale_input, fann_descale_output. This function appears in FANN >= 2.1.0.
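A minimal sketch of the scaling workflow: compute parameters from the training data, scale the data before training, then scale/descale at execution time. The file name, ranges, and raw input values are illustrative:

```cpp
#include "floatfann.h"
#include "fann_cpp.h"

int main()
{
    FANN::neural_net net;
    net.create_standard(3, 2, 3, 1);

    FANN::training_data data;
    if (!data.read_train_from_file("raw.data"))  // placeholder file name
        return 1;

    // Compute parameters mapping inputs and outputs into [-1, 1],
    // then scale the training data and train on it.
    net.set_scaling_params(data, -1.0f, 1.0f, -1.0f, 1.0f);
    net.scale_train(data);
    net.train_on_data(data, 1000, 0, 0.001f);

    // At execution time, scale each raw input and descale the network output.
    fann_type input[2] = {42.0f, 7.0f};  // illustrative raw values
    net.scale_input(input);
    fann_type *output = net.run(input);
    net.descale_output(output);
    return 0;
}
```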
set_error_log

void set_error_log(FILE *log_file)

Change where errors are logged to. If log_file is NULL, no errors will be printed. If the neural_net is empty, i.e. ann is NULL, the default log will be set. The default log is the log used when creating a neural_net; it will also be the default for all new structs that are created. The default behavior is to log errors to stderr. See also: struct fann_error, fann_set_error_log. This function appears in FANN >= 1.1.0.
reset_errno

Resets the last error number. This function appears in FANN >= 1.1.0.
reset_errstr

Resets the last error string. This function appears in FANN >= 1.1.0.
print_error

Prints the last error to stderr. This function appears in FANN >= 1.1.0.