15.27. When NeuralTools Gives Unsatisfactory Test Results
Applies to: NeuralTools 6.x/7.x
NeuralTools finished training and testing the net, but I'm not happy with the test results. Is there anything I can do to get a better network, so that I'll have higher confidence in its predictions?
Yes, there are several possible improvements. Choose from the suggestions below, based on your situation.
- Gather more cases. The more cases you have, assuming they're not duplicates, the better the network NeuralTools can construct.
- Train longer. If you specified a fixed amount of time for training and NeuralTools didn't finish within that limit, consider increasing it.
- Try Best Net Search. If you've specified a particular type of network, a different type might provide better results, and the extra time a Best Net Search takes may reward you with a better network. (In the training dialog, on the Net Configuration tab, change Type of Net to "Best Net Search" and re-train.) The first sketch after this list illustrates the idea.
- Exclude some variables. That can make training run faster and may provide better results; see Eliminate Variables Based on Impact Analysis? The second sketch after this list shows one way to think about it.
- Change the percentage of training cases that are reserved for testing. If you have just a training data set and you're telling NeuralTools to hold out a certain percentage of cases for testing, try a different percentage. Testing Sensitivity, in the Utilities menu, can help you make that decision. The third sketch after this list illustrates the idea.
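
If it helps to see the idea behind Best Net Search outside the add-in, here is a minimal sketch in Python with scikit-learn (which is not part of NeuralTools): train several candidate network configurations and keep whichever one tests best. The candidate hidden-layer layouts and the synthetic data are arbitrary examples, not anything NeuralTools itself uses.

```python
# Sketch of the idea behind a best-net search: try several candidate
# configurations, score each by cross-validation, and keep the best one.
# scikit-learn's MLPRegressor is a stand-in here; NeuralTools' actual
# search over net types is internal to the tool.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

candidates = [(4,), (8,), (16,), (8, 4)]   # hypothetical hidden-layer layouts
best_score, best_layout = -np.inf, None
for layout in candidates:
    net = MLPRegressor(hidden_layer_sizes=layout, max_iter=2000, random_state=0)
    score = cross_val_score(net, X, y, cv=5).mean()   # mean R^2 over 5 folds
    if score > best_score:
        best_score, best_layout = score, layout

print(f"Best layout: {best_layout}, mean R^2: {best_score:.3f}")
```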
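
The variable-exclusion suggestion amounts to ranking inputs by impact and retraining without the weakest ones. A rough analogue, again in scikit-learn rather than NeuralTools, uses permutation importance on a held-out set; the 0.01 cutoff below is an arbitrary example, and in NeuralTools itself you would consult the Variable Impact Analysis report instead.

```python
# Sketch of impact-based variable elimination: rank inputs by permutation
# importance on held-out data, drop near-zero-impact ones, and retrain.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=10, n_informative=4,
                       noise=5.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
net.fit(X_tr, y_tr)

imp = permutation_importance(net, X_te, y_te, n_repeats=10, random_state=1)
keep = imp.importances_mean > 0.01        # arbitrary cutoff for this sketch
print("Keeping columns:", np.flatnonzero(keep))

net.fit(X_tr[:, keep], y_tr)              # retrain on the reduced inputs
print("Test R^2 after pruning:", round(net.score(X_te[:, keep], y_te), 3))
```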
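
Finally, a testing-sensitivity check simply retrains with different hold-out percentages and compares test performance; the Testing Sensitivity utility automates this inside the add-in. A minimal sketch of the same idea, with arbitrary percentages:

```python
# Sketch of a testing-sensitivity check: hold out different percentages
# of cases, retrain each time, and compare test performance.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=6, noise=8.0, random_state=2)

for pct in (0.10, 0.20, 0.30):            # arbitrary hold-out percentages
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=pct,
                                              random_state=2)
    net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=2000, random_state=2)
    net.fit(X_tr, y_tr)
    print(f"{pct:.0%} held out -> test R^2 {net.score(X_te, y_te):.3f}")
```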
Last edited: 2019-01-28