STATISTICA Numerical Precision
Tests of numerical precision and accuracy of statistical algorithms of the main computational engines of STATISTICA (by StatSoft, Inc.)
The following selection of 52 datasets and analysis designs included in these validation benchmarks represents:
- Standard tests of numerical accuracy for floating point mathematical operations (such as the small relative variance test, etc.);
- Published benchmark datasets developed for the purpose of testing statistics programs and used in published reviews of statistics and math packages (including all benchmark datasets proposed in “Benchmark Datasets for Evaluating Microcomputer Statistical Programs,” by Elliott, Reisch, and Campbell); and
- A comprehensive selection of sample datasets for complex and demanding numerical problems (and some unusual datasets) recommended to us by leading experts in the respective areas of statistics¹ and/or published in statistics textbooks and special monographs, including representative samples of computational problems from the “Analysis of Messy Data”, by Milliken and Johnson (1984), and “Applied Linear Statistical Models”, by Neter, Wasserman, and Kutner (1985), as well as books by Box et al., Cox, Lindman, Searle, and other authors.
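The small relative variance test mentioned above can be illustrated with a short sketch. The data below are hypothetical (not taken from the benchmark set itself): observations whose variance is tiny relative to their mean. The textbook one-pass formula (sum of squares minus n times the squared mean) suffers catastrophic cancellation in double precision, while the two-pass formula, which subtracts the mean before squaring, remains accurate.

```python
def naive_variance(xs):
    """One-pass sample variance: numerically unstable when the mean is large."""
    n = len(xs)
    mean = sum(xs) / n
    # Subtracting two nearly equal large numbers cancels the significant digits.
    return (sum(x * x for x in xs) - n * mean * mean) / (n - 1)

def two_pass_variance(xs):
    """Two-pass sample variance: subtract the mean first, then square."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# Illustrative data with a true sample variance of exactly 1.0
data = [100000001.0, 100000002.0, 100000003.0]
print(naive_variance(data))     # severe cancellation error
print(two_pass_variance(data))  # 1.0
```

A program that passes this class of test must either use a multi-pass algorithm or an equivalent numerically stable update formula; the one-pass textbook formula is the usual point of failure.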
The accuracy criteria for all benchmarks presented below are either the respective published sources or, where applicable, the internal consistency of the results.
To the best of our knowledge, STATISTICA is the only statistics package available on the market that has successfully passed every test included in this set of benchmarks (and some of the tests reported here cannot be passed by any program other than STATISTICA).
¹ We are grateful to Dr. Lynn Brecht (UCLA), Dr. John Castellan (Indiana University), Dr. Elazar Pedhazur (New York University), Dr. Dallas Johnson (Kansas State University), Dr. Geoffrey Keppel (University of California, Berkeley), Dr. Michael Kutner (Emory University), Dr. George Milliken (Kansas State University), Dr. Paul Switzer (Stanford University), Dr. William Wasserman (Syracuse University), Dr. Thomas Wickens (UCLA), and Dr. Arthur Woodward (UCLA) for their advice, and for recommending to us some of the datasets used in these validation benchmarks, and to Drs. A. Woodward and L. Brecht for allowing us to use datasets from the technical documentation for Ganova. We are also grateful to all those researchers and practitioners who generously provided us with their raw datasets and allowed us to use them in the validation benchmarks. We would appreciate readers’ suggestions concerning any additional benchmarks which could be included in this set.