BUTTER - Empirical Deep Learning Dataset
The BUTTER Empirical Deep Learning Dataset is an empirical study of deep learning phenomena in dense fully connected networks, scanning across thirteen datasets, eight network shapes, fourteen depths, twenty-three network sizes (numbers of trainable parameters), four learning rates, six minibatch sizes, four levels of label noise, and fourteen levels each of L1 and L2 regularization. Multiple repetitions (typically 30, sometimes 10) of each hyperparameter combination were performed, and statistics including training and test loss (using an 80% / 20% shuffled train-test split) were recorded at the end of each training epoch. In total, the dataset covers 178 thousand distinct hyperparameter settings ("experiments"), 3.55 million individual training runs (an average of 20 repetitions per experiment), and 13.3 billion training epochs (most runs covered three thousand epochs). Accumulating this dataset consumed 5,448.4 CPU core-years, 17.8 GPU-years, and 111.2 node-years.
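To make the scale of the sweep concrete, here is a small Python sketch that models the hyperparameter axes as a grid and reproduces the arithmetic above. Only the axis counts and totals come from the dataset description; the variable names and axis labels are illustrative, not column names in the published data.

```python
# A minimal sketch of the hyperparameter sweep described above
# (requires Python 3.8+ for math.prod). Axis counts are taken from
# the dataset description; the dict keys are descriptive labels only.
import math

sweep_axes = {
    "dataset": 13,
    "network_shape": 8,
    "depth": 14,
    "network_size": 23,     # counts of trainable parameters
    "learning_rate": 4,
    "minibatch_size": 6,
    "label_noise": 4,
    "l1_regularization": 14,
    "l2_regularization": 14,
}

# Upper bound if every combination had been run. The dataset records
# about 178,000 distinct settings, so the sweep covers a structured
# subset of the full Cartesian product rather than all of it.
full_grid = math.prod(sweep_axes.values())
print(f"full Cartesian product: {full_grid:,} combinations")

# Scale of the recorded data, using the figures quoted above.
experiments = 178_000        # distinct hyperparameter settings
runs = experiments * 20      # ~20 repetitions each -> ~3.55 million runs
epochs = runs * 3_000        # rough lower bound: most runs cover 3,000
                             # epochs; the dataset totals 13.3 billion
print(f"~{runs:,} runs, >= ~{epochs:,} epochs")
```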
Citation Formats
National Renewable Energy Laboratory. (2022). BUTTER - Empirical Deep Learning Dataset [data set]. Retrieved from https://dx.doi.org/10.25984/1872441.
Tripp, Charles, Perr-Sauer, Jordan, Hayne, Lucas, and Lunacek, Monte. BUTTER - Empirical Deep Learning Dataset. United States: N.p., 20 May 2022. Web. doi: 10.25984/1872441.
Tripp, Charles, Perr-Sauer, Jordan, Hayne, Lucas, & Lunacek, Monte. BUTTER - Empirical Deep Learning Dataset. United States. https://dx.doi.org/10.25984/1872441
Tripp, Charles, Perr-Sauer, Jordan, Hayne, Lucas, and Lunacek, Monte. 2022. "BUTTER - Empirical Deep Learning Dataset". United States. https://dx.doi.org/10.25984/1872441. https://data.openei.org/submissions/5708.
@misc{oedi_5708,
  title = {BUTTER - Empirical Deep Learning Dataset},
  author = {Tripp, Charles and Perr-Sauer, Jordan and Hayne, Lucas and Lunacek, Monte},
  abstractNote = {The BUTTER Empirical Deep Learning Dataset is an empirical study of deep learning phenomena in dense fully connected networks, scanning across thirteen datasets, eight network shapes, fourteen depths, twenty-three network sizes (numbers of trainable parameters), four learning rates, six minibatch sizes, four levels of label noise, and fourteen levels each of L1 and L2 regularization. Multiple repetitions (typically 30, sometimes 10) of each hyperparameter combination were performed, and statistics including training and test loss (using an 80% / 20% shuffled train-test split) were recorded at the end of each training epoch. In total, the dataset covers 178 thousand distinct hyperparameter settings ("experiments"), 3.55 million individual training runs (an average of 20 repetitions per experiment), and 13.3 billion training epochs (most runs covered three thousand epochs). Accumulating this dataset consumed 5,448.4 CPU core-years, 17.8 GPU-years, and 111.2 node-years.},
  doi = {10.25984/1872441},
  url = {https://data.openei.org/submissions/5708},
  place = {United States},
  year = {2022},
  month = {05}
}
https://dx.doi.org/10.25984/1872441
Details
Data from May 20, 2022
Last updated Jan 2, 2024
Submitted Jun 15, 2022
Organization
National Renewable Energy Laboratory
Contact
Charles Edison Tripp
303.275.4082
Authors
Charles Tripp, Jordan Perr-Sauer, Lucas Hayne, Monte Lunacek
Keywords
neural networks, machine learning, training, benchmark, deep learning, empirical deep learning, empirical machine learning, empirical, learning rate, batch size, minibatch size, regularization, label noise, depth, shape, topology, network shape, network topology, epoch, training epoch, neural architecture search
DOE Project Details
Project Name National Renewable Energy Laboratory (NREL) Lab Directed Research and Development (LDRD)
Project Number GO0028308