Reproducibility is essential for scientific progress and engineering advances. Even so, many published computational results lack sufficient capture and description of the companion information that would enable others to confirm and extend them. Most scientists certainly intend to publish correct results, but without sufficient rigor in computational processes and practices, the risk is unnecessarily high that results will occasionally be wrong, and that confirming and extending them will always be costly.
The reasons for inadequate reproducibility are fundamentally matters of incentives and costs. In recent years, because of the availability of improved software platforms from GitHub, GitLab, Atlassian and others, and container environments such as Docker, the cost of capturing and describing the computing environment used to produce scientific results has dramatically decreased. Furthermore, new workflows and skill-building opportunities are available for those who are interested in improving their practice. What needs further attention is our incentive system.
In this presentation, we discuss efforts to improve computational reproducibility by fostering and promoting changes to our incentive systems. We describe efforts by publishers, funding agencies, employers, and the broader computational science community to raise expectations for reproducibility. By improving incentives to produce reproducible results, recognizing those who lead the community, and providing conduits for the effective exchange of best practices, we can make reproducibility both expected and indispensable.