


Artifact Evaluation Appendix: [Paper Title]

Provided only with the final submission: [Author Name] [Department] [Institution] [Email] [Phone]

Artifact Evaluation (formerly Computational Results Analysis) Appendix:

Instructions: This form enables authors to provide details about the verification and validation efforts that help improve the trustworthiness of their computational results. We are interested in the techniques authors use for pre-execution, peri-execution, and post-execution analysis.

Objective: Use this template to produce a 1–4 page appendix for your submission. Describe the approaches you used to improve the trustworthiness [1] of your results, and provide an analysis of your results that demonstrates correct execution.

Examples: *Some types of diagnostics could be:*

– Remove the above section before submitting –

A. Artifact Evaluation Appendix: [Paper Title]

A.1 Abstract

A.2 Results Analysis Discussion

A.3 Summary

A.4 Notes

Optional comments.

[1] We use the term trustworthiness to indicate a broad category of approaches that can be used to demonstrate that your software is behaving the way you intend it to behave. We expect that most of the approaches you use will focus on *verification*: ensuring that your software computes the answer you expect it to compute (doing things right). But we leave open the possibility that you also do some *validation* testing (doing the right thing).
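As a concrete illustration of the verification category described above, a minimal sketch of one such check is shown below: comparing a numerical result against a known analytical answer. The function name and tolerance here are hypothetical choices, not prescribed by this template; any check that demonstrates your software computes the expected answer serves the same purpose.

```python
import math

def trapezoid(f, a, b, n):
    """Numerically integrate f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Verification check: the integral of sin(x) over [0, pi] is
# analytically 2.0, so the computed value should agree closely.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
assert abs(approx - 2.0) < 1e-5, "verification check failed"
```

A validation test, by contrast, would compare the computed result against experimental data or domain expectations rather than a known exact solution.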