Artifact Evaluation Appendix: [Paper Title]
Provided only with the final submission: [Author Name] [Department] [Institution] [Email] [Phone]
Artifact Evaluation (formerly Computational Results Analysis) Appendix: sc-ae-appendix-20180327.zip
Instructions: This form enables authors to provide details about the verification and validation efforts that help improve the trustworthiness of their computational results. We are interested in techniques that authors use for pre-execution, peri-execution, and post-execution analysis.
Objective: Use this template to produce a 1–4 page appendix for your submission. Describe the approaches you use to improve the trustworthiness [1] of your results. Provide an analysis of your results that demonstrates correct execution.
Examples: _Some types of diagnostics could be:_
- Validate timers (time something with a known execution time; determine the precision and statistical variability of the timer); see the timer-validation sketch after this list.
- Confirm results for a manufactured solution, executed before or during the production simulation; see the manufactured-solution sketch after this list.
- Test analytic properties of the problem, e.g., generate a problem with known spectral properties and test its behavior; see the known-spectrum sketch after this list.
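A minimal sketch of the timer-validation diagnostic, assuming Python and its standard time and statistics modules; the 100 ms known duration, the repeat count, and the use of time.sleep as the operation of known cost are illustrative assumptions, not part of the template.

```python
# Hypothetical timer-validation diagnostic: time an operation of known
# duration repeatedly and report the timer's resolution and the statistical
# variability of the measurements.
import time
import statistics

KNOWN_DURATION = 0.100  # seconds; the "ground truth" we expect to measure
REPEATS = 30

samples = []
for _ in range(REPEATS):
    start = time.perf_counter()
    time.sleep(KNOWN_DURATION)          # stand-in for a kernel of known cost
    samples.append(time.perf_counter() - start)

resolution = time.get_clock_info("perf_counter").resolution
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)

print(f"timer resolution reported by the platform: {resolution:.3e} s")
print(f"mean measured duration: {mean:.6f} s (expected {KNOWN_DURATION:.6f} s)")
print(f"sample standard deviation: {stdev:.3e} s")
print(f"relative bias: {(mean - KNOWN_DURATION) / KNOWN_DURATION:+.2%}")
```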
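A minimal sketch of a manufactured-solution check, assuming Python with NumPy; the 1-D Poisson model problem, the chosen solution u(x) = sin(pi x), and the grid sizes are hypothetical examples used only to illustrate the idea.

```python
# Hypothetical method-of-manufactured-solutions check: pick u(x) = sin(pi x)
# on [0, 1], derive the forcing f = pi^2 sin(pi x) for -u'' = f with
# u(0) = u(1) = 0, solve with second-order finite differences, and confirm
# that the error shrinks at the expected second-order rate.
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on [0, 1] with n interior points and zero boundaries."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)           # manufactured forcing term
    # Tridiagonal second-difference operator, scaled by 1/h^2.
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))   # max error vs. exact solution

errors = [solve_poisson(n) for n in (32, 64, 128)]
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print("max errors:", errors)
print("observed convergence rates (expect ~2):", rates)
```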
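A minimal sketch of a known-spectrum diagnostic, assuming Python with NumPy and treating numpy.linalg.eigvalsh as a stand-in for the solver under test; the prescribed eigenvalues, matrix size, and error tolerance are illustrative assumptions.

```python
# Hypothetical spectral-property diagnostic: build a symmetric matrix with
# prescribed eigenvalues and check that the eigensolver under test recovers
# them to within a tolerance.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 200
known_eigs = np.linspace(1.0, 100.0, n)        # prescribed spectrum

# Random orthogonal matrix Q from a QR factorization of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(known_eigs) @ Q.T              # A has exactly the eigenvalues above

computed = np.sort(np.linalg.eigvalsh(A))      # solver under test
max_rel_err = np.max(np.abs(computed - known_eigs) / known_eigs)

print(f"max relative eigenvalue error: {max_rel_err:.2e}")
assert max_rel_err < 1e-10, "eigensolver failed the known-spectrum check"
```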
– Remove the above section before submitting –
A. Artifact Evaluation Appendix: [Paper Title]
A.1 Abstract
- Overall description of your approach and how the trustworthiness of your results is improved.
- For example: How you validated timers, what manufactured solution or spectral properties you leveraged, etc.
A.2 Results Analysis Discussion
- Description of results, their correctness and any concerns about them. If your paper is only about performance, describe how you assure the quality of performance measurements and that you have preserved correct computational results.
A.3 Summary
- Final summary demonstrating the trustworthiness of your results.
A.4 Notes
Optional comments.
[1] We use the term trustworthiness to indicate a broad category of ways that can be used to demonstrate that your software is behaving the way you intend it to behave. We expect that most of the approaches you will use are focused on _verification_, ensuring that your software computes the answer you expect it to compute (doing things right). But we leave open the possibility that you also do some validation testing (doing the right thing).