Continuous Integration / Testing
Add several automated tests (Travis? Jenkins?):
Very Small (one iteration, 20x20x20 cube): mpirun -np 1 bin/mpi+omp.app -s 0 -n 20
Small (100 iterations, 20x20x20 cube): mpirun -np 4 bin/mpi+omp.app -s 100 -n 20
Medium (10 iterations, 100x100x100 cube): mpirun -np 4 bin/mpi+omp.app -s 10 -n 100
Large (10 iterations, 300x300x300 cube): mpirun -np 16 bin/mpi+omp.app -s 10 -n 300
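As a starting point, here is a minimal sketch of a driver script that a Travis or Jenkins job could invoke; it simply runs the four scenarios above and stops at the first failure. The binary path and flags come from the commands above, while the ci-logs directory name is an arbitrary choice.

    #!/bin/bash
    # Minimal CI driver sketch: run the four scenarios above, keep the logs,
    # and abort on the first failure. Paths and flags come from the commands
    # listed above; the log directory name is an arbitrary choice.
    set -eo pipefail
    mkdir -p ci-logs

    run_case () {
        local name=$1 ranks=$2 steps=$3 size=$4
        echo "=== ${name} (${ranks} ranks, -s ${steps}, -n ${size}) ==="
        mpirun -np "${ranks}" bin/mpi+omp.app -s "${steps}" -n "${size}" \
            | tee "ci-logs/${name}.log"
    }

    run_case very-small  1   0  20
    run_case small       4 100  20
    run_case medium      4  10 100
    run_case large      16  10 300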
We should look at the total number of neighbors, the average number of neighbors per particle, the total internal energy, and the total energy. GPU runs are expected to have slightly different values. In the long run, they might have a slightly different number of neighbors as well.
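One possible way to automate that check is sketched below, assuming the application prints each quantity on a line ending with its value (e.g. "Total energy: <value>"). The grep labels, reference values, and tolerances are placeholders that have to be adapted to the real output and to each scenario.

    #!/bin/bash
    # Sketch of a value check against reference numbers, with a relative
    # tolerance to absorb the small differences expected from GPU runs.
    # Labels, reference values, and tolerances below are placeholders.
    LOG=ci-logs/small.log

    check () {
        local label=$1 reference=$2 tolerance=$3
        local value
        value=$(grep "${label}" "${LOG}" | tail -n 1 | awk '{print $NF}')
        if awk -v v="${value}" -v r="${reference}" -v t="${tolerance}" \
            'BEGIN { d = (v - r) / r; if (d < 0) d = -d; exit !(d <= t) }'; then
            echo "OK:   ${label} = ${value}"
        else
            echo "FAIL: ${label} = ${value}, expected ${reference} (rel. tol. ${tolerance})"
            exit 1
        fi
    }

    check "Total energy"     -2.0e-3 1e-4   # placeholder reference values
    check "Internal energy"   1.7e-3 1e-4
    check "Total neighbors"   1.3e6  1e-2   # GPU runs may differ slightly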
I think this is a good start. If possible, we should repeat the tests with the following compilers: GCC, Clang, Cray CCE, Intel, and PGI.
And with the following models: mpi+omp, mpi+omp+target, mpi+omp+acc, and mpi+omp+cuda.
The test should count as a success if the mpi+omp and mpi+omp+cuda models pass. We already know that on Daint, target and acc are not well supported by all compilers: acc should work with PGI, target should work with Cray CCE (perhaps Clang as well?), and GCC will run it on the CPU...
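A rough sketch of such a compiler x model matrix is given below. The make invocation, the way the compiler is selected, and the medium-sized run used as a smoke test are assumptions that have to be mapped onto the actual build system and the Daint environment (e.g. PrgEnv modules).

    #!/bin/bash
    # Matrix sketch: build and run every (compiler, model) combination, but
    # let only mpi+omp and mpi+omp+cuda failures break the CI job, per the
    # criterion above. The make invocation and compiler selection are
    # assumptions about the build system.
    compilers="gcc clang cce intel pgi"
    models="mpi+omp mpi+omp+target mpi+omp+acc mpi+omp+cuda"
    overall=0

    for compiler in ${compilers}; do
        for model in ${models}; do
            echo "### ${compiler} / ${model}"
            if make clean "${model}" CC="${compiler}" && \
               mpirun -np 4 "bin/${model}.app" -s 10 -n 100; then
                status=success
            else
                status=failure
                case "${model}" in
                    mpi+omp|mpi+omp+cuda) overall=1 ;;   # mandatory models
                esac
            fi
            echo "${compiler} ${model} ${status}" >> ci-logs/matrix.txt
        done
    done
    exit ${overall}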
Bonus: it would be very interesting to get a table with the runtimes and the success/failure status of the tests for the Large scenario.
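A simple way to produce that table could look like the sketch below, following the bin/<model>.app naming used above; the timing is just wall-clock seconds around the mpirun call.

    #!/bin/bash
    # Sketch: time the Large scenario for each model and print a small
    # runtime / status table. Binary naming follows the bin/<model>.app
    # pattern used above.
    printf "%-18s %12s %10s\n" "model" "runtime [s]" "status"
    for model in mpi+omp mpi+omp+target mpi+omp+acc mpi+omp+cuda; do
        start=$(date +%s)
        if mpirun -np 16 "bin/${model}.app" -s 10 -n 300 \
               > "ci-logs/large-${model}.log" 2>&1; then
            status=success
        else
            status=failure
        fi
        runtime=$(( $(date +%s) - start ))
        printf "%-18s %12d %10s\n" "${model}" "${runtime}" "${status}"
    done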