Problem 2: The running povray process cannot be divided well enough for Kerrighed to migrate it. [[BR]]

Conclusion:
Is Kerrighed more powerful than conventional parallel computing? Comparing the attached files, we find that adding more processors or nodes to a task does not guarantee the performance you expect. There seems to be an optimal number of nodes (CPUs) beyond which no extra performance is gained, so the right cluster size is best determined case by case through experiments. [[BR]]
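To make this reasoning concrete, the following is a minimal sketch assuming a simple Amdahl-style model with a hypothetical serial fraction and a hypothetical per-node communication cost; the constants are illustrative assumptions, not values taken from the attached measurements. It only shows why speed-up can peak at some node count and then decline. [[BR]]
{{{#!c
/* Toy Amdahl-style model: relative run time = serial part + parallel part / n
 * + a communication cost that grows with the node count. The constants
 * below are assumptions chosen only to illustrate the shape of the curve. */
#include <stdio.h>

int main(void)
{
    const double serial_fraction = 0.05;  /* assumed non-parallelisable share */
    const double comm_per_node   = 0.01;  /* assumed extra cost per added node */

    for (int n = 1; n <= 16; n++) {
        double time = serial_fraction
                    + (1.0 - serial_fraction) / n
                    + comm_per_node * (n - 1);
        printf("nodes=%2d  predicted speed-up=%.2f\n", n, 1.0 / time);
    }
    return 0;
}
}}}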
MPI processes are deployed and managed by the current scheduler: they can therefore, in theory, be migrated to balance the load across all nodes, and may be deployed in unexpected ways if other programs are running on the cluster. The first test we ran with MPI on Kerrighed was meant to check whether Kerrighed and MPI can run independently and whether Kerrighed introduces any significant overhead into MPI communication. The test application did nothing but a large number of broadcasts and message exchanges between processes. [[BR]]
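Below is a minimal sketch of the kind of test application described above, written against the standard MPI C interface: it only performs repeated broadcasts and point-to-point exchanges, so any overhead added by Kerrighed shows up directly in the communication time. The iteration count and message size are arbitrary illustrative choices, not the values used in the original test. [[BR]]
{{{#!c
/* Minimal MPI communication test: repeated broadcasts plus a simple
 * ring exchange between neighbouring ranks. Build with mpicc and
 * launch with mpirun; sizes and counts are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define ITERATIONS 1000
#define MSG_SIZE   1024

int main(int argc, char **argv)
{
    int rank, size, i;
    char buf[MSG_SIZE];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    memset(buf, 0, MSG_SIZE);
    t0 = MPI_Wtime();

    for (i = 0; i < ITERATIONS; i++) {
        /* Rank 0 broadcasts a buffer to every process... */
        MPI_Bcast(buf, MSG_SIZE, MPI_CHAR, 0, MPI_COMM_WORLD);

        /* ...then each process passes a message around a ring:
         * send to the right neighbour, receive from the left one. */
        MPI_Sendrecv_replace(buf, MSG_SIZE, MPI_CHAR,
                             (rank + 1) % size, 0,
                             (rank + size - 1) % size, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("%d iterations of Bcast + ring exchange: %f s\n",
               ITERATIONS, t1 - t0);

    MPI_Finalize();
    return 0;
}
}}}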