Solution: OpenMP + POVRAY (testing): [[BR]]
Step 1: rider@node101:~$ '''krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS''' [[BR]]
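For example, POV-Ray can then be launched on a scene from the same terminal, so that the renderer inherits these capabilities (the scene file name and resolution here are only placeholders): [[BR]]
rider@node101:~$ '''povray +Iscene.pov +W1024 +H768''' [[BR]]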
'''Solution 2''': [[BR]]
When running an MPI application on Kerrighed, make sure that (a combined example is sketched after the list): [[BR]]
1 - You have only "localhost" in your node list file [[BR]]
2 - You do not create a local process with mpirun (use the "-nolocal" option with MPICH) [[BR]]
3 - You have compiled MPICH with RSH_COMMAND = "krg_rsh" [[BR]]
4 - The Kerrighed scheduler is loaded (modules cpu_scheduler2, etc.) [[BR]]
5 - Process distant fork and the use of Kerrighed dynamic streams are enabled (in the terminal from which you launch MPI applications, run the shell command krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS) [[BR]]

rider@node101:~$ krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS,CAN_MIGRATE [[BR]]
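Putting points 1, 2 and 5 together, a run could look like the following sketch (the machine file name, process count and the classic cpi example binary are only placeholders; MPICH-1 style mpirun options are assumed): [[BR]]
rider@node101:~$ cat machines [[BR]]
localhost [[BR]]
rider@node101:~$ krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS,CAN_MIGRATE [[BR]]
rider@node101:~$ '''mpirun -nolocal -machinefile machines -np 4 ./cpi''' [[BR]]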
Reference URLs: [[BR]]
http://131.254.254.17/mpi.php [[BR]]
http://kerrighed.org/forum/viewtopic.php?t=42 [[BR]]
Failure: the running POV-Ray process could not be divided into several threads for Kerrighed to migrate. [[BR]]
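To check how many threads the renderer actually uses, the running povray process can be inspected with ps (the process name povray is assumed): [[BR]]
rider@node101:~$ '''ps -eLf | grep povray''' [[BR]]
A single matching line means povray is running as one single-threaded process, so there is nothing for Kerrighed to spread across nodes. [[BR]]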