= MPI-Povray Performance Report =

After rendering all of the POV-Ray sample scenes under /home/rider/scenes (default: /opt/povray31/scenes), I chose the five .pov files with the highest CPU consumption (the most computation-intensive scenes, dominated by lighting and shadow calculations) for the MPI-Povray performance test.[[BR]]
All of the rendered output files have been put in the /home/rider/povray_demo directory.[[BR]]
The scene files rendered for the performance experiment are listed below:[[BR]]
/home/rider/scenes/advanced/woodbox.pov [[BR]]
/home/rider/objects/pawns.pov [[BR]]
/home/rider/scenes/advanced/whiltile.pov [[BR]]
/home/rider/scenes/interior/ballbox.pov [[BR]]
/home/rider/scenes/advanced/quilt1.pov [[BR]]

== Experiment Cases ==
There are four experiment cases, one per output resolution:[[BR]]
case1: 1024 x 768[[BR]]
case2: 2048 x 1536[[BR]]
case3: 4096 x 3072[[BR]]
case4: 8192 x 6144[[BR]]

'''MPI-POVRAY - MPI Only'''[[BR]]
Machinefile:
{{{
# MPI Machinefile
node1:4
node2:4
node3:4
node4:4
node5:4
node6:4
node7:4
# End of Machinefile
}}}
Each run renders the five .pov files back-to-back; the time reported is the total time to finish rendering all five.[[BR]]
|| '''Case''' || '''Resolution''' || '''MPI processes (-np)''' || '''Total time (secs)''' ||
|| case1.1 || 1024 x 768 || 7 || '''16''' ||
|| case2.1 || 2048 x 1536 || 7 || '''62''' ||
|| case3.1 || 4096 x 3072 || 7 || '''241''' ||
|| case4.1 || 8192 x 6144 || 7 || '''935''' ||
|| case1.2 || 1024 x 768 || 14 || '''10''' ||
|| case2.2 || 2048 x 1536 || 14 || '''43''' ||
|| case3.2 || 4096 x 3072 || 14 || '''161''' ||
|| case4.2 || 8192 x 6144 || 14 || '''601''' ||
|| case1.3 || 1024 x 768 || 28 || '''13''' ||
|| case2.3 || 2048 x 1536 || 28 || '''50''' ||
|| case3.3 || 4096 x 3072 || 28 || '''185''' ||
|| case4.3 || 8192 x 6144 || 28 || '''695''' ||
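For reference, a minimal driver script along the lines below can reproduce one of these measurements (case1.1: 1024 x 768, -np 7). It is only a sketch: the binary name mpi-x-povray, the machinefile path, and the output options are assumptions, since the exact command lines used for the runs are not recorded on this page.
{{{
#!/bin/bash
# Hypothetical benchmark driver for case1.1 (1024 x 768, -np 7).
# Assumptions: the MPI-patched POV-Ray 3.1 binary is named mpi-x-povray,
# the machinefile shown above is saved as ./machinefile, and the output
# goes to /home/rider/povray_demo as PNG (+FN).
SCENES="/home/rider/scenes/advanced/woodbox.pov
        /home/rider/objects/pawns.pov
        /home/rider/scenes/advanced/whiltile.pov
        /home/rider/scenes/interior/ballbox.pov
        /home/rider/scenes/advanced/quilt1.pov"

time for f in $SCENES; do
    # Standard POV-Ray options: +I input, +O output, +W/+H resolution,
    # +FN PNG format, -D no preview display.
    mpirun -np 7 -machinefile ./machinefile mpi-x-povray \
        +I$f +O/home/rider/povray_demo/$(basename $f .pov).png \
        +W1024 +H768 +FN -D
done
}}}
The other cases would differ only in the +W/+H values and the -np count; timing the whole loop corresponds to rendering the five .pov files back-to-back as in the table above.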
'''MPI-POVRAY - MPI + Kerrighed'''[[BR]]
Machinefile:
{{{
# MPI+Kerrighed Machinefile
node1:4
node1:4
node1:4
node1:4
node1:4
node1:4
node1:4
# End of Machinefile
}}}
or, equivalently:
{{{
# MPI+Kerrighed Machinefile
node1:28
# End of Machinefile
}}}
Result: With this machinefile, only one node ends up running the povray processes on its 4 CPUs, while the other nodes have nothing to do.[[BR]]

'''Solution 1''':[[BR]]
Step 1: rider@node101:~$ krg_capset -e +CAN_MIGRATE [[BR]]
Step 2: rider@node101:~$ migrate <pid> <nodeid> [[BR]]
Failure: the running povray process cannot be divided into several threads for Kerrighed to migrate.[[BR]]

'''Solution 2''':[[BR]]
When running an MPI application on Kerrighed, be sure that:[[BR]]
1 - you have only "localhost" in your node list file;[[BR]]
2 - you do not create a local process with mpirun (the "-nolocal" option with MPICH);[[BR]]
3 - you have compiled MPICH with RSH_COMMAND="krg_rsh";[[BR]]
4 - the Kerrighed scheduler is loaded (modules cpu_scheduler2, etc.);[[BR]]
5 - process distant fork and use of Kerrighed dynamic streams are enabled (in the terminal you launch MPI applications from, use the shell command krg_capset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS).[[BR]]
rider@node101:~$ krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS,CAN_MIGRATE [[BR]]
Reference URLs:[[BR]]
http://131.254.254.17/mpi.php [[BR]]
http://kerrighed.org/forum/viewtopic.php?t=42 [[BR]]
Failure: the running povray process still cannot be divided into several threads for Kerrighed to migrate.[[BR]]
Performance report: see the attached file: mpi-povray performance_result.odt
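For completeness, the Solution 2 checklist above maps onto a launch sequence roughly like the sketch below. This is only an illustration under assumptions (the machinefile name, the -np value, the scene and the mpi-x-povray binary name are placeholders); as noted above, in this experiment the approach still did not produce process migration.
{{{
# Hypothetical launch sequence following the Solution 2 checklist
# (file names, -np value and binary name are illustrative only).

# (1) The node list file contains only "localhost": Kerrighed presents the
#     whole cluster as a single SMP machine.
echo "localhost" > ~/machinefile.krg

# (3) MPICH is assumed to have been compiled with RSH_COMMAND="krg_rsh".
# (4) The Kerrighed scheduler modules (cpu_scheduler2, etc.) are assumed loaded.

# (5) Enable distant fork, Kerrighed dynamic streams and migration in the
#     shell that launches the MPI job.
krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS,CAN_MIGRATE

# (2) Launch without a local process (-nolocal with MPICH) so every worker
#     is forked remotely and can be placed by the Kerrighed scheduler.
mpirun -nolocal -np 28 -machinefile ~/machinefile.krg mpi-x-povray \
    +I/home/rider/scenes/advanced/woodbox.pov \
    +O/home/rider/povray_demo/woodbox.png +W1024 +H768 +FN -D
}}}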