= MPI-Povray Performance Report =

After running all of the povray sample scenes under the path /home/rider/scenes (default: /opt/povray31/scenes), I chose the five .pov files with the highest CPU consumption (the most computation-intensive scenes, dominated by lighting and shadow calculations) for the mpi-povray performance test.[[BR]]
All of the rendered files have been put in the /home/rider/povray_demo directory.[[BR]]
The scene files used for the performance experiment are listed below:[[BR]]
/home/rider/scenes/advanced/woodbox.pov [[BR]]
/home/rider/objects/pawns.pov [[BR]]
/home/rider/scenes/advanced/whiltile.pov [[BR]]
/home/rider/scenes/interior/ballbox.pov [[BR]]
/home/rider/scenes/advanced/quilt1.pov [[BR]]

== Experiment Cases ==
We have 4 experiment cases (output resolutions):[[BR]]
case1: 1024 x 768[[BR]]
case2: 2048 x 1536[[BR]]
case3: 4096 x 3072[[BR]]
case4: 8192 x 6144[[BR]]

'''MPI-POVRAY - MPI Only'''[[BR]]
Machinefile:[[BR]]
#MPI Machinefile [[BR]]
node1:4 [[BR]]
node2:4 [[BR]]
node3:4 [[BR]]
node4:4 [[BR]]
node5:4 [[BR]]
node6:4 [[BR]]
node7:4 [[BR]]
# End of Machinefile [[BR]]

Each case was run with np7, np14, and np28. All times are in seconds and give the total wall-clock time to render the five .pov files back to back at the given resolution:[[BR]]
|| '''Resolution''' || '''np = 7''' || '''np = 14''' || '''np = 28''' ||
|| 1024 x 768 || 16 || 10 || 13 ||
|| 2048 x 1536 || 62 || 43 || 50 ||
|| 4096 x 3072 || 241 || 161 || 185 ||
|| 8192 x 6144 || 935 || 601 || 695 ||
Note that np28 is consistently slower than np14 at every resolution.[[BR]]

'''MPI-POVRAY - MPI + Kerrighed'''[[BR]]
#MPI+Kerrighed Machinefile [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
node1:4 [[BR]]
# End of Machinefile [[BR]]
Result: With this machinefile, only one node runs the povray processes on its 4 CPUs; the other nodes have nothing to do.[[BR]]
Step1: rider@node101:~$ krg_capset -e +CAN_MIGRATE [[BR]]
Step2: rider@node101:~$ migrate pid nodeid [[BR]]
Failure: A running povray process cannot be divided into several threads for Kerrighed to migrate.[[BR]]
Solution (under testing): OpenMP + POVRAY:[[BR]]
Step1: rider@node101:~$ '''krgcapset -d +DISTANT_FORK,USE_INTRA_CLUSTER_KERSTREAMS'''[[BR]]

@Performance Report: For the attached file: mpi-povray performance_result.odt
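For reference, a minimal sketch of how one MPI-only rendering case could be driven from the head node. The binary name mpi-x-povray and the POV-Ray 3.1-style +I/+W/+H switches are assumptions for illustration; the report does not record the exact command line used.

```shell
#!/bin/sh
# Hypothetical launch script for one experiment case.
# Assumptions: the MPI build is installed as "mpi-x-povray" and accepts
# POV-Ray 3.1-style +I (input), +W (width), +H (height) switches.
MACHINEFILE=./machinefile   # the 7-node, 4-slot-per-node file shown above
NP=14                       # number of MPI processes: 7, 14, or 28 per case
WIDTH=1024
HEIGHT=768

# Render the five benchmark scenes back to back, as in the experiment.
for scene in \
    /home/rider/scenes/advanced/woodbox.pov \
    /home/rider/objects/pawns.pov \
    /home/rider/scenes/advanced/whiltile.pov \
    /home/rider/scenes/interior/ballbox.pov \
    /home/rider/scenes/advanced/quilt1.pov
do
    mpirun -np "$NP" -machinefile "$MACHINEFILE" \
        mpi-x-povray +I"$scene" +W"$WIDTH" +H"$HEIGHT"
done
```

Timing the whole loop (e.g. with the shell's time builtin) reproduces the "render 5 *.pov continuously" measurement style used above.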
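A quick sanity check on the measured times above: the sketch below (plain Python, using only the numbers already reported) recomputes the relative speedups, which makes the np14-vs-np28 behaviour easy to see.

```python
# Wall-clock times (seconds) copied from the MPI-only results above:
# total time to render the five scenes at each resolution and np count.
times = {
    "1024x768":  {7: 16,  14: 10,  28: 13},
    "2048x1536": {7: 62,  14: 43,  28: 50},
    "4096x3072": {7: 241, 14: 161, 28: 185},
    "8192x6144": {7: 935, 14: 601, 28: 695},
}

def speedup(resolution, base_np, target_np):
    """Speedup of target_np relative to base_np at one resolution."""
    return times[resolution][base_np] / times[resolution][target_np]

for res in times:
    s14 = speedup(res, 7, 14)   # going from 7 to 14 processes helps
    s28 = speedup(res, 14, 28)  # going from 14 to 28 processes hurts
    print(f"{res}: np7->np14 = {s14:.2f}x, np14->np28 = {s28:.2f}x")
```

At every resolution the np7-to-np14 ratio is above 1 while the np14-to-np28 ratio is below 1, confirming that doubling to 28 processes slowed the renders down rather than speeding them up.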