Changes between Version 16 and Version 17 of krg_performance


Timestamp: Mar 31, 2008, 11:03:24 PM
Author: rider

  • krg_performance

    v16 v17  
    102 102 http://www.metz.supelec.fr/metz/recherche/ersidp/Publication/OnLineFiles/05-Ifrim-2004-05.pdf [[BR]]
    103 103
    104     Problem2: Running-povray process can not be divided very well for kerrighed to migrate. [[BR]]
        104 Problem2: A running povray process does not fork into subprocesses well enough for Kerrighed to migrate its work across nodes. [[BR]]
    105 105
    106     Conclusion:
        106 '''Conclusion''': [[BR]]
    107 107    Is Kerrighed more powerful than conventional parallel computing? Comparing the attached results, we find that adding more processors or nodes to a task does not guarantee the performance you expect. There seems to be an optimal number of nodes (CPUs) for reaching the best performance, so the right configuration has to be found experimentally, case by case. [[BR]]
        108
        109    By default, MPI uses sockets for communication between processes on different machines. A Kerrighed system must therefore be instructed, through the available Kerrighed capabilities, to link the created sockets to Kernet streams and to use a special rsh program, krg-rsh, for process deployment. The MPI library and runtime are used unchanged. With this configuration, the Kerrighed scheduler manages the deployment of the MPI program. [[BR]]
    108 110
    109 111    MPI processes are deployed and managed by the current scheduler: in theory they can therefore be migrated to balance the load across all nodes, and they may be deployed in unexpected ways if other programs are running on the cluster. The first test we made with MPI on Kerrighed was meant to check whether Kerrighed and MPI can run independently and whether Kerrighed introduces any large overhead in MPI communication. The application only performed a large number of broadcasts and message exchanges between processes (a minimal sketch of such a test is given after the diff). [[BR]]
    110 112
    111     @Performance Report: For the attached file: mpi-povray performance_result.odt
        113    We may try more tests to see whether we can benefit from combining MPI and IPC on the Kerrighed cluster for our usage scenario (a hypothetical sketch of such a test is given after the diff). [[BR]]
        114
        115 ### Performance note: for results, see the attached file mpi-povray performance_result.odt. [[BR]]
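
The communication-only MPI test mentioned above (broadcasts plus message sending between processes) is only summarized in the text. The following is a minimal sketch of what such a test could look like: repeated broadcasts from rank 0 plus a ring exchange of point-to-point messages. The iteration count and message size are arbitrary illustrative values, not the ones used for the reported measurements.

{{{
/*
 * Illustrative communication-only MPI test: repeated broadcasts from
 * rank 0 plus a ring exchange of point-to-point messages.  Iteration
 * count and message size are arbitrary example values.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define ITERATIONS 1000
#define MSG_LEN    1024

int main(int argc, char *argv[])
{
    char sendbuf[MSG_LEN], recvbuf[MSG_LEN];
    int rank, size, i, right, left;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    memset(sendbuf, 'x', MSG_LEN);
    right = (rank + 1) % size;          /* neighbour we send to      */
    left  = (rank + size - 1) % size;   /* neighbour we receive from */

    MPI_Barrier(MPI_COMM_WORLD);
    start = MPI_Wtime();

    for (i = 0; i < ITERATIONS; i++) {
        /* Collective communication: rank 0 broadcasts a buffer. */
        MPI_Bcast(sendbuf, MSG_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

        /* Point-to-point communication: pass a message around a ring. */
        MPI_Sendrecv(sendbuf, MSG_LEN, MPI_CHAR, right, 0,
                     recvbuf, MSG_LEN, MPI_CHAR, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("%d iterations on %d processes took %.3f s\n",
               ITERATIONS, size, elapsed);

    MPI_Finalize();
    return 0;
}
}}}

Built with mpicc and launched with the usual mpirun, such a program relies only on the configuration described above (Kerrighed capabilities plus krg-rsh for deployment) and lets the Kerrighed scheduler place the processes. [[BR]]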
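
For the possible follow-up combining MPI and IPC, one hypothetical shape such a test could take is sketched below. It assumes System V shared memory as the IPC mechanism and assumes that Kerrighed's single-system-image semantics make a segment created by one rank visible to ranks on other nodes; both assumptions would need to be verified on the test cluster. The key and segment size are arbitrary and error handling is omitted.

{{{
/*
 * Hypothetical MPI + System V IPC test.  Rank 0 creates a shared
 * memory segment and publishes a value in it; the other ranks attach
 * to the same segment and read the value back.  This only spans
 * several nodes if the cluster (here assumed to be Kerrighed) exposes
 * System V IPC cluster-wide.
 */
#include <mpi.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHM_KEY  0x4b52   /* arbitrary key used by every rank */
#define SHM_SIZE 4096     /* arbitrary segment size in bytes  */

int main(int argc, char *argv[])
{
    int rank, shmid = -1;
    int *shared;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 creates the segment; the others wait, then attach to it. */
    if (rank == 0)
        shmid = shmget(SHM_KEY, SHM_SIZE, IPC_CREAT | 0666);
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank != 0)
        shmid = shmget(SHM_KEY, SHM_SIZE, 0666);

    shared = (int *) shmat(shmid, NULL, 0);

    if (rank == 0)
        shared[0] = 42;                  /* publish a value through IPC   */
    MPI_Barrier(MPI_COMM_WORLD);         /* order the write and the reads */

    printf("rank %d read %d from the shared segment\n", rank, shared[0]);

    shmdt(shared);
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        shmctl(shmid, IPC_RMID, NULL);   /* creator removes the segment */

    MPI_Finalize();
    return 0;
}
}}}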