Timestamp: Mar 31, 2008, 11:03:24 PM
Author: rider
Comment: --
| v16 | v17 | |
| 102 | 102 | http://www.metz.supelec.fr/metz/recherche/ersidp/Publication/OnLineFiles/05-Ifrim-2004-05.pdf [[BR]] |
| 103 | 103 | |
| 104 | | Problem2: Running-povray process can not be divided very well for kerrighed to migrate. [[BR]] |
| | 104 | Problem2: The running povray process cannot fork well enough for Kerrighed to migrate it. [[BR]] |
| 105 | 105 | |
| 106 | | Conclusion: |
| | 106 | '''Conclusion''': [[BR]] |
| 107 | 107 | Is Kerrighed more powerful than plain parallel computing? Comparing the attached files, we find that adding more processors or nodes to a task does not guarantee the performance you expect. There seems to be an optimal number of nodes (CPUs) for gaining extra performance, so it is well suited to running different kinds of experiments case by case. [[BR]] |
| | 108 | |
| | 109 | MPI uses sockets by default to allow communication between processes on different machines. A Kerrighed-installed system must be configured with the available Kerrighed capabilities so that the created sockets are linked to Kernet streams, and a special rsh program, krg-rsh, is used for process deployment. The MPI library and runtime are used unchanged. With the configuration mentioned above, the Kerrighed scheduler will manage the deployment of the MPI program. [[BR]] |
| 108 | 110 | |
| 109 | 111 | MPI processes are deployed and managed by the current scheduler: in theory they can therefore be migrated to balance the load across all nodes, and they may be deployed in unexpected ways if other programs are running on the cluster. The first test we made with MPI on Kerrighed was meant to check whether Kerrighed and MPI can run independently and whether Kerrighed introduces significant overhead in MPI communication. The application only performed a lot of broadcasts and message sending between processes (a minimal sketch of such a test is given after this table). [[BR]] |
| 110 | 112 | |
| 111 | | @Performance Report: For the attached file: mpi-povray performance_result.odt |
| | 113 | We may have more tests to try in order to see whether we can benefit from combining MPI and IPC on the Kerrighed cluster, depending on our usage scenario. [[BR]] |
| | 114 | |
| | 115 | ### Performance note: For the attached file: mpi-povray performance_result.odt [[BR]] |
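
Below is a minimal sketch, not the original test program (which is not attached to this page), of an MPI job that does nothing but broadcasts and point-to-point message exchanges between processes, in the spirit of the first Kerrighed/MPI test described above. The MPI calls (MPI_Bcast, MPI_Sendrecv) are standard; the iteration count and the ring exchange pattern are illustrative assumptions.

{{{
/* Sketch of a communication-only MPI test: rank 0 repeatedly broadcasts a
 * value, then every process exchanges it with its neighbours in a ring.
 * The program uses an unchanged MPI library; on a Kerrighed cluster it
 * would be deployed through the configuration described above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (i = 0; i < 1000; i++) {
        /* Rank 0 broadcasts the current iteration number to all processes. */
        value = (rank == 0) ? i : 0;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Each process then sends the value to its right neighbour and
         * receives from its left neighbour. */
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;
        int received;
        MPI_Sendrecv(&value, 1, MPI_INT, right, 0,
                     &received, 1, MPI_INT, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    if (rank == 0)
        printf("completed %d iterations on %d processes\n", i, size);

    MPI_Finalize();
    return 0;
}
}}}

Compiled with mpicc and launched with mpirun under the configuration described above, such a program would have its processes deployed via krg-rsh and left to the Kerrighed scheduler, which is what allows the same unmodified binary to be used when measuring the communication overhead.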