- /usr/src/linux/Documentation/iostats.txt
On 2.4 you might execute "grep 'hda ' /proc/partitions". On 2.6, you have a choice of "cat /sys/block/hda/stat" or "grep 'hda ' /proc/diskstats".
2.6 diskstats:
   3    0   hda 446216 784926 9550688 4382310 424847 312726 5922052 19310380 0 3376340 23705160

Field  1 -- # of reads issued
    This is the total number of reads completed successfully.
Field  2 -- # of reads merged, field 6 -- # of writes merged
    Reads and writes which are adjacent to each other may be merged for
    efficiency. Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O. This field lets you know how often this was done.
Field  3 -- # of sectors read
    This is the total number of sectors read successfully.
Field  4 -- # of milliseconds spent reading
    This is the total number of milliseconds spent by all reads (as
    measured from __make_request() to end_that_request_last()).
Field  5 -- # of writes completed
    This is the total number of writes completed successfully.
Field  7 -- # of sectors written
    This is the total number of sectors written successfully.
Field  8 -- # of milliseconds spent writing
    This is the total number of milliseconds spent by all writes (as
    measured from __make_request() to end_that_request_last()).
Field  9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests are
    given to appropriate request_queue_t and decremented as they finish.
Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.
Field 11 -- weighted # of milliseconds spent doing I/Os
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field. This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.

For a partition:
   3    1   hda1 35486 38030 38030 38030

Field  1 -- # of reads issued
    This is the total number of reads issued to this partition.
Field  2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.
Field  3 -- # of writes issued
    This is the total number of writes issued to this partition.
Field  4 -- # of sectors written
    This is the total number of sectors requested to be written to this
    partition.
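To make the field definitions concrete, here is a minimal sketch (assuming a 2.6 kernel and, by default, the hda device from the example above) that pulls one device's line out of /proc/diskstats and derives average read/write latency from fields 1/4 and 5/8, plus the total time spent with I/O in flight from field 10:

#!/bin/bash
# Minimal sketch: summarize one disk's line from /proc/diskstats.
# Column layout is: major minor name, then the 11 fields described above.
DEV=${1:-hda}

grep " $DEV " /proc/diskstats | awk '
{
    reads    = $4;    # field 1:  reads completed
    read_ms  = $7;    # field 4:  ms spent reading
    writes   = $8;    # field 5:  writes completed
    write_ms = $11;   # field 8:  ms spent writing
    io_ms    = $13;   # field 10: ms spent doing I/O

    if (reads  > 0) printf "avg read latency:  %.2f ms\n", read_ms  / reads;
    if (writes > 0) printf "avg write latency: %.2f ms\n", write_ms / writes;
    printf "total time with I/O in flight: %d ms\n", io_ms;
}'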
- To leave as much of the system's memory as possible for Amira, I shut down all the unnecessary services:
rock@cloud:~> sudo /etc/init.d/dbus stop
rock@cloud:~> sudo /etc/init.d/earlysyslog stop
rock@cloud:~> sudo /etc/init.d/jexec stop
rock@cloud:~> sudo /etc/init.d/random stop
rock@cloud:~> sudo /etc/init.d/resmgr stop
rock@cloud:~> sudo /etc/init.d/consolekit stop
rock@cloud:~> sudo /etc/init.d/haldaemon stop
rock@cloud:~> sudo /etc/init.d/avahi-daemon stop
rock@cloud:~> sudo /etc/init.d/syslog stop
rock@cloud:~> sudo /etc/init.d/auditd stop
rock@cloud:~> sudo /etc/init.d/avahi-dnsconfd stop
rock@cloud:~> sudo /etc/init.d/portmap stop
rock@cloud:~> sudo /etc/init.d/webmin stop
rock@cloud:~> sudo /etc/init.d/smbfs stop
rock@cloud:~> sudo /etc/init.d/ypserv stop
rock@cloud:~> sudo /etc/init.d/alsasound stop
rock@cloud:~> sudo /etc/init.d/cups stop
rock@cloud:~> sudo /etc/init.d/irq_balancer stop
rock@cloud:~> sudo /etc/init.d/kbd stop
rock@cloud:~> sudo /etc/init.d/mysql stop
rock@cloud:~> sudo /etc/init.d/powersaved stop
rock@cloud:~> sudo /etc/init.d/ypbind stop
rock@cloud:~> sudo /etc/init.d/yppasswdd stop
rock@cloud:~> sudo /etc/init.d/ypxfrd stop
rock@cloud:~> sudo /etc/init.d/dhcpd stop
rock@cloud:~> sudo /etc/init.d/nscd stop
rock@cloud:~> sudo /etc/init.d/postfix stop
rock@cloud:~> sudo /etc/init.d/cron stop
rock@cloud:~> sudo /etc/init.d/nfsserver stop
rock@cloud:~> sudo /etc/init.d/smartd stop
rock@cloud:~> sudo /etc/init.d/xinetd stop
rock@cloud:~> sudo /etc/init.d/gpm stop
rock@cloud:~> sudo /etc/init.d/xdm stop
rock@cloud:~> pstree
init─┬─httpd2-prefork───6*[httpd2-prefork]
     ├─6*[mingetty]
     ├─sshd───sshd───sshd───bash───pstree
     ├─udevd
     └─xfs
rock@cloud:~> free -m
             total       used       free     shared    buffers     cached
Mem:          1996        912       1083          0         24        757
-/+ buffers/cache:        130       1865
Swap:         4102          6       4096
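The same cleanup can be scripted. This is only a sketch: the service list simply mirrors the session above and would need to be adjusted for each host:

#!/bin/bash
# Sketch: stop services that are not needed while Amira is running.
# The list mirrors the session above; edit it to match your system.
SERVICES="dbus earlysyslog jexec random resmgr consolekit haldaemon avahi-daemon \
syslog auditd avahi-dnsconfd portmap webmin smbfs ypserv alsasound cups \
irq_balancer kbd mysql powersaved ypbind yppasswdd ypxfrd dhcpd nscd postfix \
cron nfsserver smartd xinetd gpm xdm"

for svc in $SERVICES; do
    # Only try to stop init scripts that actually exist on this host.
    [ -x "/etc/init.d/$svc" ] && sudo "/etc/init.d/$svc" stop
done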
- First start TurboVNC on display :1, then run amira.sh to launch Amira:
rock@cloud:~> /opt/TurboVNC/bin/vncserver

New 'X' desktop is cloud:1

Starting applications specified in /home/rock/.vnc/xstartup
Log file is /home/rock/.vnc/cloud:1.log

rock@cloud:~> /home/rock/3D_Model/TestCase/amira.sh /home/rock/3D_Model/Fly_Sample/sample.hx
Executing /home/rock/Amira4.1.1/bin/start...
Using arch-LinuxAMD64-Optimize ...
Xlib: extension "GLX" missing on display ":1.0".
Xlib: extension "GLX" missing on display ":1.0".
Xlib: extension "GLX" missing on display ":1.0".
Xlib: extension "GLX" missing on display ":1.0".
- [Problem 1] Most likely the NVIDIA driver's GLX configuration is wrong - [to be resolved]
rock@cloud:~> vglrun -d :1 /home/rock/3D_Model/TestCase/amira.sh /home/rock/3D_Model/Fly_Sample/sample.hx
Executing /home/rock/Amira4.1.1/bin/start...
Using arch-LinuxAMD64-Optimize ...
Xlib: extension "GLX" missing on display ":1.0".
Xlib: extension "GLX" missing on display ":1.0".
[VGL] ERROR: in remove--
[VGL]    115: Invalid argument
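A few checks worth running before blaming the driver; this is only a sketch of diagnostics, not output from this machine. As far as I understand VirtualGL, the -d option names the X display used for 3D rendering (normally :0, where the NVIDIA GPU lives), while the application itself runs inside the VNC session (DISPLAY=:1), so pointing -d at :1 may itself explain the error above:

# Sketch of GLX sanity checks (display numbers are assumptions).

# 1. Does the real X server (:0, driven by the NVIDIA card) export GLX at all?
DISPLAY=:0 glxinfo | grep -E "direct rendering|OpenGL renderer"

# 2. Is the glx module loaded in the X server configuration?
grep -i "glx" /etc/X11/xorg.conf /var/log/Xorg.0.log

# 3. Run Amira inside the VNC session (:1) but let VirtualGL redirect the
#    3D rendering to :0 instead of :1.
DISPLAY=:1 vglrun -d :0 /home/rock/3D_Model/TestCase/amira.sh \
    /home/rock/3D_Model/Fly_Sample/sample.hx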
- [Problem 2] Lustre uses a huge amount of system memory??!!
cloud02 (Lustre not mounted):

rock@cloud02:~> pstree
init─┬─6*[mingetty]
     ├─sshd───sshd───sshd───bash───pstree
     ├─udevd
     └─xfs
rock@cloud02:~> free -m
             total       used       free     shared    buffers     cached
Mem:          1988        962       1026          0        162        678
-/+ buffers/cache:        121       1866
Swap:         4102          0       4102
rock@cloud02:~> mount
/dev/sda1 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda6 on /disk type ext3 (rw,acl,user_xattr)
securityfs on /sys/kernel/security type securityfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

cloud01 (Lustre mounted on /home/flyfs):

rock@cloud01:~> pstree
init─┬─6*[mingetty]
     ├─sshd───sshd───sshd───bash───pstree
     ├─udevd
     └─xfs
rock@cloud01:~> free -m
             total       used       free     shared    buffers     cached
Mem:          1988       1830        157          0        107       1626
-/+ buffers/cache:         96       1891
Swap:         4102          0       4102
rock@cloud01:~> mount
/dev/sda1 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda6 on /disk type ext3 (rw,acl,user_xattr)
securityfs on /sys/kernel/security type securityfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
cloud@tcp0:/flyfs on /home/flyfs type lustre (rw)
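Reading the free -m output above, most of the extra "used" memory on cloud01 is page cache (cached grows from 678 MB to 1626 MB, while "-/+ buffers/cache" used stays around 100-130 MB on both nodes), which suggests the Lustre client's read cache rather than anonymous memory is what grew, and that it should be reclaimable. A minimal sketch of what I would check, assuming a Lustre 1.6-style client where the llite tunables live under /proc (run as root):

# Sketch: inspect and cap the Lustre client read cache (Lustre 1.6-style paths).

# How much data may the client cache per mounted Lustre file system?
cat /proc/fs/lustre/llite/*/max_cached_mb

# Lower the cap (e.g. 256 MB) so more RAM stays available for Amira.
for f in /proc/fs/lustre/llite/*/max_cached_mb; do echo 256 > "$f"; done

# As a blunt check, dropping the page cache should give back most of the
# "cached" memory reported by free -m (kernel >= 2.6.16).
sync && echo 3 > /proc/sys/vm/drop_caches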
- [Problem 3] When a Lustre OST needs to be restarted, the system cannot be rebooted if other mount points still have the Lustre file system mounted????!!!!
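The usual order of operations I would try (a sketch, not verified on this cluster) is to unmount the Lustre clients first, forcing the unmount if they hang, and only then unmount and reboot the OST node:

# On every client that has the file system mounted (e.g. cloud01):
umount /home/flyfs          # normal unmount
umount -f /home/flyfs       # force it if the servers are already unreachable

# Then on the OST node itself, unmount the OST backing device before rebooting;
# /mnt/ost0 is a placeholder for the actual OST mount point.
umount /mnt/ost0
reboot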
Last modified on Aug 26, 2008, 10:51:10 PM
Attachments (1)

- Lustre_Mem.png (87.7 KB), added by jazz: "Lustre Use a lot of memory ?!"