
Version 11 (modified by rider, 14 years ago)

KVM + OpenNebula = Virtual Cluster Deployment


【System Environment】

  • Hardware
    Spec: Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz, 8GB memory, 1TB disk
  • Software
    Host/dom0 OS: Debian GNU/Linux testing (squeeze) (64-bit)
    KVM: 72+dfsg-5+squeeze1
    VM/Guest/domU OS: MS Windows XP & Debian lenny (AMD64)
    OpenNebula: 1.4.0

【Step 1: Check KVM and its prerequisites】

  • First confirm that the CPU supports hardware virtualization (the Intel vmx or AMD svm instruction set)
    $ egrep '(vmx|svm)' --color=always /proc/cpuinfo
    
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
    (the identical flags line is printed once per core; on this quad-core CPU it appears four times)
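The check above amounts to grepping the flags field; here is a minimal sketch, where `check_virt` is a hypothetical helper (not a standard tool) that is fed a cpuinfo flags string:

```shell
#!/bin/sh
# Hypothetical helper: report whether a cpuinfo flags string advertises
# hardware virtualization (Intel vmx or AMD svm).
check_virt() {
    if printf '%s\n' "$1" | grep -Eq '(vmx|svm)'; then
        echo "hardware virtualization supported"
    else
        echo "no vmx/svm flag found"
    fi
}

# In practice you would feed it the real cpuinfo:
#   check_virt "$(cat /proc/cpuinfo)"
check_virt "flags : fpu vme de pse vmx ssse3"
```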
    

【Step 2: Install KVM and its required packages, then load the module】

  • Install KVM and the related tools
    $ sudo apt-get install kvm qemu-kvm bridge-utils libvirt-bin virtinst vtun virt-manager
    
  • Check the KVM modules: kvm-intel is for Intel CPUs, kvm-amd is for AMD CPUs
    $ sudo modprobe -l | grep kvm
    kernel/arch/x86/kvm/kvm.ko
    kernel/arch/x86/kvm/kvm-intel.ko
    kernel/arch/x86/kvm/kvm-amd.ko
    
  • Load the KVM module for an Intel chip
    $ sudo modprobe kvm-intel
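Which module to load follows directly from the flags checked in Step 1; a sketch with a hypothetical `kvm_module_for` helper that maps a flags string to the module name:

```shell
#!/bin/sh
# Hypothetical helper: pick the matching KVM module name from a cpuinfo
# flags string (kvm-intel for vmx, kvm-amd for svm).
kvm_module_for() {
    case "$1" in
        *vmx*) echo kvm-intel ;;
        *svm*) echo kvm-amd ;;
        *)     echo "" ;;
    esac
}

# In practice:
#   sudo modprobe "$(kvm_module_for "$(grep -m1 flags /proc/cpuinfo)")"
kvm_module_for "flags : fpu vmx ssse3"
```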
    

【Step 3: Install the packages OpenNebula needs】

  • Install the required packages on both pcX and pcY (assuming two machines will be clustered)
    $ sudo aptitude install g++ ruby libsqlite3-0 sqlite3 libsqlite3-dev libsqlite3-ruby libxmlrpc-c3-dev libxmlrpc-c3 libssl-dev scons
    

【Step 4: Download and install OpenNebula】

  • Run on pcX (OpenNebula only needs to be installed on pcX)
  • Download the source code
    $ cd
    $ wget http://dev.opennebula.org/attachments/download/103/one-1.4.0.tar.gz
    $ tar zxvf one-1.4.0.tar.gz
    
  • Build and install OpenNebula
    $ cd one-1.4
    $ sudo scons
    $ sudo mkdir /home/one
    $ sudo ./install.sh -d /home/one
    
  • Set the OpenNebula environment variables
    $ sudo su
    
    # echo export ONE_LOCATION=/home/one >> ~/.bashrc
    # echo export ONE_XMLRPC="http://localhost:2633/RPC2" >> ~/.bashrc
    # echo export PATH='$ONE_LOCATION/bin:$PATH' >> ~/.bashrc
    # echo export ONE_AUTH=/home/one/.one/one_auth >> ~/.bashrc 
    
    # mkdir /home/one/.one
    # echo "root:cloud123" >> /home/one/.one/one_auth
    
    # source ~/.bashrc
    # echo $ONE_AUTH
    (check that the $ONE_AUTH path exists)
    
    # echo $ONE_LOCATION 
    (check that the $ONE_LOCATION path exists)
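Before moving on it is worth confirming that the paths those variables point at actually exist; a sketch using a hypothetical `check_one_env` helper (not part of OpenNebula), demonstrated against a throwaway directory rather than the real /home/one:

```shell
#!/bin/sh
# Hypothetical sanity check: verify that ONE_LOCATION is a directory
# and ONE_AUTH is a readable file before starting oned.
check_one_env() {
    [ -d "$1" ] || { echo "ONE_LOCATION missing: $1"; return 1; }
    [ -r "$2" ] || { echo "ONE_AUTH missing: $2"; return 1; }
    echo "environment OK"
}

# Demo against a throwaway layout mirroring /home/one:
tmp=$(mktemp -d)
mkdir -p "$tmp/.one"
echo "root:cloud123" > "$tmp/.one/one_auth"
check_one_env "$tmp" "$tmp/.one/one_auth"
```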
    

【Step 5: Edit the ONE configuration file】

  • Edit on pcX: comment out lines 62~65, 106~110 & 151~154, and uncomment lines 70~73, 115~119 & 159~162
    # cd /home/one
    # gedit etc/oned.conf
    
    21 HOST_MONITORING_INTERVAL = 5
    23 VM_POLLING_INTERVAL      = 10
    
    62 #IM_MAD = [
    63 #    name       = "im_xen",
    64 #    executable = "one_im_ssh",
    65 #    arguments  = "im_xen/im_xen.conf" ]
    
    70 IM_MAD = [
    71     name       = "im_kvm",
    72     executable = "one_im_ssh",
    73     arguments  = "im_kvm/im_kvm.conf" ]
    
    106 #VM_MAD = [
    107 #    name       = "vmm_xen",
    108 #    executable = "one_vmm_xen",
    109 #    default    = "vmm_xen/vmm_xen.conf",
    110 #    type       = "xen" ]
    
    115 VM_MAD = [
    116     name       = "vmm_kvm",
    117     executable = "one_vmm_kvm",
    118     default    = "vmm_kvm/vmm_kvm.conf",
    119     type       = "kvm" ]
    
    151 #TM_MAD = [
    152 #    name       = "tm_ssh",
    153 #    executable = "one_tm",
    154 #    arguments  = "tm_ssh/tm_ssh.conf" ]
    
    159 TM_MAD = [
    160     name       = "tm_nfs",
    161     executable = "one_tm",
    162     arguments  = "tm_nfs/tm_nfs.conf" ]
    

【Step 6: Start ONE】

  • Reminders before starting ONE:
    1. Make sure the KVM module is loaded on both pcX and pcY
    2. Both machines must allow passwordless root SSH login
      // run on pcX (the server)
      # ssh-keygen
      # cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
      # scp -r ~/.ssh pcY:~
      
      // test the passwordless SSH login
      ssh pcY
      [CTRL+D]
      
    3. ONE only needs to be started on pcX
  • Run on pcX
    # one start
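Item 2 above can be spot-checked before starting ONE; a sketch with a hypothetical `check_ssh_keys` helper that only verifies the key files are in place (it does not test the actual SSH handshake), demonstrated on a throwaway directory standing in for ~/.ssh:

```shell
#!/bin/sh
# Hypothetical helper: confirm the pieces needed for passwordless root SSH
# (a private key and an authorized_keys file) exist in a .ssh directory.
check_ssh_keys() {
    [ -f "$1/id_rsa" ] && [ -f "$1/authorized_keys" ] \
        && echo "keys ready" || echo "keys missing"
}

# Demo on a throwaway directory; on pcX you would pass ~/.ssh instead.
tmp=$(mktemp -d)
touch "$tmp/id_rsa" "$tmp/id_rsa.pub"
cp "$tmp/id_rsa.pub" "$tmp/authorized_keys"
check_ssh_keys "$tmp"
```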
    

【Step 7: Add pcX and pcY to the ONE host pool】

  • Run on pcX
    # onehost add pcX im_kvm vmm_kvm tm_nfs
    # onehost add pcY im_kvm vmm_kvm tm_nfs
    
    # onehost list
     HID NAME                      RVM   TCPU   FCPU   ACPU     TMEM    FMEM STAT
       1 pcX                       0    400    399    400  1313856       0   on
       2 pcY                       0    400    399    400  1313856       0   on
    
    # onehost show pcX
    
  • onehost list column descriptions
    • RVM - Number of running VMs
    • TCPU - Total CPU
    • FCPU - Free CPU
    • ACPU - Available CPU (not allocated by VMs)
    • TMEM - Total Memory
    • FMEM - Free Memory
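The table can be filtered with standard tools; a sketch that extracts the names of hosts with free CPU from `onehost list`-style output (the sample reuses the values shown above):

```shell
#!/bin/sh
# Sample `onehost list` output, mirroring the table shown on this page.
onehost_sample='HID NAME RVM TCPU FCPU ACPU TMEM FMEM STAT
1 pcX 0 400 399 400 1313856 0 on
2 pcY 0 400 399 400 1313856 0 on'

# Print the name of every host whose FCPU (column 5) exceeds 100.
printf '%s\n' "$onehost_sample" | awk 'NR > 1 && $5 > 100 { print $2 }'
```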

【Step 8: Build a VM】

Building a VM: Microsoft Windows XP example

  • Create a 10GB QEMU disk image (qcow2 format)
    $ sudo qemu-img create -f qcow2 xp.img 10G
    
  • Boot the VM from the ISO to install XP (10GB disk & 1GB memory)
    $ sudo qemu-system-x86_64 -cdrom /home/clouder/xp.iso -hda xp.img -boot d -m 1024 -localtime -net nic -net tap
    
  • Boot the VM (10GB disk & 1GB memory)
    $ sudo qemu-system-x86_64 -hda xp.img -m 1024 -net nic -net tap
    

Building a VM: Debian example

  • Create a 10GB QEMU disk image (qcow2 format)
    $ sudo qemu-img create -f qcow2 deb.img 10G
    
  • Download the Debian-5.0 ISO file
    $ wget http://cdimage.debian.org/debian-cd/5.0.6/amd64/iso-cd/debian-506-amd64-CD-1.iso
    
  • Boot the VM from the ISO to install Debian (10GB disk & 1GB memory)
    $ sudo qemu-system-x86_64 -cdrom /home/clouder/debian-506-amd64-CD-1.iso -hda deb.img -boot d -m 1024 -localtime -net nic -net tap
    
  • Boot the VM (10GB disk & 1GB memory)
    $ sudo qemu-system-x86_64 -hda deb.img -m 1024 -net nic -net tap
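Both examples share the same qemu-system-x86_64 invocation pattern; a sketch with a hypothetical `build_vm_cmd` helper that assembles (but does not run) those command lines:

```shell
#!/bin/sh
# Hypothetical helper: assemble the qemu-system-x86_64 command lines used
# above from an image, a memory size, and an optional install ISO.
build_vm_cmd() {
    img="$1"; mem="$2"; iso="$3"
    cmd="qemu-system-x86_64 -hda $img -m $mem -net nic -net tap"
    if [ -n "$iso" ]; then
        # Installing: attach the ISO and boot from CD-ROM.
        cmd="$cmd -cdrom $iso -boot d -localtime"
    fi
    echo "$cmd"
}

build_vm_cmd deb.img 1024
build_vm_cmd deb.img 1024 debian-506-amd64-CD-1.iso
```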
    

【Step 9: Boot a VM through ONE】

  • First create the network bridge (br0) on pcX
    # sudo vim /etc/network/interfaces
    
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    allow-hotplug eth0
    auto br0
    iface br0 inet static
            address xxx.xxx.xxx
            netmask 255.255.255.0
            broadcast xxx.xxx.xxx.255
            gateway xxx.xxx.xxx.254
            bridge_ports    eth0
            bridge_stp      off
            bridge_maxwait  0
            bridge_fd       0
            # dns-* options are implemented by the resolvconf package, if installed
            dns-nameservers xxx.xxx.xxx.xxx
    
  • After the bridge is configured, restart networking on pcX
    # sudo /etc/init.d/networking restart
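Whether the bridge came up can be confirmed with `brctl show`; a sketch that parses sample output (the sample line is hypothetical, matching the config above) to verify eth0 is enslaved to br0:

```shell
#!/bin/sh
# Hypothetical `brctl show` output for the /etc/network/interfaces above.
# On the real host run `brctl show` directly instead.
brctl_sample='bridge name	bridge id		STP enabled	interfaces
br0		8000.001e68aabbcc	no		eth0'

# Report the bridge and its enslaved port if br0 is present.
printf '%s\n' "$brctl_sample" | awk 'NR > 1 && $1 == "br0" { print "br0 up, port: " $NF }'
```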
    
  • Define the OpenNebula Virtual Network types to manage: Public (fixed) & Ranged
    # sudo vim /home/one/etc/public.net
    
  • Public: fixed IP addresses
    NAME = "Public"
    TYPE = FIXED
    
    #We have to bind this network to br0 for Internet access
    BRIDGE = br0
    
    LEASES = [IP=192.168.100.1]
    LEASES = [IP=192.168.100.2]
    LEASES = [IP=192.168.100.3]
    
    # sudo vim /home/one/etc/range.net
    
  • Ranged: a range of IP addresses
    NAME = "Range"
    TYPE = RANGED
    
    #Now we'll use the cluster private network (physical)
    BRIDGE = br0
    
    NETWORK_SIZE    = C
    NETWORK_ADDRESS = 192.168.0.0
    
  • Edit the template for the xp virtual machine
    # cd /home/domains
    # gedit xp.one
    
    NAME   = xp
    CPU    = 1
    MEMORY = 1024
    
    OS = [ boot = hd ]
    
    DISK = [ source   = /var/lib/libvirt/images/xp.img,
             clone    = no,
             target   = hda,
             readonly = no ]
    
    GRAPHICS = [ type ="vnc",
                 listen ="127.0.0.1",
                 port = "5901" ]
    
    NIC = [ network = "Public"]
    
  • Use ONE to start the xp VM and deploy it to the physical host pcX
    # onevm create xp.one ; onevm deploy xp pcX
    
    # onevm list
      ID     NAME STAT CPU     MEM        HOSTNAME         TIME
       1      xp runn   0  131072           pcX  00 00:04:21
    
    # onevm show xp
    
  • onevm list column descriptions
    • ID - ONE VM identifier
    • NAME - Name of the VM
    • STAT - Status of the VM
    • CPU - CPU percentage used by the VM
    • MEM - Memory used by the VM
    • HOSTNAME - Host where the VM is being or was run
    • TIME - Time since the submission of the VM (days hours:minutes:seconds)
  • Log in to the xp desktop with a VNC viewer, or open it through virt-manager
    $ vncviewer 127.0.0.1:5901
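Templates like xp.one differ only in a few values; a sketch that regenerates the template above from shell variables via a heredoc, so further VMs only need these four assignments changed:

```shell
#!/bin/sh
# Regenerate the xp.one template shown above from shell variables.
NAME=xp
MEM=1024
IMG=/var/lib/libvirt/images/xp.img
PORT=5901

cat > "$NAME.one" <<EOF
NAME   = $NAME
CPU    = 1
MEMORY = $MEM

OS = [ boot = hd ]

DISK = [ source   = $IMG,
         clone    = no,
         target   = hda,
         readonly = no ]

GRAPHICS = [ type ="vnc",
             listen ="127.0.0.1",
             port = "$PORT" ]

NIC = [ network = "Public" ]
EOF
cat "$NAME.one"
```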
    

【Step 10: Advanced usage - Live Migration】

  • You can run onevm list on pcX at any time to see the state of every VM
  • For live migration the clocks on pcX and pcY must agree (run the following on both machines first)
    $ sudo ntpdate time.stdtime.gov.tw ; sudo hwclock -w
    
  • Set up shared storage (this exercise uses NFS: pcX is the NFS server, pcY the NFS client)
    $ sudo apt-get install nfs-kernel-server 
    $ sudo gedit /etc/exports
    
  • On pcX, declare which client machines & directories to export
    /home/domains  pcY_IP(rw,sync,no_subtree_check,no_root_squash)
    /home/one  pcY_IP(rw,sync,no_subtree_check,no_root_squash)
    
  • On pcX, restart the NFS server & list the exported shares
    $ sudo /etc/init.d/nfs-kernel-server restart
    $ sudo showmount -e localhost
    
  • On pcY, install the required package and mount the shares
    $ sudo apt-get install nfs-common
    (install the packages NFS needs)
    
    $ sudo mkdir /home/domains
    $ sudo mount.nfs pcX:/home/domains /home/domains
    (mount the shared directories)
    
    $ sudo mkdir /home/one
    $ sudo mount.nfs pcX:/home/one/ /home/one
    
    $ mount
    
  • On pcX, check that pcY has mounted the shares successfully
    $ sudo showmount -a
    
  • Live-migrate the xp VM from pcX to pcY & check the VM's state
    # onevm livemigrate xp pcY
    # onevm list
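Before migrating, it pays to confirm both shares really are mounted on pcY; a sketch that parses /proc/mounts-style lines (the sample lines are hypothetical output for this setup; on pcY you would read the real /proc/mounts instead):

```shell
#!/bin/sh
# Hypothetical /proc/mounts lines for the NFS setup above.
mounts_sample='pcX:/home/domains /home/domains nfs rw 0 0
pcX:/home/one /home/one nfs rw 0 0'

# Report each required share as mounted or not (column 2 is the mount point).
for dir in /home/domains /home/one; do
    if printf '%s\n' "$mounts_sample" | awk -v d="$dir" '$2 == d { ok = 1 } END { exit !ok }'; then
        echo "$dir mounted"
    else
        echo "$dir NOT mounted"
    fi
done
```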
    
