OpenStack Liberty version
Hardware requirements and network configuration
Network diagram
The actual system is built on this layout.
Physical server specifications
CPU - Intel Xeon E5-2603 v3 (6 cores)
RAM - 32GB DDR4
DISK - 1TB SATA 3 HDD
SSD - 120GB x 2 (RAID 0, used for the OS area)
OS - CentOS 7.2
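Since the OpenStack nodes are themselves VMs that will later run instances, it is worth confirming on the host that VT-x is exposed and that nested KVM is enabled. A minimal sketch, assuming an Intel CPU and the kvm_intel module:

# count of cores exposing VT-x (should be greater than 0)
grep -c vmx /proc/cpuinfo

# check whether nested virtualization is enabled (Y/N, or 1/0 on older kernels)
cat /sys/module/kvm_intel/parameters/nested

# enable it persistently and reload the module (assumes no VMs are running yet)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel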
First, download the CentOS 7 ISO to install the virtual machines.
[root@localhost home]# wget http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
--2016-01-29 23:47:34-- http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
Resolving ftp.daumkakao.com (ftp.daumkakao.com)... 103.246.57.108
Connecting to ftp.daumkakao.com (ftp.daumkakao.com)|103.246.57.108|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4329570304 (4.0G) [application/x-iso9660-image]
Saving to: ‘CentOS-7-x86_64-DVD-1511.iso’
94% [====================================================================================================================> ] 4,073,544,588 9.65MB/s in 15m 0s
2016-01-30 00:02:39 (4.32 MB/s) - Connection closed at byte 4073544588. Retrying.
--2016-01-30 00:02:40-- (try: 2) http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
Connecting to ftp.daumkakao.com (ftp.daumkakao.com)|103.246.57.108|:80... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 4329570304 (4.0G), 256025716 (244M) remaining [application/x-iso9660-image]
Saving to: ‘CentOS-7-x86_64-DVD-1511.iso’
100%[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++=======>] 4,329,570,304 5.29MB/s in 33s
2016-01-30 00:03:13 (7.31 MB/s) - ‘CentOS-7-x86_64-DVD-1511.iso’ saved [4329570304/4329570304]
[root@localhost home]#
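Before using the ISO it is worth verifying its checksum; the mirror publishes a sha256sum.txt in the same directory (path assumed here), so the two hashes can simply be compared:

# hash the downloaded ISO and compare it with the published value
sha256sum CentOS-7-x86_64-DVD-1511.iso
wget -q -O - http://ftp.daumkakao.com/centos/7/isos/x86_64/sha256sum.txt | grep DVD-1511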
To create the management network, first create the following file with vi and define it:
[root@localhost networks]# vi managenetwork.xml
<network>
<name>managenetwork</name>
<forward dev='br0' mode='nat'>
<interface dev='br0'/> // enter the bridge interface that is connected to the Internet
</forward>
<bridge name='virbr1' stp='on' delay='0'/> // if virbr0 is not already in use, it can just as well be named virbr0
<domain name='managenetwork'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
</ip>
</network>
[root@localhost networks]# virsh net-define /etc/libvirt/qemu/networks/managenetwork.xml
Network managenetwork defined from /etc/libvirt/qemu/networks/managenetwork.xml
[root@localhost networks]#
[root@localhost networks]# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
managenetwork inactive no yes
storagenetwork active yes yes
[root@localhost networks]# virsh net-start managenetwork
Network managenetwork started
[root@localhost networks]# virsh net-autostart managenetwork
Network managenetwork marked as autostarted
[root@localhost networks]# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
managenetwork active yes yes
storagenetwork active yes yes
[root@localhost networks]#
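The storagenetwork seen in the listing above was defined in the same way; the snippet below is only a hypothetical example of such an isolated (host-only) network on a separate subnet, not the exact definition used here:

# hypothetical storage network: no <forward> element, so it stays host-only
cat > /etc/libvirt/qemu/networks/storagenetwork.xml <<'EOF'
<network>
  <name>storagenetwork</name>
  <bridge name='virbr2' stp='on' delay='0'/>
  <ip address='10.0.1.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
virsh net-define /etc/libvirt/qemu/networks/storagenetwork.xml
virsh net-start storagenetwork
virsh net-autostart storagenetwork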
After creating the network, temporarily install a Minimal-install VM (installation steps omitted; everything is left at the defaults except that two network devices are attached, eth0 connected to managenetwork and eth1 to br0; a rough virt-install sketch of this step follows below).
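The installation itself is not shown in the original; the sketch below is only an illustration of that step. The disk path and size match the image seen later, but the ISO location, memory, and vCPU count are assumptions; the two --network options mirror the layout described above.

# hypothetical virt-install invocation; adjust paths and sizes to your environment
virt-install --name centos7.0 \
  --memory 2048 --vcpus 2 \
  --disk path=/mnt/osvg/vol/ospool/linux.qcow2,size=100,format=qcow2,bus=virtio \
  --cdrom /home/CentOS-7-x86_64-DVD-1511.iso \
  --network network=managenetwork,model=virtio \
  --network bridge=br0,model=virtio \
  --graphics vnc --os-variant centos7.0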
To prepare the OS that will serve as the base image for the VMs, and to hand it a fixed IP via DHCP, first check the virtual machine's MAC addresses:
[root@localhost~]# virsh dumpxml centos7.0 | grep 'mac address'
<mac address='52:54:00:4a:8d:f2'/> // eth0, connected to the managenetwork NAT network
<mac address='52:54:00:67:ff:88'/> // eth1, connected to the host's br0
After checking the MAC addresses, edit managenetwork and add the MAC addresses and the IPs to be assigned to the DHCP section.
[root@localhost ~]# virsh net-edit managenetwork
...
(snipped)
...
<dhcp>
<range start='10.0.0.2' end='10.0.0.254'/>
<host mac='52:54:00:4a:8d:f2' ip='10.0.0.5'/>
<host mac='52:54:00:4a:8d:11' ip='10.0.0.11'/> // when creating the VMs later, give them these fixed MAC addresses so DHCP assigns the matching IPs
<host mac='52:54:00:4a:8d:31' ip='10.0.0.31'/>
<host mac='52:54:00:4a:8d:41' ip='10.0.0.41'/>
<host mac='52:54:00:4a:8d:51' ip='10.0.0.51'/>
<host mac='52:54:00:4a:8d:52' ip='10.0.0.52'/>
</dhcp>
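As an alternative to editing the XML and bouncing the network, recent libvirt releases can add static DHCP host entries on the fly with virsh net-update; a minimal sketch, applied to both the live network and its persistent config:

# add one static lease without restarting the network
virsh net-update managenetwork add ip-dhcp-host \
  "<host mac='52:54:00:4a:8d:11' ip='10.0.0.11'/>" --live --config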
Shut down the VM, restart the network, then start the VM again and check that it gets the expected address.
[root@localhost ~]# virsh shutdown centos7.0
Domain centos7.0 is being shutdown
[root@localhost ~]# virsh net-destroy managenetwork
Network managenetwork destroyed
[root@localhost ~]# virsh net-start managenetwork
Network managenetwork started
[root@localhost ~]# virsh start centos7.0
Domain centos7.0 started
[root@localhost ~]# virsh domifaddr centos7.0
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet0 52:54:00:4a:8d:f2 ipv4 10.0.0.5/24
[root@localhost ~]#
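The lease can also be cross-checked from the network side; net-dhcp-leases has been available since libvirt 1.2.6, so it is present on CentOS 7.2:

# list the DHCP leases handed out on the management network
virsh net-dhcp-leases managenetwork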
Connect to the VM holding the base image and edit the base image.
[root@localhost ~]# ssh root@10.0.0.5
The authenticity of host '10.0.0.5 (10.0.0.5)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.5' (ECDSA) to the list of known hosts.
root@10.0.0.5's password:
Last login: Sat Jan 30 02:27:11 2016 from 10.0.0.1
[root@localhost ~]# yum -y update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: ftp.kaist.ac.kr
* extras: ftp.kaist.ac.kr
* updates: ftp.kaist.ac.kr
...
(snipped)
...
To make it easy for the VMs built from the same image to reach one another,
generate an SSH key pair with ssh-keygen, copy the public key to authorized_keys, and then shut the system down.
[root@localhost ~]#
[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx root@localhost.managenetwork
The key's randomart image is:
+--[ RSA 2048]----+
| .. . .o.|
| .o +o+|
| .. .Bo|
| .. .+oo|
| S .. .+|
| .. oE|
| . + |
| . .. . |
| . .o..|
+-----------------+
[root@localhost ~]# cd .ssh
[root@localhost ~]# cp id_rsa.pub authorized_keys
[root@localhost ~]# shutdown -h now
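Because several VMs will boot from copies of this image, it can also help to clear host-specific state from it once the guest is shut down. A hedged sketch run on the KVM host with virt-sysprep (requires the libguestfs-tools package); the operation list is deliberately narrow because the default operation set would also wipe /root/.ssh and remove the authorized_keys created above:

yum -y install libguestfs-tools

# reset only machine-id and persistent-NIC naming state in the offline image
virt-sysprep -d centos7.0 --operations machine-id,net-hwaddr,udev-persistent-net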
Create the disk images that the VMs will use as qcow2 overlays backed by the original image.
[root@localhost ~]# virsh domblklist centos7.0
Target Source
------------------------------------------------
vda /mnt/osvg/vol/ospool/linux.qcow2
[root@localhost ~]# cd /mnt/osvg/vol/ospool
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./controller.qcow2
Formatting './controller.qcow2', fmt=qcow2 size=107374182400 backing_file='./linux.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./compute.qcow2
Formatting './compute.qcow2', fmt=qcow2 size=107374182400 backing_file='./linux.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./block1.qcow2
Formatting './block1.qcow2', fmt=qcow2 size=107374182400 backing_file='./linux.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./object1.qcow2
Formatting './object1.qcow2', fmt=qcow2 size=107374182400 backing_file='./linux.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./object2.qcow2
Formatting './object2.qcow2', fmt=qcow2 size=107374182400 backing_file='./linux.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# chown qemu:qemu *.qcow2
[root@localhost ospool]# ls -alrt
total xxxxxxx
drwxr-xr-x 4 nfsnobody root 4096 Jan 23 20:16 ..
-rw-r--r-- 1 qemu qemu 3673030656 Jan 25 07:30 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 4676124672 Jan 25 07:30 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 3686268928 Jan 28 10:21 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 1436483584 Jan 30 14:57 linux.qcow2
-rw-r--r-- 1 qemu qemu 107390894592 Feb 1 05:19 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 4346937344 Feb 10 05:44 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 4289134592 Feb 16 09:27 xxxxx.qcow2
-rw-r--r-- 1 qemu qemu 198656 Feb 17 05:37 controller.qcow2
-rw-r--r-- 1 qemu qemu 198656 Feb 17 05:37 compute.qcow2
-rw-r--r-- 1 qemu qemu 198656 Feb 17 05:37 block1.qcow2
-rw-r--r-- 1 qemu qemu 198656 Feb 17 05:37 object1.qcow2
drwxr-xr-x 2 root root 4096 Feb 17 05:38 .
-rw-r--r-- 1 qemu qemu 198656 Feb 17 05:38 object2.qcow2
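Each overlay can be checked against its backing file before it is wired into a domain:

# show the backing file of an overlay (add --backing-chain to print the whole chain)
qemu-img info ./controller.qcow2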
To create the virtual machines, dump the original VM's XML definition to a file, copy it once per node, and edit the copies.
The VMs' UUIDs and MAC addresses must not collide, so delete them or adjust them appropriately.
[root@localhost qemu]# virsh dumpxml generic >> /etc/libvirt/qemu/dump.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/controller.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/compute.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/block1.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/object1.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/object2.xml
[root@localhost qemu]# vim /etc/libvirt/qemu/controller.xml
<domain type='kvm'>
<name>controller</name> // change the name
<uuid>5e26e120-f046-4c8a-bf56-dbd4b9ba8f3f</uuid> // delete this line in your own config; UUIDs must not collide, and a new one is generated automatically on define
<memory unit='GiB'>4</memory> // adjust the memory as needed; it is easier to change the unit to GiB first and then set the value
<currentMemory unit='GiB'>4</currentMemory> // planned allocation: controller 4G, compute 8G, block1 2G, and 4G for each object node
<vcpu placement='static'>2</vcpu> // planned vCPUs: 2, 6, 2, 2, 2 respectively
... (omitted) ...
<cpu mode='host-passthrough'> // for nested virtualization, change the CPU mode to host-passthrough or host-model
... (omitted) ...
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/mnt/osvg/vol/ospool/controller.qcow2'/> // the disk image created earlier
<target dev='vda' bus='virtio'/>
<boot order='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
... (omitted) ...
<interface type='network'> // corresponds to eth0
<mac address='52:54:00:4a:8d:11'/> // the MAC address that was given a DHCP reservation in the management network above
<source network='managenetwork'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='bridge'> // corresponds to eth1; remove this device on the block1, object1, and object2 nodes
<mac address='52:54:00:a6:f8:42'/> // MAC addresses must not collide, so change this value or simply delete the line
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
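As an alternative to editing UUIDs and MAC addresses by hand, virt-clone can generate fresh ones while reusing the overlay image created earlier (--preserve-data keeps the named disk file as is instead of copying it, and --mac pins the first NIC to the reserved DHCP MAC from above); memory and vCPU counts would still need a virsh edit afterwards. A hedged sketch:

# hypothetical alternative to hand-editing the dumped XML
virt-clone --original centos7.0 --name controller \
  --file /mnt/osvg/vol/ospool/controller.qcow2 --preserve-data \
  --mac 52:54:00:4a:8d:11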
Configure compute, block1, object1, and object2 in the same way, then define and start each of them.
[root@localhost qemu]# virsh define /etc/libvirt/qemu/controller.xml
Domain controller defined from /etc/libvirt/qemu/controller.xml
[root@localhost qemu]# virsh define /etc/libvirt/qemu/compute.xml
Domain compute defined from /etc/libvirt/qemu/compute.xml
[root@localhost qemu]# virsh define /etc/libvirt/qemu/block1.xml
Domain block1 defined from /etc/libvirt/qemu/block1.xml
[root@localhost qemu]# virsh define /etc/libvirt/qemu/object1.xml
Domain object1 defined from /etc/libvirt/qemu/object1.xml
[root@localhost qemu]# virsh define /etc/libvirt/qemu/object2.xml
Domain object2 defined from /etc/libvirt/qemu/object2.xml
[root@localhost qemu]# virsh list --all
Id Name State
----------------------------------------------------
- centos7.0 shut off
- compute shut off
- controller shut off
- juho shut off
- object1 shut off
- object2 shut off
[root@localhost qemu]# virsh start controller
Domain controller started
[root@localhost qemu]# virsh start compute
Domain compute started
[root@localhost qemu]# virsh start block1
Domain block1 started
[root@localhost qemu]# virsh start object1
Domain object1 started
[root@localhost qemu]# virsh start object2
Domain object2 started
[root@localhost qemu]# virsh list --all
Id Name State
----------------------------------------------------
4 controller running
5 compute running
6 block1 running
7 object1 running
8 object2 running
- centos7.0 shut off
[root@localhost qemu]# ping 10.0.0.11
PING 10.0.0.11 (10.0.0.11) 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.193 ms
64 bytes from 10.0.0.11: icmp_seq=2 ttl=64 time=0.168 ms
^C
--- 10.0.0.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.168/0.180/0.193/0.018 ms
[root@localhost ospool]# ping 10.0.0.31
PING 10.0.0.31 (10.0.0.31) 56(84) bytes of data.
64 bytes from 10.0.0.31: icmp_seq=1 ttl=64 time=0.185 ms
^C
--- 10.0.0.31 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
[root@localhost ospool]# ping 10.0.0.41
PING 10.0.0.41 (10.0.0.41) 56(84) bytes of data.
64 bytes from 10.0.0.41: icmp_seq=1 ttl=64 time=0.164 ms
^C
--- 10.0.0.41 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
[root@localhost ospool]# ping 10.0.0.51
PING 10.0.0.51 (10.0.0.51) 56(84) bytes of data.
64 bytes from 10.0.0.51: icmp_seq=1 ttl=64 time=0.176 ms
^C
--- 10.0.0.51 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
[root@localhost ospool]# ping 10.0.0.52
PING 10.0.0.52 (10.0.0.52) 56(84) bytes of data.
64 bytes from 10.0.0.52: icmp_seq=1 ttl=64 time=0.177 ms
64 bytes from 10.0.0.52: icmp_seq=2 ttl=64 time=0.183 ms
^C
--- 10.0.0.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.177/0.180/0.183/0.003 ms
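Since every node was cloned from the base image that already carries the key pair and authorized_keys, passwordless SSH between the nodes can be verified with a quick loop run from any one of the new VMs:

# run from one of the new VMs; all of them share the same key pair
for ip in 10.0.0.11 10.0.0.31 10.0.0.41 10.0.0.51 10.0.0.52; do
    ssh -o StrictHostKeyChecking=no root@$ip hostname
done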
Reference: https://kashyapc.fedorapeople.org/virt/lc-2012/snapshots-handout.html