Unlike Cinder, Object Storage does not expose a directory hierarchy; it gives each user units called containers
and stores files in those containers. Its code name is swift.
Its components are installed on the controller node and on the object1 and object2 nodes.
We start with the controller node.
The swift service does not use a SQL database; it uses SQLite databases distributed across the individual storage nodes.
First, create the swift user and grant it the admin role.
[root@controller ~]# openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 11d4a91e2c144db1b8c8dc57cc3d5758 |
| name      | swift                            |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user swift admin
Create the swift service and its API endpoints.
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Object Storage         |
| enabled     | True                             |
| id          | 671ddd2bc6d74e309c1007ba84643da3 |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | 6cab534b2c284cc7bfa50ae953286c03 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 671ddd2bc6d74e309c1007ba84643da3 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(tenant_id)s |
+--------------+----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| enabled | True |
| id | ed143ae6ac584b1087851fe4a9acd9ad |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 671ddd2bc6d74e309c1007ba84643da3 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(tenant_id)s |
+--------------+----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 5cd990d72b914e7db019a492ef4c988d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 671ddd2bc6d74e309c1007ba84643da3 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1 |
+--------------+----------------------------------+
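The three endpoint-create calls differ only in the interface and, for the admin interface, the URL. As a sketch that is not part of the original guide, a small loop can print the commands for review before running them; drop the leading echo to execute them for real:

```shell
# Build the three endpoint-create commands (dry run): print them for review,
# and remove the "echo" to actually run them against Keystone.
CMDS=""
for iface in public internal admin; do
    if [ "$iface" = "admin" ]; then
        url='http://controller:8080/v1'            # admin endpoint carries no project suffix
    else
        url='http://controller:8080/v1/AUTH_%\(tenant_id\)s'
    fi
    cmd="openstack endpoint create --region RegionOne object-store $iface $url"
    CMDS="$CMDS$cmd
"
    echo "$cmd"
done
```

This only prints text, so it is safe to run anywhere; the actual `openstack` invocations still happen one at a time, as in the transcript above.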
Install the packages. Unlike the other services, swift can run as a standalone service, in which case
a separate authentication service is required. Here we use the default identity service, so the proxy is
installed on the controller node; packages such as keystone and memcached are already installed.
[root@controller ~]# yum install -y openstack-swift-proxy python-swiftclient \
python-keystoneclient python-keystonemiddleware \
memcached
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Package python-swiftclient-2.6.0-1.el7.noarch already installed and latest version
Package 1:python-keystoneclient-1.7.2-1.el7.noarch already installed and latest version
Package python-keystonemiddleware-2.3.1-1.el7.noarch already installed and latest version
Package memcached-1.4.15-9.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-swift-proxy.noarch 0:2.5.0-1.el7 will be installed
...(omitted)...
python-dns.noarch 0:1.12.0-1.20150617git465785f.el7
python-pyeclib.x86_64 0:1.0.7-2.el7
Complete!
[root@controller ~]#
Fetch the proxy service configuration file from the Object Storage source repository.
[root@controller ~]# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 29655  100 29655    0     0  20243      0  0:00:01  0:00:01 --:--:-- 20242
[root@controller ~]#
Edit /etc/swift/proxy-server.conf as follows.
In the [DEFAULT] section, set the bind port, user, and configuration directory.
In the [pipeline:main] section, enable the appropriate modules.
In the [app:proxy-server] section, enable automatic account creation.
In the [filter:keystoneauth] section, configure the roles that act as operators.
In the [filter:authtoken] section, configure Identity service access; replace SWIFT_PASS with the password you chose. Remove all other options in this section.
In the [filter:cache] section, set the location where memcached is running.
[root@controller ~]# vi /etc/swift/proxy-server.conf
[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
...
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
...
[app:proxy-server]
use = egg:swift#proxy
...
account_autocreate = true
...
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user
...
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = true
...
[filter:cache]
use = egg:swift#memcache
...
memcache_servers = 127.0.0.1:11211
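For unattended installs, the [filter:authtoken] settings above can be written with a here-doc instead of an interactive editor. The following is only a sketch: it writes to a temporary file so it is harmless to run as-is; on the real controller you would point OUT at /etc/swift/proxy-server.conf after clearing the old section.

```shell
# Append the [filter:authtoken] settings non-interactively. OUT is a temp file
# in this sketch; use /etc/swift/proxy-server.conf on the actual controller.
OUT=$(mktemp)
cat >> "$OUT" <<'EOF'
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = true
EOF
grep -c ' = ' "$OUT"    # 10 key/value pairs written
```

SWIFT_PASS is still the placeholder from the text; substitute your real password before restarting the proxy.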
Before configuring the storage nodes (object nodes), go back to the host box.
swift does not use RAID; it uses individual disks formatted with XFS. As in the cinder
setup, we first attach the disks to the storage nodes.
[root@localhost ~]# qemu-img create -f qcow2 /home/obj1disk1.qcow2 100G
Formatting '/home/obj1disk1.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj1disk2.qcow2 100G
Formatting '/home/obj1disk2.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj2disk1.qcow2 100G
Formatting '/home/obj2disk1.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj2disk2.qcow2 100G
Formatting '/home/obj2disk2.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]#
[root@localhost ~]# chown qemu:qemu /home/obj1disk1.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj1disk2.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj2disk1.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj2disk2.qcow2
[root@localhost ~]# virsh pool-refresh home
Pool home refreshed
[root@localhost ~]# virsh vol-list home
Name Path
------------------------------------------------------------------------------
blockdisk.qcow2 /home/blockdisk.qcow2
...(omitted)...
obj1disk1.qcow2 /home/obj1disk1.qcow2
obj1disk2.qcow2 /home/obj1disk2.qcow2
obj2disk1.qcow2 /home/obj2disk1.qcow2
obj2disk2.qcow2 /home/obj2disk2.qcow2
...(omitted)...
[root@localhost ~]# vi /etc/libvirt/qemu/obj1disk1.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/obj1disk1.qcow2'/>
<target dev='vdb'/>
</disk>
[root@localhost ~]# virsh attach-device --config object1 /etc/libvirt/qemu/obj1disk1.xml --live
Device attached successfully
[root@localhost ~]# vi /etc/libvirt/qemu/obj1disk2.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/obj1disk2.qcow2'/>
<target dev='vdc'/>
</disk>
[root@localhost ~]# virsh attach-device --config object1 /etc/libvirt/qemu/obj1disk2.xml --live
Device attached successfully
[root@localhost ~]# vi /etc/libvirt/qemu/obj2disk1.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/obj2disk1.qcow2'/>
<target dev='vdb'/>
</disk>
[root@localhost ~]# virsh attach-device --config object2 /etc/libvirt/qemu/obj2disk1.xml --live
Device attached successfully
[root@localhost ~]# vi /etc/libvirt/qemu/obj2disk2.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/obj2disk2.qcow2'/>
<target dev='vdc'/>
</disk>
[root@localhost ~]# virsh attach-device --config object2 /etc/libvirt/qemu/obj2disk2.xml --live
Device attached successfully
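The four disk XML snippets above differ only in file name and target device, so they can be generated with one loop. This sketch writes into a temporary directory so it can be tried anywhere; on the real host you would write to /etc/libvirt/qemu and then run the matching virsh attach-device commands as shown above.

```shell
# Generate the four attach-device XML files in one pass.
# DIR is temporary in this sketch; use /etc/libvirt/qemu on the real host.
DIR=$(mktemp -d)
for spec in obj1disk1:vdb obj1disk2:vdc obj2disk1:vdb obj2disk2:vdc; do
    name=${spec%%:*}      # e.g. obj1disk1
    dev=${spec##*:}       # e.g. vdb
    cat > "$DIR/$name.xml" <<EOF
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/$name.qcow2'/>
  <target dev='$dev'/>
</disk>
EOF
done
ls "$DIR"
```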
Move to the object1 node.
The steps below are shown on object1 only, but run the same steps on object2 as well.
First check the disks.
Note that on physical hardware with physical disks, or when the host attaches virtual disks on a SCSI or SATA bus, the devices appear as /dev/sd?.
In this environment the virtual disks are attached on the virtio bus, so the device names come out as /dev/vd?.
[root@object1 ~]# fdisk -l
Disk /dev/vda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007868c
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 1026047 512000 83 Linux
/dev/vda2 1026048 209715199 104344576 8e Linux LVM
...(omitted)...
Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/vdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Before the main package installation, install the required utilities. XFS is the default filesystem on CentOS/RHEL 7.x, so xfsprogs may already be installed.
[root@object1 ~]# yum install -y xfsprogs rsync
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Package xfsprogs-3.2.2-2.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.0.9-17.el7 will be installed
--> Finished Dependency Resolution
Installed:
rsync.x86_64 0:3.0.9-17.el7
Complete!
[root@object1 ~]#
Format the two disks with XFS.
[root@object1 ~]# mkfs.xfs /dev/vdb
meta-data=/dev/vdb isize=256 agcount=4, agsize=6553600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@object1 ~]# mkfs.xfs /dev/vdc
meta-data=/dev/vdc isize=256 agcount=4, agsize=6553600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Create the mount points.
[root@object1 ~]# mkdir -p /srv/node/vdb
[root@object1 ~]# mkdir -p /srv/node/vdc
Add the mount entries to /etc/fstab and mount the filesystems.
[root@object1 ~]# vi /etc/fstab
...(omitted)...
/dev/vdb /srv/node/vdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/vdc /srv/node/vdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@object1 ~]# mount /srv/node/vdb
[root@object1 ~]# mount /srv/node/vdc
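The two fstab entries follow one pattern, so they can be generated instead of typed. A small sketch that only prints the lines; append the output to /etc/fstab once it looks right, then mount each mount point as above.

```shell
# Print the fstab entries for the swift data disks (vdb and vdc); this does
# not touch /etc/fstab itself, it only emits the lines for review.
LINES=$(for dev in vdb vdc; do
    printf '/dev/%s /srv/node/%s xfs noatime,nodiratime,nobarrier,logbufs=8 0 2\n' "$dev" "$dev"
done)
echo "$LINES"
```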
Add the following to /etc/rsyncd.conf.
The address entry is the storage node's own IP: 10.0.0.51 for object1, 10.0.0.52 for object2.
[root@object1 ~]# vi /etc/rsyncd.conf
...
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.51
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
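The three rsync module stanzas are identical except for their names, so they too can be generated. This sketch writes to a temporary file; on the storage node itself you would append to /etc/rsyncd.conf (after the uid/gid/address header shown above).

```shell
# Generate the account/container/object rsync module stanzas. OUT is a temp
# file in this sketch; append to /etc/rsyncd.conf on the real storage node.
OUT=$(mktemp)
for mod in account container object; do
    cat >> "$OUT" <<EOF
[$mod]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/$mod.lock
EOF
done
grep '^\[' "$OUT"    # list the module names that were written
```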
Start rsyncd and enable it at boot.
[root@object1 ~]# systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
[root@object1 ~]# systemctl start rsyncd.service
Install the swift packages.
[root@object1 ~]# yum install -y openstack-swift-account openstack-swift-container openstack-swift-object
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package openstack-swift-account.noarch 0:2.5.0-1.el7 will be installed
...(omitted)...
python-tempita.noarch 0:0.5.1-8.el7
python2-eventlet.noarch 0:0.17.4-4.el7
python2-greenlet.x86_64 0:0.4.9-1.el7
Complete!
[root@object1 ~]#
Fetch the account, container, and object service configuration files from the Object Storage source repository.
[root@object1 ~]# curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6147  100  6147    0     0   6194      0 --:--:-- --:--:-- --:--:--  6190
[root@object1 ~]# curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6976  100  6976    0     0   7501      0 --:--:-- --:--:-- --:--:--  7501
[root@object1 ~]# curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11819  100 11819    0     0  13514      0 --:--:-- --:--:-- --:--:-- 13507
Edit /etc/swift/account-server.conf as follows.
In the [DEFAULT] section, set the bind IP address, bind port, user, configuration directory, and mount point directory; bind_ip is the storage node's IP.
In the [pipeline:main] section, enable the appropriate modules.
In the [filter:recon] section, set the recon (meters) cache directory.
[root@object1 ~]# vi /etc/swift/account-server.conf
[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...
[pipeline:main]
pipeline = healthcheck recon account-server
...
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift
Give /etc/swift/container-server.conf the same settings as /etc/swift/account-server.conf; only bind_port changes, to 6001.
[root@object1 ~]# vi /etc/swift/container-server.conf
[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...
[pipeline:main]
pipeline = healthcheck recon container-server
...
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift
Edit /etc/swift/object-server.conf as well; the settings are again almost identical, except that bind_port is 6000 and a recon_lock_path setting is added.
[root@object1 ~]# vi /etc/swift/object-server.conf
[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...
[pipeline:main]
pipeline = healthcheck recon object-server
...
[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
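Since the three server configs share everything except the port (account 6002, container 6001, object 6000), their [DEFAULT] sections can be produced from one template. A sketch writing to a temporary directory; the real files live in /etc/swift, and the bind_ip shown is object1's (use 10.0.0.52 on object2).

```shell
# Emit the shared [DEFAULT] section for each swift server config with the
# right port. DIR is temporary here; the real files are /etc/swift/*-server.conf.
DIR=$(mktemp -d)
for pair in account:6002 container:6001 object:6000; do
    svc=${pair%%:*}
    port=${pair##*:}
    cat > "$DIR/$svc-server.conf" <<EOF
[DEFAULT]
bind_ip = 10.0.0.51
bind_port = $port
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
EOF
done
grep bind_port "$DIR"/*.conf
```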
Set the ownership of the swift mount points.
[root@object1 ~]# chown -R swift:swift /srv/node
Create the recon directory and set its ownership.
[root@object1 ~]# mkdir -p /var/cache/swift
[root@object1 ~]# chown -R root:swift /var/cache/swift
Return to the controller node.
Before starting the object service, rings must be built for the account, container, and object services installed on the storage nodes.
The ring builder creates the configuration files each node uses to map out the storage architecture.
First build the account ring.
The account server uses the account ring to maintain its lists of containers.
Change to the /etc/swift directory and create the account.builder file.
[root@controller swift]# swift-ring-builder account.builder create 10 3 1
Add each node to the ring with the following command. STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS is
the IP of each object node, DEVICE_NAME is the name of the disk device on that node, and DEVICE_WEIGHT is 100.
# swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6002 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Register each of the four disks configured on the storage nodes.
[root@controller swift]# swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6002R10.0.0.51:6002/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6002 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6002R10.0.0.51:6002/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder account.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6002 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6002R10.0.0.52:6002/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder account.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6002 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6002R10.0.0.52:6002/vdc_"" with 100.0 weight got id 3
Verify that the devices were added to the ring.
[root@controller swift]# swift-ring-builder account.builder
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6002 10.0.0.51 6002 vdb 100.00 0 -100.00
1 1 2 10.0.0.51 6002 10.0.0.51 6002 vdc 100.00 0 -100.00
2 1 3 10.0.0.52 6002 10.0.0.52 6002 vdb 100.00 0 -100.00
3 1 4 10.0.0.52 6002 10.0.0.52 6002 vdc 100.00 0 -100.00
Rebalance the ring.
[root@controller swift]# swift-ring-builder account.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
The container and object rings go through the same steps. If you adapt the account commands, double-check the port and the builder file name.
[root@controller swift]# swift-ring-builder container.builder create 10 3 1
[root@controller swift]#
[root@controller swift]# swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6001R10.0.0.51:6001/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6001 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6001R10.0.0.51:6001/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder container.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6001 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6001R10.0.0.52:6001/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder container.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6001 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6001R10.0.0.52:6001/vdc_"" with 100.0 weight got id 3
[root@controller swift]#
[root@controller swift]# swift-ring-builder container.builder
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6001 10.0.0.51 6001 vdb 100.00 0 -100.00
1 1 2 10.0.0.51 6001 10.0.0.51 6001 vdc 100.00 0 -100.00
2 1 3 10.0.0.52 6001 10.0.0.52 6001 vdb 100.00 0 -100.00
3 1 4 10.0.0.52 6001 10.0.0.52 6001 vdc 100.00 0 -100.00
[root@controller swift]#
[root@controller swift]# swift-ring-builder container.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]#
[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder create 10 3 1
[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6000R10.0.0.51:6000/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6000 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6000R10.0.0.51:6000/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder object.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6000 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6000R10.0.0.52:6000/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder object.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6000 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6000R10.0.0.52:6000/vdc_"" with 100.0 weight got id 3
[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6000 10.0.0.51 6000 vdb 100.00 0 -100.00
1 1 2 10.0.0.51 6000 10.0.0.51 6000 vdc 100.00 0 -100.00
2 1 3 10.0.0.52 6000 10.0.0.52 6000 vdb 100.00 0 -100.00
3 1 4 10.0.0.52 6000 10.0.0.52 6000 vdc 100.00 0 -100.00
[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
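All three rings are built with the same create/add/rebalance pattern, varying only the builder file and port. As a dry-run sketch, the full command list can be generated and reviewed before running it in /etc/swift (remove the echo to execute for real):

```shell
# Print every ring-builder command for the account, container, and object
# rings (dry run). Each service pairs with its storage-node port.
CMDS=$(
for pair in account:6002 container:6001 object:6000; do
    svc=${pair%%:*}; port=${pair##*:}
    echo swift-ring-builder "$svc.builder" create 10 3 1
    for dev in 1:10.0.0.51:vdb 2:10.0.0.51:vdc 3:10.0.0.52:vdb 4:10.0.0.52:vdc; do
        zone=${dev%%:*}; rest=${dev#*:}; ip=${rest%%:*}; name=${rest##*:}
        echo swift-ring-builder "$svc.builder" add --region 1 --zone "$zone" \
            --ip "$ip" --port "$port" --device "$name" --weight 100
    done
    echo swift-ring-builder "$svc.builder" rebalance
done)
echo "$CMDS"
```

The zone/IP/device triples mirror the four disks added by hand above; with more storage nodes you would only extend the inner list.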
Verify that the ring configuration files account.ring.gz, container.ring.gz, and object.ring.gz were created,
then distribute them to /etc/swift (the swift_dir set in the configuration files above) on every storage node and on any additional proxy servers, if you added any.
[root@controller swift]# ls
account.builder    container-reconciler.conf  object.builder       proxy-server.conf
account.ring.gz    container.ring.gz          object-expirer.conf  swift.conf
backups            container-server           object.ring.gz
container.builder  container-server.conf      proxy-server
[root@controller swift]# scp *.gz object1:/etc/swift
The authenticity of host 'object1 (10.0.0.51)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object1,10.0.0.51' (ECDSA) to the list of known hosts.
account.ring.gz 100% 1441 1.4KB/s 00:00
container.ring.gz 100% 1451 1.4KB/s 00:00
object.ring.gz 100% 1428 1.4KB/s 00:00
[root@controller swift]# scp *.gz object2:/etc/swift
The authenticity of host 'object2 (10.0.0.52)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object2,10.0.0.52' (ECDSA) to the list of known hosts.
account.ring.gz 100% 1441 1.4KB/s 00:00
container.ring.gz 100% 1451 1.4KB/s 00:00
object.ring.gz 100% 1428 1.4KB/s 00:00
[root@controller swift]#
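With more storage nodes, the distribution step is easier as a loop. A dry-run sketch (the node names are the ones used in this guide; remove the echo to actually copy):

```shell
# Print the ring-distribution commands for every storage node (dry run);
# remove the "echo" to perform the copies over scp.
CMDS=$(for node in object1 object2; do
    echo scp account.ring.gz container.ring.gz object.ring.gz "$node:/etc/swift"
done)
echo "$CMDS"
```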
Fetch /etc/swift/swift.conf from the Object Storage source repository and edit it as follows.
In the [swift-hash] section, configure the hash path values. Replace HASH_PATH_SUFFIX and HASH_PATH_PREFIX with unique values;
generating them with "openssl rand -hex 10" is recommended. Keep these values secret, and do not change or delete them.
In the [storage-policy:0] section, configure the default storage policy.
[root@controller ~]# curl -o /etc/swift/swift.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7200  100  7200    0     0   5204      0  0:00:01  0:00:01 --:--:--  5206
[root@controller ~]# vi /etc/swift/swift.conf
[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX
...
[storage-policy:0]
...
name = Policy-0
default = yes
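The two hash values can be generated and substituted in one step, using openssl as the text suggests. This sketch patches a temporary copy so it is safe to run; on the real controller you would run the sed against /etc/swift/swift.conf.

```shell
# Generate random hash path values and patch them into a (temporary) copy of
# swift.conf; point CONF at /etc/swift/swift.conf on the real controller.
CONF=$(mktemp)
printf '[swift-hash]\nswift_hash_path_suffix = HASH_PATH_SUFFIX\nswift_hash_path_prefix = HASH_PATH_PREFIX\n' > "$CONF"
suffix=$(openssl rand -hex 10)
prefix=$(openssl rand -hex 10)
sed -i "s/HASH_PATH_SUFFIX/$suffix/; s/HASH_PATH_PREFIX/$prefix/" "$CONF"
cat "$CONF"
```

Record the generated values somewhere safe: every node in the cluster must use the same ones, and they can never change afterwards.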
Distribute the /etc/swift/swift.conf file to all swift-related nodes.
[root@controller ~]# scp /etc/swift/swift.conf object1:/etc/swift/swift.conf
swift.conf 100% 7224 7.1KB/s 00:00
[root@controller ~]# scp /etc/swift/swift.conf object2:/etc/swift/swift.conf
swift.conf 100% 7224 7.1KB/s 00:00
On every swift-related node, fix the ownership of /etc/swift and its contents.
[root@controller ~]# chown -R root:swift /etc/swift
On the controller, and on any other server running the proxy service, start the proxy service and enable it at boot.
[root@controller ~]# systemctl enable openstack-swift-proxy.service memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-proxy.service to /usr/lib/systemd/system/openstack-swift-proxy.service.
[root@controller ~]# systemctl start openstack-swift-proxy.service memcached.service
On the storage nodes, start the related services and enable them at boot (run the same commands on object1 and object2).
[root@object1 ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account.service to /usr/lib/systemd/system/openstack-swift-account.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-auditor.service to /usr/lib/systemd/system/openstack-swift-account-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-reaper.service to /usr/lib/systemd/system/openstack-swift-account-reaper.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-replicator.service to /usr/lib/systemd/system/openstack-swift-account-replicator.service.
[root@object1 ~]#
[root@object1 ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]#
[root@object1 ~]# systemctl enable openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container.service to /usr/lib/systemd/system/openstack-swift-container.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-auditor.service to /usr/lib/systemd/system/openstack-swift-container-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-replicator.service to /usr/lib/systemd/system/openstack-swift-container-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-updater.service to /usr/lib/systemd/system/openstack-swift-container-updater.service.
[root@object1 ~]#
[root@object1 ~]# systemctl start openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
[root@object1 ~]#
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object.service to /usr/lib/systemd/system/openstack-swift-object.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-auditor.service to /usr/lib/systemd/system/openstack-swift-object-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-replicator.service to /usr/lib/systemd/system/openstack-swift-object-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-updater.service to /usr/lib/systemd/system/openstack-swift-object-updater.service.
[root@object1 ~]#
[root@object1 ~]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service
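The twelve per-node services follow one naming pattern, so the enable/start commands can be generated as well. A dry-run sketch (remove the echo to execute on the storage node):

```shell
# Print systemctl enable/start commands for all swift storage-node services
# (dry run); remove the "echo" to run them for real.
CMDS=$(
for svc in account account-auditor account-reaper account-replicator \
           container container-auditor container-replicator container-updater \
           object object-auditor object-replicator object-updater; do
    echo systemctl enable "openstack-swift-$svc.service"
    echo systemctl start "openstack-swift-$svc.service"
done)
echo "$CMDS"
```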
Verify that the object service is working.
In the environment scripts, configure the Object Storage client to use Identity API version 3.
[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh
export OS_AUTH_VERSION=3
Source the demo user script.
[root@controller ~]# source demo-openrc.sh
Check the service status.
[root@controller ~]# swift stat
Account: AUTH_94f9c25aaa4246b0915afacca2d65c22
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1457586763.62553
X-Timestamp: 1457586763.62553
X-Trans-Id: tx26363eefacfc4ee68bb3b-0056e1024b
Content-Type: text/plain; charset=utf-8
Upload a test file; here we use the image downloaded during the glance installation.
[root@controller ~]# swift upload container1 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img
List the containers and check the status again; the counters have changed after the upload.
[root@controller ~]# swift list
container1
[root@controller ~]# swift stat
                        Account: AUTH_94f9c25aaa4246b0915afacca2d65c22
                     Containers: 1
                        Objects: 2
                          Bytes: 26575872
Containers in policy "policy-0": 1
   Objects in policy "policy-0": 2
     Bytes in policy "policy-0": 26575872
    X-Account-Project-Domain-Id: default
                    X-Timestamp: 1457586773.61689
                     X-Trans-Id: tx2126c29c9f3b400cb6539-0056e103b0
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes
Download the file that was uploaded.
[root@controller ~]# swift download container1 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img [auth 0.366s, headers 0.429s, total 0.505s, 95.682 MB/s]