The Telemetry service is, in essence, a monitoring service; its code name is ceilometer.

Installation starts on the controller node.

 

As with the other services, we start with the database, but unlike the others, ceilometer uses MongoDB, a NoSQL database.
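
If you want to confirm that MongoDB is actually up before creating the database, a quick check such as the following can be used (this assumes the mongodb-server package was installed and configured on the controller in an earlier step, with the service name mongod):

[root@controller ~]# systemctl status mongod
[root@controller ~]# mongo --host controller --eval 'print(db.version())'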

Create the ceilometer database. Replace CEILOMETER_DBPASS with a password of your choice.

[root@controller ~]# mongo --host controller --eval '
   db = db.getSiblingDB("ceilometer");
   db.createUser({user: "ceilometer",
   pwd: "CEILOMETER_DBPASS",

   roles: [ "readWrite", "dbAdmin" ]})'
MongoDB shell version: 2.6.11
connecting to: controller:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
[root@controller ~]#

 

Create a ceilometer user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt ceilometer
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 594e06a2424b46fa848273811be23de2 |
| name      | ceilometer                       |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user ceilometer admin
[root@controller ~]#

 

Create the ceilometer service.

[root@controller ~]# openstack service create --name ceilometer \
   --description "Telemetry" metering
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Telemetry                        |
| enabled     | True                             |
| id          | 2ff3fef644e0427187b6d0799490c193 |
| name        | ceilometer                       |
| type        | metering                         |
+-------------+----------------------------------+

 

Create the API endpoints for the ceilometer service.

[root@controller ~]# openstack endpoint create --region RegionOne metering public http://controller:8777
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 9ae278fb7ff74faf989cc8388cb0b827 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2ff3fef644e0427187b6d0799490c193 |
| service_name | ceilometer                       |
| service_type | metering                         |
| url          | http://controller:8777           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne metering internal http://controller:8777
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2ed1d6a7dfa44580bb4738578cb12862 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2ff3fef644e0427187b6d0799490c193 |
| service_name | ceilometer                       |
| service_type | metering                         |
| url          | http://controller:8777           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne metering admin http://controller:8777
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 22b448eeb52241dab95199b97e274e58 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2ff3fef644e0427187b6d0799490c193 |
| service_name | ceilometer                       |
| service_type | metering                         |
| url          | http://controller:8777           |
+--------------+----------------------------------+
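
At this point the registered service and endpoints can be double-checked, for example:

[root@controller ~]# openstack service list
[root@controller ~]# openstack endpoint list | grep metering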

 

 

Install the packages.

[root@controller ~]# yum install -y openstack-ceilometer-api \
   openstack-ceilometer-collector openstack-ceilometer-notification \
   openstack-ceilometer-central openstack-ceilometer-alarm \
   python-ceilometerclient
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: www.ftp.ne.jp
Package python-ceilometerclient-1.5.0-1.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-ceilometer-alarm.noarch 1:5.0.2-1.el7 will be installed

...(omitted)...

  unbound-libs.x86_64 0:1.4.20-26.el7             

  yajl.x86_64 0:2.0.4-4.el7

Complete!
[root@controller ~]#

 

 

Edit /etc/ceilometer/ceilometer.conf as follows.
In the [database] section, configure database access. Replace CEILOMETER_DBPASS with the password you chose.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Replace CEILOMETER_PASS with the password you chose.
In the [service_credentials] section, configure the service credentials. Replace CEILOMETER_PASS with the password you chose.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@controller ~]# vi /etc/ceilometer/ceilometer.conf

[DEFAULT]
...
rpc_backend = rabbit
...
auth_strategy = keystone

...

 

 

[database]
...
connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer

...

 

 

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

...

 



[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS

...

 


[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne
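
Instead of editing the file by hand with vi, the same keys can also be set with openstack-config from the openstack-utils package (an optional alternative; this assumes that package is installed), for example:

[root@controller ~]# openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer
[root@controller ~]# openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit
[root@controller ~]# openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS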

 

Enable and start the services.

[root@controller ~]# systemctl enable openstack-ceilometer-api.service \
   openstack-ceilometer-notification.service \
   openstack-ceilometer-central.service \
   openstack-ceilometer-collector.service \
   openstack-ceilometer-alarm-evaluator.service \
   openstack-ceilometer-alarm-notifier.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-api.service to /usr/lib/systemd/system/openstack-ceilometer-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-notification.service to /usr/lib/systemd/system/openstack-ceilometer-notification.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-central.service to /usr/lib/systemd/system/openstack-ceilometer-central.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-collector.service to /usr/lib/systemd/system/openstack-ceilometer-collector.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-alarm-evaluator.service to /usr/lib/systemd/system/openstack-ceilometer-alarm-evaluator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-alarm-notifier.service to /usr/lib/systemd/system/openstack-ceilometer-alarm-notifier.service.
[root@controller ~]# systemctl start openstack-ceilometer-api.service \
   openstack-ceilometer-notification.service \
   openstack-ceilometer-central.service \
   openstack-ceilometer-collector.service \
   openstack-ceilometer-alarm-evaluator.service \
   openstack-ceilometer-alarm-notifier.service

 

 

 

To integrate with the Image service, edit the glance configuration files as well.

Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf as follows.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure notifications and RabbitMQ access. Replace RABBIT_PASS with the password you chose.

[root@controller ~]# vi /etc/glance/glance-api.conf
[DEFAULT]

...
notification_driver = messagingv2
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

 

After applying the same settings to both files, restart the Image service.
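
Since the same keys go into both files, a small loop with openstack-config (openstack-utils package, assuming it is available) can save some typing; a sketch:

[root@controller ~]# for f in /etc/glance/glance-api.conf /etc/glance/glance-registry.conf; do
   openstack-config --set $f DEFAULT notification_driver messagingv2
   openstack-config --set $f DEFAULT rpc_backend rabbit
   openstack-config --set $f oslo_messaging_rabbit rabbit_host controller
   openstack-config --set $f oslo_messaging_rabbit rabbit_userid openstack
   openstack-config --set $f oslo_messaging_rabbit rabbit_password RABBIT_PASS
done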

[root@controller ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service

 

Next, to integrate with the Compute service, move to each compute node.

First, install the packages.

[root@compute1 ~]# yum install -y openstack-ceilometer-compute python-ceilometerclient python-pecan
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Package python2-pecan-1.0.2-2.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-ceilometer-compute.noarch 1:5.0.2-1.el7 will be installed

...(omitted)...

  python-tooz.noarch 0:1.24.0-1.el7
  python-werkzeug.noarch 0:0.9.1-2.el7                  

  python2-jsonpath-rw-ext.noarch 0:0.1.7-1.1.el7

Complete!
[root@compute1 ~]#

 

Edit /etc/ceilometer/ceilometer.conf as follows.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Replace CEILOMETER_PASS with the password you chose.
In the [service_credentials] section, configure the service credentials. Replace CEILOMETER_PASS with the password you chose.
(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@compute1 ~]# vi /etc/ceilometer/ceilometer.conf

[DEFAULT]
...
rpc_backend = rabbit
...
auth_strategy = keystone
...
verbose = True

...

 

 

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
...


[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
...


[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne 

 

 

 

Edit the [DEFAULT] section of /etc/nova/nova.conf as follows.

[root@compute1 ~]# vi /etc/nova/nova.conf

[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2

 

Enable and start the ceilometer compute agent, and restart the Compute service as well since its configuration changed.

[root@compute1 ~]# systemctl enable openstack-ceilometer-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-ceilometer-compute.service to /usr/lib/systemd/system/openstack-ceilometer-compute.service.
[root@compute1 ~]# systemctl start openstack-ceilometer-compute.service
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
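
To confirm that the compute agent came up cleanly, its status and the last few log lines can be checked, for example:

[root@compute1 ~]# systemctl status openstack-ceilometer-compute.service
[root@compute1 ~]# journalctl -u openstack-ceilometer-compute.service -n 20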

 

 

 

Next, to integrate with the Block Storage service, move to the block1 node.

First, edit the configuration file.

Configure notifications in the [DEFAULT] section of /etc/cinder/cinder.conf as follows.

[root@block1 ~]# vi /etc/cinder/cinder.conf

[DEFAULT]
...

notification_driver = messagingv2

 

Restart the affected services on the controller node and on the block1 node.

[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

 

[root@block1 ~]# systemctl restart openstack-cinder-volume.service

 

 

Next, configure integration with the Object Storage service.

The Telemetry service uses the ResellerAdmin role to access the Object Storage service.

On the controller node, create the ResellerAdmin role and add the ceilometer user to it.

[root@controller ~]# openstack role create ResellerAdmin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | d77ca941e0324c6c98d69724e8bffbaf |
| name  | ResellerAdmin                    |
+-------+----------------------------------+

[root@controller ~]# openstack role add --project service --user ceilometer ResellerAdmin

 

The next steps are performed on the controller node, or on whichever server runs the object proxy service.

Edit /etc/swift/proxy-server.conf as follows.

Add the ResellerAdmin role in the [filter:keystoneauth] section.
Add ceilometer to the [pipeline:main] section.
Configure notifications in the [filter:ceilometer] section. Replace RABBIT_PASS with the password you chose.

[root@controller ~]# vi /etc/swift/proxy-server.conf

[filter:keystoneauth]
...
operator_roles = admin, user, ResellerAdmin
...


[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server
...


[filter:ceilometer]
paste.filter_factory = ceilometermiddleware.swift:filter_factory
...
control_exchange = swift
url = rabbit://openstack:RABBIT_PASS@controller:5672/
driver = messagingv2
topic = notifications
log_level = WARN

 

Restart the swift proxy service.

[root@controller ~]# systemctl restart openstack-swift-proxy.service

 

 

 

Verify that the installation works.

[root@controller ~]# ceilometer meter-list
+-------------+-------+-------+--------------------------------------+---------+----------------------------------+
| Name        | Type  | Unit  | Resource ID                          | User ID | Project ID                       |
+-------------+-------+-------+--------------------------------------+---------+----------------------------------+
| image       | gauge | image | 49338c63-033c-40a3-abdd-d6410799de24 | None    | 94f9c25aaa4246b0915afacca2d65c22 |
| image.size  | gauge | B     | 49338c63-033c-40a3-abdd-d6410799de24 | None    | 94f9c25aaa4246b0915afacca2d65c22 |
+-------------+-------+-------+--------------------------------------+---------+----------------------------------+

 

 

Download the CirrOS image from the Image service and check the meter list again to confirm that the download was metered.

[root@controller ~]# IMAGE_ID=$(glance image-list | grep 'cirros' | awk '{ print $2 }')
[root@controller ~]# glance image-download $IMAGE_ID > /tmp/cirros.img
[root@controller ~]# ceilometer meter-list
+----------------+-------+-------+--------------------------------------+----------------------------------+----------------------------------+
| Name           | Type  | Unit  | Resource ID                          | User ID                          | Project ID                       |
+----------------+-------+-------+--------------------------------------+----------------------------------+----------------------------------+
| image          | gauge | image | 49338c63-033c-40a3-abdd-d6410799de24 | 2238ec4daed3436b8cc97491518bd6cf | 94f9c25aaa4246b0915afacca2d65c22 |
| image.download | delta | B     | 49338c63-033c-40a3-abdd-d6410799de24 | 2238ec4daed3436b8cc97491518bd6cf | 94f9c25aaa4246b0915afacca2d65c22 |
| image.serve    | delta | B     | 49338c63-033c-40a3-abdd-d6410799de24 | 2238ec4daed3436b8cc97491518bd6cf | 94f9c25aaa4246b0915afacca2d65c22 |
| image.size     | gauge | B     | 49338c63-033c-40a3-abdd-d6410799de24 | 2238ec4daed3436b8cc97491518bd6cf | 94f9c25aaa4246b0915afacca2d65c22 |
+----------------+-------+-------+--------------------------------------+----------------------------------+----------------------------------+

 

 

Retrieve usage statistics from the image.download meter; the -p 60 option groups the statistics into 60-second periods.

[root@controller ~]# ceilometer statistics -m image.download -p 60
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Max        | Min        | Avg        | Sum        | Count | Duration | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 60     | 2016-03-12T06:01:39.451000 | 2016-03-12T06:02:39.451000 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1     | 0.0      | 2016-03-12T06:01:59.874000 | 2016-03-12T06:01:59.874000 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
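
If needed, the statistics can be narrowed with a query filter; for example, limiting the meter to the image resource seen above (the resource ID is just the one from the earlier output, so substitute your own):

[root@controller ~]# ceilometer statistics -m image.download -p 60 -q resource_id=49338c63-033c-40a3-abdd-d6410799de24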

 

Delete the temporarily downloaded CirrOS image.

[root@controller ~]# rm -f /tmp/cirros.img

The Orchestration service automates instance creation so that you do not have to enter every instance setting by hand; its code name is heat.

 

Installation is done on the controller node; as usual, start by creating the database.

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 26
Server version: 5.5.44-MariaDB MariaDB Server

 

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

MariaDB [(none)]> CREATE DATABASE heat;
Query OK, 1 row affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
   IDENTIFIED BY 'HEAT_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'controller'\
   IDENTIFIED BY 'HEAT_DBPASS';

Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
   IDENTIFIED BY 'HEAT_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> quit
Bye
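
Optionally, the grants can be confirmed before moving on by logging in as the heat user over the network (HEAT_DBPASS being whatever password you chose above):

[root@controller ~]# mysql -u heat -pHEAT_DBPASS -h controller -e "SHOW DATABASES;"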

 

Create a heat user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt heat
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 970d758f242944be9eb34477786acfc5 |
| name      | heat                             |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user heat admin

 

Create the services for heat and heat-cfn.

[root@controller ~]# openstack service create --name heat \
   --description "Orchestration" orchestration
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Orchestration                    |
| enabled     | True                             |
| id          | 7b3ac90bc9524fab9367dff629b2522b |
| name        | heat                             |
| type        | orchestration                    |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name heat-cfn \
   --description "Orchestration"  cloudformation
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Orchestration                    |
| enabled     | True                             |
| id          | eab5fc4507644a37a0b79b6bce433470 |
| name        | heat-cfn                         |
| type        | cloudformation                   |
+-------------+----------------------------------+

 

 

Create public, internal, and admin API endpoints for each of the two services.

[root@controller ~]# openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | bf13be32e29246dd9c5299f4ee4352e9        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 7b3ac90bc9524fab9367dff629b2522b        |
| service_name | heat                                    |
| service_type | orchestration                           |
| url          | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 14b6480a837647779df5e4d5235e8b11        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 7b3ac90bc9524fab9367dff629b2522b        |
| service_name | heat                                    |
| service_type | orchestration                           |
| url          | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 76518471bb614f71874b11dd275a719e        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 7b3ac90bc9524fab9367dff629b2522b        |
| service_name | heat                                    |
| service_type | orchestration                           |
| url          | http://controller:8004/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]#
[root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | eaee96fd7fb34e9d919849f8cee3db49 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eab5fc4507644a37a0b79b6bce433470 |
| service_name | heat-cfn                         |
| service_type | cloudformation                   |
| url          | http://controller:8000/v1        |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a40350b0229f4680b259bd711813c9ef |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eab5fc4507644a37a0b79b6bce433470 |
| service_name | heat-cfn                         |
| service_type | cloudformation                   |
| url          | http://controller:8000/v1        |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 72fdae5aec2f464f858ac1cff94fc146 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eab5fc4507644a37a0b79b6bce433470 |
| service_name | heat-cfn                         |
| service_type | cloudformation                   |
| url          | http://controller:8000/v1        |
+--------------+----------------------------------+

 

The Orchestration service needs some additional Identity setup for stack management.

First, create a separate domain named heat.

[root@controller ~]# openstack domain create --description "Stack projects and users" heat
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Stack projects and users         |
| enabled     | True                             |
| id          | df7ac09d39e54d6198acd3fd213ea43d |
| name        | heat                             |
+-------------+----------------------------------+ 

 

Create heat_domain_admin, the administrator of the heat domain, and grant it the admin role.

[root@controller ~]# openstack user create --domain heat --password-prompt heat_domain_admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | df7ac09d39e54d6198acd3fd213ea43d |
| enabled   | True                             |
| id        | e656c268bea8414a9c76574762c6ffa0 |
| name      | heat_domain_admin                |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --domain heat --user heat_domain_admin admin

 

Create the heat_stack_owner role and grant it to the existing demo account.

[root@controller ~]# openstack role create heat_stack_owner
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | f09abe94c4b64b20bd49ff9b45a61cf5 |
| name  | heat_stack_owner                 |
+-------+----------------------------------+

[root@controller ~]# openstack role add --project demo --user demo heat_stack_owner

 

Create the heat_stack_user role. The Orchestration service automatically assigns this role to users it creates during stack deployment.

This role has restricted API access by default, so to avoid conflicts do not add it to users that already have the heat_stack_owner role.

[root@controller ~]# openstack role create heat_stack_user
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 84768c6edb9c4689b0314f5f7785ff0e |
| name  | heat_stack_user                  |
+-------+----------------------------------+

 

 

Now install the packages.

[root@controller ~]# yum install -y openstack-heat-api openstack-heat-api-cfn openstack-heat-engine python-heatclient
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: www.ftp.ne.jp

Package python-heatclient-0.8.0-1.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check

...(omitted)...

Dependency Installed:
  openstack-heat-common.noarch 1:5.0.0-1.el7                                                                             

  python-oslo-cache.noarch 0:0.7.0-1.el7

Complete!
[root@controller ~]#

 

Edit /etc/heat/heat.conf as follows.

In the [database] section, configure database access. Replace HEAT_DBPASS with the password you chose.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [keystone_authtoken], [trustee], [clients_keystone], and [ec2authtoken] sections, configure Identity service access. Replace HEAT_PASS with the password you chose.
In the [DEFAULT] section, configure the metadata and wait condition URLs.
In the [DEFAULT] section, configure the stack domain and administrative credentials. Replace HEAT_DOMAIN_PASS with the password of the heat_domain_admin user.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@controller ~]# vi /etc/heat/heat.conf

[database]
...
connection = mysql://heat:HEAT_DBPASS@controller/heat
...


[DEFAULT]
...
rpc_backend = rabbit
...
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
...
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat
...
verbose = True

...


[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
...


[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = heat
password = HEAT_PASS
...


[trustee]
...
auth_plugin = password
auth_url = http://controller:35357
username = heat
password = HEAT_PASS
user_domain_id = default

[clients_keystone]
...
auth_uri = http://controller:5000
...


[ec2authtoken]
...
auth_uri = http://controller:5000

 

Populate the heat service database tables.

[root@controller ~]# su -s /bin/sh -c "heat-manage db_sync" heat
2016-03-11 07:48:26.406 17856 INFO migrate.versioning.api [-] 27 -> 28...
2016-03-11 07:48:27.419 17856 INFO migrate.versioning.api [-] done
2016-03-11 07:48:27.419 17856 INFO migrate.versioning.api [-] 28 -> 29...

...(omitted)...

 

 

 

Enable and start the services.

[root@controller ~]# systemctl enable openstack-heat-api.service \
   openstack-heat-api-cfn.service openstack-heat-engine.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-heat-api.service to /usr/lib/systemd/system/openstack-heat-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-heat-api-cfn.service to /usr/lib/systemd/system/openstack-heat-api-cfn.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-heat-engine.service to /usr/lib/systemd/system/openstack-heat-engine.service.
[root@controller ~]# systemctl start openstack-heat-api.service \
   openstack-heat-api-cfn.service openstack-heat-engine.service

 

Verify that the services were installed correctly.

[root@controller ~]# heat service-list
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname   | binary      | engine_id                            | host       | topic  | updated_at                 | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 4f28997b-9b78-4b0b-95d1-49b85359c630 | controller | engine | 2016-03-10T23:09:17.000000 | up     |
| controller | heat-engine | 547feaa6-921d-4e02-a2f0-0ca11262ea20 | controller | engine | 2016-03-10T23:09:27.000000 | up     |
| controller | heat-engine | 58648238-5102-4b12-9047-579abce72a57 | controller | engine | 2016-03-10T23:09:27.000000 | up     |
| controller | heat-engine | 8d722bae-651b-4ceb-a81e-4c42ca6a5bd5 | controller | engine | 2016-03-10T23:09:17.000000 | up     |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
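
As an optional smoke test, a tiny stack can be created and deleted with the demo credentials (which now carry the heat_stack_owner role). This is only a sketch: it uses an OS::Heat::RandomString resource so that no image, flavor, or network is required, and the template file name and stack name are arbitrary.

[root@controller ~]# vi /tmp/test-stack.yaml

heat_template_version: 2015-10-15
resources:
  random:
    type: OS::Heat::RandomString
    properties:
      length: 8
outputs:
  random_value:
    value: { get_attr: [random, value] }

[root@controller ~]# heat stack-create -f /tmp/test-stack.yaml test-stack
[root@controller ~]# heat stack-list
[root@controller ~]# heat stack-delete test-stack

heat stack-list should show test-stack reach CREATE_COMPLETE before it is deleted.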

 

Object Storage, unlike cinder, does not use a directory hierarchy; it provides each user with units called containers and stores files in those containers. Its code name is swift.

Its components are installed on the controller node and on the object1 and object2 nodes.

 

Start with the controller node.

The swift service does not use an SQL database; it uses SQLite databases distributed across the storage nodes.

Create a swift user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 11d4a91e2c144db1b8c8dc57cc3d5758 |
| name      | swift                            |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user swift admin

 

Create the swift service and its API endpoints.

[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Object Storage         |
| enabled     | True                             |
| id          | 671ddd2bc6d74e309c1007ba84643da3 |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 6cab534b2c284cc7bfa50ae953286c03             |
| interface    | public                                       |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | 671ddd2bc6d74e309c1007ba84643da3             |
| service_name | swift                                        |
| service_type | object-store                                 |
| url          | http://controller:8080/v1/AUTH_%(tenant_id)s |
+--------------+----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | ed143ae6ac584b1087851fe4a9acd9ad             |
| interface    | internal                                     |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | 671ddd2bc6d74e309c1007ba84643da3             |
| service_name | swift                                        |
| service_type | object-store                                 |
| url          | http://controller:8080/v1/AUTH_%(tenant_id)s |
+--------------+----------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5cd990d72b914e7db019a492ef4c988d |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 671ddd2bc6d74e309c1007ba84643da3 |
| service_name | swift                            |
| service_type | object-store                     |
| url          | http://controller:8080/v1        |
+--------------+----------------------------------+ 


 

 

Install the packages. Unlike the other services, swift can run stand-alone; in that case it needs a separate authentication service.

Here, however, we use the existing Identity service, so the proxy is installed on the controller node,

and packages such as keystone and memcached are already present.

 

[root@controller ~]# yum install -y openstack-swift-proxy python-swiftclient \
   python-keystoneclient python-keystonemiddleware \
   memcached
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Package python-swiftclient-2.6.0-1.el7.noarch already installed and latest version
Package 1:python-keystoneclient-1.7.2-1.el7.noarch already installed and latest version
Package python-keystonemiddleware-2.3.1-1.el7.noarch already installed and latest version
Package memcached-1.4.15-9.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-swift-proxy.noarch 0:2.5.0-1.el7 will be installed

...(omitted)...

  python-dns.noarch 0:1.12.0-1.20150617git465785f.el7
  python-pyeclib.x86_64 0:1.0.7-2.el7

Complete!
[root@controller ~]# 

 

Fetch the proxy service configuration file from the Object Storage source repository.

[root@controller ~]# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 29655  100 29655    0     0  20243      0  0:00:01  0:00:01 --:--:-- 20242
[root@controller ~]#

 

 

 

Edit /etc/swift/proxy-server.conf as follows.

In the [DEFAULT] section, configure the bind port, user, and configuration directory.
In the [pipeline:main] section, enable the appropriate modules.
In the [app:proxy-server] section, enable automatic account creation.
In the [filter:keystoneauth] section, configure the roles treated as operators.
In the [filter:authtoken] section, configure Identity service access. Replace SWIFT_PASS with the password you chose. Remove everything else from this section except the lines shown below.
In the [filter:cache] section, configure the location of memcached.

 

[root@controller ~]# vi /etc/swift/proxy-server.conf

[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
...

 


[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
...

 


[app:proxy-server]
use = egg:swift#proxy
...
account_autocreate = true
...

 


[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user
...

 


[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = true
...

 


[filter:cache]
use = egg:swift#memcache
...
memcache_servers = 127.0.0.1:11211 

 

 

 

 

Before configuring the storage (object) nodes, go back to the host box.

swift does not use RAID; it uses individual disks formatted with XFS, so, as we did for cinder,

attach the disks to the storage nodes first.

[root@localhost ~]# qemu-img create -f qcow2 /home/obj1disk1.qcow2 100G
Formatting '/home/obj1disk1.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj1disk2.qcow2 100G
Formatting '/home/obj1disk2.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj2disk1.qcow2 100G
Formatting '/home/obj2disk1.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]# qemu-img create -f qcow2 /home/obj2disk2.qcow2 100G
Formatting '/home/obj2disk2.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

[root@localhost ~]#  

[root@localhost ~]# chown qemu:qemu /home/obj1disk1.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj1disk2.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj2disk1.qcow2
[root@localhost ~]# chown qemu:qemu /home/obj2disk2.qcow2

[root@localhost ~]# virsh pool-refresh home
Pool home refreshed

[root@localhost ~]# virsh vol-list home
 Name                 Path
------------------------------------------------------------------------------

 blockdisk.qcow2      /home/blockdisk.qcow2
...(omitted)...

 obj1disk1.qcow2      /home/obj1disk1.qcow2
 obj1disk2.qcow2      /home/obj1disk2.qcow2
 obj2disk1.qcow2      /home/obj2disk1.qcow2
 obj2disk2.qcow2      /home/obj2disk2.qcow2
...(omitted)...

 

[root@localhost ~]# vi /etc/libvirt/qemu/obj1disk1.xml

<disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/home/obj1disk1.qcow2'/>
   <target dev='vdb'/>
</disk>
[root@localhost ~]# virsh attach-device --config object1 /etc/libvirt/qemu/obj1disk1.xml --live
Device attached successfully

 

[root@localhost ~]# vi /etc/libvirt/qemu/objt1disk2.xml

<disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/home/obj1disk2.qcow2'/>
   <target dev='vdc'/>
</disk>
[root@localhost ~]# virsh attach-device --config object1 /etc/libvirt/qemu/objt1disk2.xml --live
Device attached successfully

 

[root@localhost ~]# vi /etc/libvirt/qemu/obj2diskt1.xml

<disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/home/obj2disk1.qcow2'/>
   <target dev='vdb'/>

[root@localhost ~]# virsh attach-device --config object2 /etc/libvirt/qemu/obj2diskt1.xml --live
Device attached successfully

 

[root@localhost ~]# vi /etc/libvirt/qemu/obj2disk2.xml

<disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/home/obj2disk2.qcow2'/>
   <target dev='vdc'/>
</disk>
[root@localhost ~]# virsh attach-device --config object2 /etc/libvirt/qemu/obj2disk2.xml --live
Device attached successfully

 

 

 

 

Move to the object1 node.

The steps are shown only for object1, but perform the same work on object2.

First, check the disks.

Note that on physical hardware with physical disks, or when the host attaches virtual disks over a SCSI or SATA bus, the devices appear as /dev/sd?.

In this environment the virtual disks are attached over the virtio bus, so the device names come up as /dev/vd?.

[root@object1 ~]# fdisk -l

Disk /dev/vda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007868c

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048   209715199   104344576   8e  Linux LVM

...(omitted)...


Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Before installing the main packages, install the supporting utilities. XFS is the default file system on CentOS/RHEL 7, so xfsprogs may already be installed.

[root@object1 ~]# yum install -y xfsprogs rsync
Loaded plugins: fastestmirror

Determining fastest mirrors
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Package xfsprogs-3.2.2-2.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.0.9-17.el7 will be installed
--> Finished Dependency Resolution

 

Installed:
  rsync.x86_64 0:3.0.9-17.el7

Complete!
[root@object1 ~]# 

 

Format the two disks with XFS.

[root@object1 ~]# mkfs.xfs /dev/vdb
meta-data=/dev/vdb               isize=256    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@object1 ~]# mkfs.xfs /dev/vdc
meta-data=/dev/vdc               isize=256    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 

Create the mount points.

[root@object1 ~]# mkdir -p /srv/node/vdb
[root@object1 ~]# mkdir -p /srv/node/vdc

 

 

Add the mount entries to /etc/fstab and mount the file systems.

[root@object1 ~]# vi /etc/fstab

...(omitted)...

/dev/vdb            /srv/node/vdb           xfs     noatime,nodiratime,nobarrier,logbufs=8  0 2
/dev/vdc            /srv/node/vdc           xfs     noatime,nodiratime,nobarrier,logbufs=8  0 2

 

[root@object1 ~]# mount /srv/node/vdb
[root@object1 ~]# mount /srv/node/vdc
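
A quick way to confirm that both file systems are mounted where expected:

[root@object1 ~]# df -h /srv/node/vdb /srv/node/vdc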

 

Edit /etc/rsyncd.conf and add the following.

Set the address entry to the storage node's IP: 10.0.0.51 for object1 and 10.0.0.52 for object2.

[root@object1 ~]# vi /etc/rsyncd.conf

...

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 10.0.0.51

 

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

 

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

 

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

 

Enable and start rsyncd.

[root@object1 ~]# systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
[root@object1 ~]# systemctl start rsyncd.service
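
Once rsyncd is running, the exported modules can be listed to confirm the configuration took effect (use the node's own address, 10.0.0.51 here):

[root@object1 ~]# rsync 10.0.0.51::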

 

Install the swift packages.

[root@object1 ~]# yum install -y openstack-swift-account openstack-swift-container openstack-swift-object
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package openstack-swift-account.noarch 0:2.5.0-1.el7 will be installed

...(omitted)...

  python-tempita.noarch 0:0.5.1-8.el7

  python2-eventlet.noarch 0:0.17.4-4.el7
  python2-greenlet.x86_64 0:0.4.9-1.el7

Complete!
[root@object1 ~]#

 

 

 

Fetch the account, container, and object service configuration files from the Object Storage source repository.

[root@object1 ~]# curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6147  100  6147    0     0   6194      0 --:--:-- --:--:-- --:--:--  6190
[root@object1 ~]# curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6976  100  6976    0     0   7501      0 --:--:-- --:--:-- --:--:--  7501
[root@object1 ~]# curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11819  100 11819    0     0  13514      0 --:--:-- --:--:-- --:--:-- 13507 

 

 

 

Edit /etc/swift/account-server.conf as follows.

In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory. Set bind_ip to the storage node's IP.
In the [pipeline:main] section, enable the appropriate modules.
In the [filter:recon] section, configure the recon (meters) cache directory.

[root@object1 ~]# vi /etc/swift/account-server.conf

[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...


[pipeline:main]
pipeline = healthcheck recon account-server
...


[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift 

 

Give /etc/swift/container-server.conf the same settings as /etc/swift/account-server.conf, except set bind_port to 6001.

[root@object1 ~]# vi /etc/swift/container-server.conf

[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...


[pipeline:main]
pipeline = healthcheck recon container-server
...


[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift 

 

Edit /etc/swift/object-server.conf as well; the settings are nearly identical, except bind_port is 6000 and a recon_lock_path setting is added.

[root@object1 ~]# vi /etc/swift/object-server.conf

[DEFAULT]
...
bind_ip = 10.0.0.51
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
...


[pipeline:main]
pipeline = healthcheck recon object-server

...


[filter:recon]
use = egg:swift#recon
...
recon_cache_path = /var/cache/swift

recon_lock_path = /var/lock

 

Set ownership of the swift mount points.

[root@object1 ~]# chown -R swift:swift /srv/node

 

 

Create the recon directory and set its ownership.

[root@object1 ~]# mkdir -p /var/cache/swift
[root@object1 ~]# chown -R root:swift /var/cache/swift

 

 

 

Go back to the controller node.

Before starting the Object Storage services, rings must be built for the account, container, and object services installed on the storage nodes.

The ring builder produces the configuration files that each node uses to determine the storage architecture.

 

First, build the account ring.

The account server uses the account ring to maintain the lists of containers.

Move to the /etc/swift directory and create the account.builder file. The create arguments are the partition power (2^10 = 1024 partitions), the replica count (3), and the minimum number of hours before a partition can be moved again (1).

[root@controller swift]# swift-ring-builder account.builder create 10 3 1

 

 

Add each storage node to the ring with the following command, where STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS is

the IP of each object node, DEVICE_NAME is the disk device name on that node, and DEVICE_WEIGHT is set to 100.

# swift-ring-builder account.builder  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6002 \
  --device DEVICE_NAME --weight DEVICE_WEIGHT

Add each of the four disks configured on the storage nodes.

[root@controller swift]# swift-ring-builder account.builder add \
   --region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6002R10.0.0.51:6002/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder account.builder add \
   --region 1 --zone 2 --ip 10.0.0.51 --port 6002 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6002R10.0.0.51:6002/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder account.builder add \
   --region 1 --zone 3 --ip 10.0.0.52 --port 6002 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6002R10.0.0.52:6002/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder account.builder add \
   --region 1 --zone 4 --ip 10.0.0.52 --port 6002 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6002R10.0.0.52:6002/vdc_"" with 100.0 weight got id 3

 

 

Verify that the devices were added to the ring.

[root@controller swift]# swift-ring-builder account.builder
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1       10.0.0.51  6002       10.0.0.51              6002       vdb 100.00          0 -100.00
             1       1     2       10.0.0.51  6002       10.0.0.51              6002       vdc 100.00          0 -100.00
             2       1     3       10.0.0.52  6002       10.0.0.52              6002       vdb 100.00          0 -100.00
             3       1     4       10.0.0.52  6002       10.0.0.52              6002       vdc 100.00          0 -100.00

 

Rebalance the ring.

[root@controller swift]# swift-ring-builder account.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00 

 

Go through the same steps for the container ring and the object ring. If you adapt the account commands, double-check the ports and the builder file names.

[root@controller swift]# swift-ring-builder container.builder create 10 3 1

[root@controller swift]# 
[root@controller swift]# swift-ring-builder container.builder add \
   --region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6001R10.0.0.51:6001/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder container.builder add \
   --region 1 --zone 2 --ip 10.0.0.51 --port 6001 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6001R10.0.0.51:6001/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder container.builder add \
   --region 1 --zone 3 --ip 10.0.0.52 --port 6001 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6001R10.0.0.52:6001/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder container.builder add \
   --region 1 --zone 4 --ip 10.0.0.52 --port 6001 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6001R10.0.0.52:6001/vdc_"" with 100.0 weight got id 3
[root@controller swift]#

[root@controller swift]# swift-ring-builder container.builder
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1       10.0.0.51  6001       10.0.0.51              6001       vdb 100.00          0 -100.00
             1       1     2       10.0.0.51  6001       10.0.0.51              6001       vdc 100.00          0 -100.00
             2       1     3       10.0.0.52  6001       10.0.0.52              6001       vdb 100.00          0 -100.00
             3       1     4       10.0.0.52  6001       10.0.0.52              6001       vdc 100.00          0 -100.00
[root@controller swift]#

[root@controller swift]# swift-ring-builder container.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]#

[root@controller swift]#

[root@controller swift]# swift-ring-builder object.builder create 10 3 1

[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder add \
   --region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device vdb --weight 100
Device d0r1z1-10.0.0.51:6000R10.0.0.51:6000/vdb_"" with 100.0 weight got id 0
[root@controller swift]# swift-ring-builder object.builder add \
   --region 1 --zone 2 --ip 10.0.0.51 --port 6000 --device vdc --weight 100
Device d1r1z2-10.0.0.51:6000R10.0.0.51:6000/vdc_"" with 100.0 weight got id 1
[root@controller swift]# swift-ring-builder object.builder add \
   --region 1 --zone 3 --ip 10.0.0.52 --port 6000 --device vdb --weight 100
Device d2r1z3-10.0.0.52:6000R10.0.0.52:6000/vdb_"" with 100.0 weight got id 2
[root@controller swift]# swift-ring-builder object.builder add \
   --region 1 --zone 4 --ip 10.0.0.52 --port 6000 --device vdc --weight 100
Device d3r1z4-10.0.0.52:6000R10.0.0.52:6000/vdc_"" with 100.0 weight got id 3

[root@controller swift]#
[root@controller swift]# swift-ring-builder object.builder
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1       10.0.0.51  6000       10.0.0.51              6000       vdb 100.00          0 -100.00
             1       1     2       10.0.0.51  6000       10.0.0.51              6000       vdc 100.00          0 -100.00
             2       1     3       10.0.0.52  6000       10.0.0.52              6000       vdb 100.00          0 -100.00
             3       1     4       10.0.0.52  6000       10.0.0.52              6000       vdc 100.00          0 -100.00
[root@controller swift]#

[root@controller swift]# swift-ring-builder object.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00

 

 

Verify that the ring configuration files account.ring.gz, container.ring.gz, and object.ring.gz were created, then

distribute them to /etc/swift (the swift_dir set in the configuration files above) on every storage node and on any additional proxy servers, if you added any.

[root@controller swift]# ls
account.builder    container-reconciler.conf  object.builder       proxy-server.conf
account.ring.gz    container.ring.gz          object-expirer.conf  swift.conf
backups            container-server           object.ring.gz
container.builder  container-server.conf      proxy-server

[root@controller swift]# scp *.gz object1:/etc/swift
The authenticity of host 'object1 (10.0.0.51)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object1,10.0.0.51' (ECDSA) to the list of known hosts.
account.ring.gz                                              100% 1441     1.4KB/s   00:00
container.ring.gz                                            100% 1451     1.4KB/s   00:00
object.ring.gz                                               100% 1428     1.4KB/s   00:00
[root@controller swift]# scp *.gz object2:/etc/swift
The authenticity of host 'object2 (10.0.0.52)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object2,10.0.0.52' (ECDSA) to the list of known hosts.
account.ring.gz                                              100% 1441     1.4KB/s   00:00
container.ring.gz                                            100% 1451     1.4KB/s   00:00
object.ring.gz                                               100% 1428     1.4KB/s   00:00

[root@controller swift]#
 

 

Fetch the /etc/swift/swift.conf file from the Object Storage source repository and edit it as shown below.

In the [swift-hash] section, configure the hash path values. Replace HASH_PATH_SUFFIX and HASH_PATH_PREFIX with unique values; generating them with the "openssl rand -hex 10" command is recommended (an example follows the configuration snippet below). Keep these values secret and do not change or delete them.

In the [storage-policy:0] section, configure the default storage policy.

[root@controller ~]# curl -o /etc/swift/swift.conf \
   https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/liberty
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7200  100  7200    0     0   5204      0  0:00:01  0:00:01 --:--:--  5206
[root@controller ~]# vi /etc/swift/swift.conf

[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX
...

 

[storage-policy:0]
...
name = Policy-0
default = yes
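
For reference, generating the two random hash values could look roughly like this; the hex strings below are only illustrations, not values to reuse.

[root@controller ~]# openssl rand -hex 10
0c1b2a3d4e5f60718293
[root@controller ~]# openssl rand -hex 10
a1b2c3d4e5f60718293a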

 

 

Distribute the /etc/swift/swift.conf file to every node that runs Swift services.

[root@controller ~]# scp /etc/swift/swift.conf object1:/etc/swift/swift.conf
swift.conf                                                   100% 7224     7.1KB/s   00:00
[root@controller ~]# scp /etc/swift/swift.conf object2:/etc/swift/swift.conf
swift.conf                                                   100% 7224     7.1KB/s   00:00

 

On every Swift-related node, fix the ownership of /etc/swift and everything under it.

[root@controller ~]# chown -R root:swift /etc/swift

 

 

On the controller (and on any other server where the proxy service is installed), start the proxy service and enable it at boot.

[root@controller ~]# systemctl enable openstack-swift-proxy.service memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-proxy.service to /usr/lib/systemd/system/openstack-swift-proxy.service.
[root@controller ~]# systemctl start openstack-swift-proxy.service memcached.service

 

On the storage nodes, start the related services and enable them at boot (perform the same steps on both object1 and object2).

[root@object1 ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
   openstack-swift-account-reaper.service openstack-swift-account-replicator.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account.service to /usr/lib/systemd/system/openstack-swift-account.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-auditor.service to /usr/lib/systemd/system/openstack-swift-account-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-reaper.service to /usr/lib/systemd/system/openstack-swift-account-reaper.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-replicator.service to /usr/lib/systemd/system/openstack-swift-account-replicator.service.

[root@object1 ~]#  

[root@object1 ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \

   openstack-swift-account-reaper.service openstack-swift-account-replicator.service

[root@object1 ~]#  

[root@object1 ~]# systemctl enable openstack-swift-container.service \

   openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
   openstack-swift-container-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container.service to /usr/lib/systemd/system/openstack-swift-container.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-auditor.service to /usr/lib/systemd/system/openstack-swift-container-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-replicator.service to /usr/lib/systemd/system/openstack-swift-container-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-updater.service to /usr/lib/systemd/system/openstack-swift-container-updater.service.

[root@object1 ~]# 
[root@object1 ~]# systemctl start openstack-swift-container.service \
   openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
   openstack-swift-container-updater.service

[root@object1 ~]#
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
   openstack-swift-object-replicator.service openstack-swift-object-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object.service to /usr/lib/systemd/system/openstack-swift-object.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-auditor.service to /usr/lib/systemd/system/openstack-swift-object-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-replicator.service to /usr/lib/systemd/system/openstack-swift-object-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-updater.service to /usr/lib/systemd/system/openstack-swift-object-updater.service.

[root@object1 ~]#
[root@object1 ~]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
   openstack-swift-object-replicator.service openstack-swift-object-updater.service

 

 

Verify that the Object Storage service was installed correctly.

Configure the Object Storage client to use Identity API version 3 by adding it to the environment scripts.

[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh
export OS_AUTH_VERSION=3

 

Source the demo user credentials script.

[root@controller ~]# source demo-openrc.sh 

 

Check the service status.

[root@controller ~]# swift stat
        Account: AUTH_94f9c25aaa4246b0915afacca2d65c22
     Containers: 0
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1457586763.62553
    X-Timestamp: 1457586763.62553
     X-Trans-Id: tx26363eefacfc4ee68bb3b-0056e1024b
   Content-Type: text/plain; charset=utf-8

 

Try a test upload, using the image file downloaded during the glance installation.

[root@controller ~]# swift upload container1 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img

 

Check the container list and account status; the values have changed after the upload.

[root@controller ~]# swift list
container1
[root@controller ~]# swift stat
                        Account: AUTH_94f9c25aaa4246b0915afacca2d65c22
                     Containers: 1
                        Objects: 2
                          Bytes: 26575872
Containers in policy "policy-0": 1
   Objects in policy "policy-0": 2
     Bytes in policy "policy-0": 26575872
    X-Account-Project-Domain-Id: default
                    X-Timestamp: 1457586773.61689
                     X-Trans-Id: tx2126c29c9f3b400cb6539-0056e103b0
                   Content-Type: text/plain; charset=utf-8
                  Accept-Ranges: bytes

 

 

Download the file that was uploaded.

[root@controller ~]# swift download container1 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img [auth 0.366s, headers 0.429s, total 0.505s, 95.682 MB/s]
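
As an extra, optional sanity check, the checksum of the downloaded copy can be compared against the checksum recorded for the image before it was uploaded:

[root@controller ~]# md5sum cirros-0.3.4-x86_64-disk.img    # should match the checksum of the originally downloaded image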

 

The Block Storage service provides instances with block storage that behaves like a local disk; its code name is cinder.

By default, cinder manages volumes with LVM and serves them over iSCSI. Besides the LVM backend there are other drivers, such as Ceph or vendor-specific storage backends; this installation uses the LVM backend.

The service is installed on the controller and on the block1 node.

Start with the controller.

As with the other services, begin by creating the database and setting permissions.

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 732
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
       IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' \
       IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
       IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.00 sec) 
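
As a quick optional check that the grants work, one can log in as the new cinder user from the controller (output omitted here); the cinder database should appear in the list:

[root@controller ~]# mysql -u cinder -pCINDER_DBPASS -h controller -e "SHOW DATABASES;"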

 

 

Create the cinder user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | c573d53072ee49c1945eeadffff98362 |
| name      | cinder                           |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user cinder admin

 

Create the cinder and cinderv2 services. Note that the Block Storage service requires two service entities.

[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 3eb647fd3fd446d99d5da6361189a4a3 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 5794fcb4caaf4ed2977d4dbf38fa11f5 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

 

Create public, internal, and admin API endpoints for each of the two services.

[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 1d8ca25354cc42518c2bdaebb49b2beb        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3eb647fd3fd446d99d5da6361189a4a3        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 2037fde213a644ef8cf267d700055a2f        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3eb647fd3fd446d99d5da6361189a4a3        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 93c669d42b344fa7b4d8e169bbf9d88c        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 3eb647fd3fd446d99d5da6361189a4a3        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | ec98e3e108224304a2a961c6aa621f4c        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5794fcb4caaf4ed2977d4dbf38fa11f5        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | d8ac4574dd964f30bbe0699af65168ba        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5794fcb4caaf4ed2977d4dbf38fa11f5        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 2ea1438ad32d4fa4b065afec0e7624d9        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5794fcb4caaf4ed2977d4dbf38fa11f5        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
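
To double-check the six endpoints just created, filtering the endpoint list is a quick option; public, internal, and admin entries should appear for both cinder and cinderv2:

[root@controller ~]# openstack endpoint list | grep cinder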

 

Install the packages.

[root@controller ~]# yum install -y openstack-cinder python-cinderclient
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Package python-cinderclient-1.4.0-1.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-cinder.noarch 1:7.0.1-1.el7 will be installed

...(output omitted)...

  qemu-img.x86_64 10:1.5.3-105.el7_2.3
  rsyslog-mmjsonparse.x86_64 0:7.4.7-12.el7                 

  sysfsutils.x86_64 0:2.1.0-16.el7

Complete!
[root@controller ~]#

 

Edit the /etc/cinder/cinder.conf file as shown below.

In the [database] section, configure database access. Replace CINDER_DBPASS with the password you chose.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Replace CINDER_PASS with the password you chose, and remove everything else from [keystone_authtoken].

In the [DEFAULT] section, set my_ip to the IP address of the controller where the cinder services run.
In the [oslo_concurrency] section, configure the lock path.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@controller ~]# vi /etc/cinder/cinder.conf

[DEFAULT]
...
rpc_backend = rabbit
...
auth_strategy = keystone
...
my_ip = 10.0.0.11

...
verbose = True

...


 


[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
...


[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
...


[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
...


[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
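
The same edits can also be scripted instead of made interactively with vi. A minimal sketch using the crudini utility, assuming the crudini package is installed (it is not used elsewhere in this guide), would look like this for a few of the settings above:

[root@controller ~]# yum install -y crudini
[root@controller ~]# crudini --set /etc/cinder/cinder.conf database connection mysql://cinder:CINDER_DBPASS@controller/cinder
[root@controller ~]# crudini --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
[root@controller ~]# crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 10.0.0.11
[root@controller ~]# crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp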

 

Edit the /etc/nova/nova.conf file.

[root@controller ~]# vi /etc/nova/nova.conf

 

[cinder]
os_region_name = RegionOne

 

Restart the modified Nova API service, then start the Cinder services and enable them at boot.

[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
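
If the services came up cleanly, the Block Storage API should be listening on port 8776; a quick way to confirm this on the controller is:

[root@controller ~]# ss -tnlp | grep 8776    # a LISTEN entry owned by the cinder-api (python) process should appear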

 

 

 

Before installing packages on the block node, log in to the host machine first.

Attach a disk device to the block1 node for the service to use.

[root@localhost ~]# qemu-img create -f qcow2 /home/blockdisk.qcow2 100G
Formatting '/home/blockdisk.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ~]#
[root@localhost ~]# chown qemu:qemu /home/blockdisk.qcow2
[root@localhost ~]#
[root@localhost ~]# virsh pool-refresh home
Pool home refreshed

[root@localhost ~]# virsh vol-list home
 Name                 Path
------------------------------------------------------------------------------
 blockdisk.qcow2      /home/blockdisk.qcow2
 CentOS-6.6-x86_64-bin-DVD1.iso /home/CentOS-6.6-x86_64-bin-DVD1.iso
 CentOS-7-x86_64-DVD-1503-01.iso /home/CentOS-7-x86_64-DVD-1503-01.iso
 CentOS-7-x86_64-DVD-1511.iso /home/CentOS-7-x86_64-DVD-1511.iso
...(output omitted)...

[root@localhost ~]# vi /etc/libvirt/qemu/newstorage.xml

<disk type='file' device='disk'>
   <driver name='qemu' type='qcow2'/>
   <source file='/home/blockdisk.qcow2'/>
   <target dev='vdb'/>
</disk>

[root@localhost ~]# virsh attach-device --config block1 /etc/libvirt/qemu/newstorage.xml --live
Device attached successfully

 

 

Now work on the block1 node.

First, confirm that the device has been attached.

[root@block1 ~]# parted -l
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/centos-home: 44.8GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  44.8GB  44.8GB  xfs

...(output omitted)...


Error: /dev/vdb: unrecognised disk label
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

 

First install the LVM packages and enable the service. Depending on your Linux version they may already be installed, in which case this step can be skipped.

[root@block1 ~]# yum -y install lvm2
Determining fastest mirrors
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Package 7:lvm2-2.02.130-5.el7_2.1.x86_64 already installed and latest version
Nothing to do

[root@block1 ~]# 

[root@block1 ~]# systemctl enable lvm2-lvmetad.service
Created symlink from /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.service to /usr/lib/systemd/system/lvm2-lvmetad.service.
[root@block1 ~]# systemctl start lvm2-lvmetad.service

 

Create the physical volume first, then the volume group.

[root@block1 ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created

[root@block1 ~]# vgcreate cinder-volumes /dev/vdb
  Volume group "cinder-volumes" successfully created
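
To confirm the result, the standard LVM reporting commands can be used:

[root@block1 ~]# pvs                  # /dev/vdb should be listed, assigned to cinder-volumes
[root@block1 ~]# vgs cinder-volumes   # the new volume group should report roughly 100G of free space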

 

Because LVM scans every disk under /dev by default, the devices it may use must be explicitly filtered.

Edit the /etc/lvm/lvm.conf file as shown below. "a/" means accept and "r/" means reject: include vdb, which cinder will use, and also include vda if the root disk uses LVM. Reject every other device.

[root@block1 ~]# vi /etc/lvm/lvm.conf

...

devices {

...
filter = [ "a/vda/", "a/vdb/", "r/.*/"] 

 

Now install the cinder packages.

[root@block1 ~]# yum install -y openstack-cinder targetcli python-oslo-policy

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package openstack-cinder.noarch 1:7.0.1-1.el7 will be installed

...(output omitted)...

  sysfsutils.x86_64 0:2.1.0-16.el7                          

  tbb.x86_64 0:4.1-9.20130314.el7

Complete!

[root@block1 ~]#

 

Edit the /etc/cinder/cinder.conf file as shown below.

In the [database] section, configure database access. Replace CINDER_DBPASS with the password you chose.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Replace CINDER_PASS with the password you chose.
In the [DEFAULT] section, set my_ip to 10.0.0.41, the IP address of the block1 node.
In the [lvm] section, configure the LVM backend.
In the [DEFAULT] section, enable the LVM backend.
In the [DEFAULT] section, configure the location of the Image service.
In the [oslo_concurrency] section, configure the lock path.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@block1 ~]# vi /etc/cinder/cinder.conf

[DEFAULT]
...
rpc_backend = rabbit
...
auth_strategy = keystone
...
my_ip = 10.0.0.41
...
enabled_backends = lvm
...
glance_host = controller
...
verbose = True

...


[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
...


[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
...


[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
...


[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
...


[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp 

 

Start the services and enable them at boot.

[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service

 

 

 

 

To verify the service, move back to the controller node.

Check the service list; each host should show one service entry.

[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2016-03-05T20:10:20.000000 |        -        |
|  cinder-volume   | block1@lvm | nova | enabled |   up  | 2016-03-05T20:10:27.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
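
As an optional smoke test beyond the service list, creating and removing a small volume exercises the scheduler and the LVM backend end to end; depending on the client version, --name may be needed instead of --display-name:

[root@controller ~]# cinder create --display-name testvol 1   # create a 1 GB test volume
[root@controller ~]# cinder list                              # wait for testvol to reach the "available" status
[root@controller ~]# cinder delete testvol                    # clean up the test volume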

 

 

 

 

 

The Dashboard provides a web-based interface for managing OpenStack; its code name is horizon.

It is installed only on the controller node.

First, install the package.

[root@controller ~]# yum install -y openstack-dashboard
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Resolving Dependencies
--> Running transaction check
---> Package openstack-dashboard.noarch 1:8.0.0-1.el7 will be installed

 

...(output omitted)...

  python2-XStatic-roboto-fontface.noarch 0:0.4.3.2-4.el7

  python2-django-openstack-auth.noarch 0:2.0.1-1.el7        

  roboto-fontface-common.noarch 0:0.4.3.2-4.el7        

  roboto-fontface-fonts.noarch 0:0.4.3.2-4.el7
  web-assets-filesystem.noarch 0:5-1.el7

Complete!
[root@controller ~]#

 

Edit the /etc/openstack-dashboard/local_settings file as shown below.

Point the dashboard at the controller node so it can use the OpenStack services.

Allow all hosts to access the dashboard.

Configure the memcached cache backend (comment out any other CACHES configuration nearby).

Set the default role for users created through the dashboard to user.
Enable the multi-domain model.
Configure the service API versions so the dashboard can log in through the Keystone v3 API.
Set the time zone.

[root@controller ~]# vi /etc/openstack-dashboard/local_settings

...

OPENSTACK_HOST = "controller"
...

ALLOWED_HOSTS = ['*', ]

...

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '127.0.0.1:11211',
    }
}
...

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
...
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
...
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
}
...
TIME_ZONE = "Asia/Seoul"

 

Restart httpd and memcached and enable them at boot.

[root@controller ~]# systemctl enable httpd.service memcached.service
[root@controller ~]# systemctl restart httpd.service memcached.service

 

 

To verify the installation, open http://controller/dashboard in a web browser from a PC or server that can reach the controller; a quick command-line check is also sketched below.
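
If no browser is handy, a rough check from the controller itself can at least confirm that the dashboard responds; an HTTP 200 or a redirect to the login page is the expected result:

[root@controller ~]# curl -I http://controller/dashboard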

 

 

 

 

 

Next comes the Networking service. Its code name is Neutron, and it is the service needed to implement SDN (Software Defined Networking).

It can provide routing, firewall, load balancer, and virtual private network (VPN) functionality.

It is an important service; older installation guides configured a dedicated network node for it.

It is installed on the controller node and the compute node, and the installation flow is the same as for the other services.

Start with the controller node.

First, create the database and set permissions. Replace NEUTRON_DBPASS with the database password of your choice.

 

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 400
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
       IDENTIFIED BY 'NEUTRON_DBPASS';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' \

       IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
       IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

Export the admin credentials so the openstack CLI commands work. If your session continues from the previous installation steps, this can be skipped.

[root@controller ~]# source admin-openrc.sh
 

 

Create the neutron user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 04f45beeb2a54c5bbadf3b7845c62d86 |
| name      | neutron                          |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user neutron admin

 

Create the neutron service.

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | c47974b46d0c4b3781588c9b8817944a |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

 

 

Create the API endpoints for the neutron service.

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 457104ab1eef491f81bb7aa197f55b69 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c47974b46d0c4b3781588c9b8817944a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c0e4a873e04a4375a92c42ea9284e9de |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c47974b46d0c4b3781588c9b8817944a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | de4df65f8e5e4ed6802ceaf7eae57a36 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c47974b46d0c4b3781588c9b8817944a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

 

 

Install the packages.

[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Package python-neutronclient-3.1.0-1.el7.noarch already installed and latest version
Package ebtables-2.0.10-13.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ipset.x86_64 0:6.19-4.el7 will be installed
--> Processing Dependency: ipset-libs = 6.19-4.el7 for package: ipset-6.19-4.el7.x86_64

...(output omitted)...

  python-singledispatch.noarch 0:3.4.0.2-2.el7

  python-webtest.noarch 0:1.3.4-6.el7
  python2-pecan.noarch 0:1.0.2-2.el7      

  radvd.x86_64 0:1.9.2-9.el7

Complete!
[root@controller ~]#


 

From this point on there are two options for the network configuration:

Provider networks and Self-service networks. Provider networks is the simpler, Layer 2-based option that relies on the physical network, without self-service networks, routers, or floating IP addresses. Self-service networks implements Layer 3 (routing) using overlay technologies such as VXLAN.

The rest of this guide follows the Self-service networks option.

 

Edit the /etc/neutron/neutron.conf file as shown below.

In the [database] section, configure the database connection. Replace NEUTRON_DBPASS with the password you chose.

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access. Replace RABBIT_PASS with the password you chose.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access. Replace NEUTRON_PASS with the password you chose, and comment out or remove everything else from [keystone_authtoken].

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.

In the [oslo_concurrency] section, configure the lock path.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]

...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

...

rpc_backend = rabbit

...
auth_strategy = keystone

...

notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2

...
verbose = True

...


 

 

[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

...

[oslo_messaging_rabbit]

...

rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

...

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS

...

[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

...

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

 

 

To configure the Modular Layer 2 (ML2) plug-in, edit the /etc/neutron/plugins/ml2/ml2_conf.ini file as shown below.

 

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]

 

type_drivers = flat,vlan,vxlan
...

tenant_network_types = vxlan
...

mechanism_drivers = linuxbridge,l2population

...

extension_drivers = port_security

...

[ml2_type_flat]
...
flat_networks = public
...

[ml2_type_vxlan]
...
vni_ranges = 1:1000
...

[securitygroup]
...
enable_ipset = True 


 

To configure the Linux bridge agent, edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file as shown below.

In the [linux_bridge] section, map the public virtual network to the public physical network interface.

In the [vxlan] section, enable VXLAN and layer-2 population, and enter the address of the physical network interface that handles the overlay network; here that is 10.0.0.11.

In the [agent] section, enable ARP spoofing protection.

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]

...

physical_interface_mappings = public:eth1

#physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

...

[vxlan]
enable_vxlan = True
local_ip = 10.0.0.11

#local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True

..

[agent]
...
prevent_arp_spoofing = True

...

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
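
The interface name eth1 above is just this environment's public interface; if unsure which name to map, listing the interfaces on the node helps:

[root@controller ~]# ip -o link show    # pick the interface attached to the public (provider) network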

 

 

To configure Layer 3 (routing and NAT) services for virtual networks, edit the /etc/neutron/l3_agent.ini file as shown below.

In the [DEFAULT] section, configure the Linux bridge interface driver and the external network bridge. The external network bridge is intentionally left empty so that a single agent can serve multiple external networks.

[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]

...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =

 

To provide DHCP service to virtual networks, edit the /etc/neutron/dhcp_agent.ini file as shown below.

In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver,

and enable isolated metadata so that instances on the public network can reach the metadata service.

 

[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]

...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True  

 

One more DHCP-related setting: overlay protocols such as VXLAN wrap each packet's payload in additional headers. Instances are unaware that they are on a virtual network,

so if they send packets with the default MTU of 1500, performance degradation or connectivity problems can occur. To avoid this, have DHCP set the instance MTU to 1450 as shown below.

(Some cloud images ignore this DHCP option; in that case the MTU must be configured through metadata, a script, or another suitable method.)

In the [DEFAULT] section, configure the DHCP agent to use a dnsmasq configuration file.

[DEFAULT]

...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

 

Create and edit the /etc/neutron/dnsmasq-neutron.conf file.

[root@controller ~]# vi /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1450

 

 

To configure the metadata agent, edit the /etc/neutron/metadata_agent.ini file as shown below.

The metadata agent provides configuration information such as credentials to instances.

In the [DEFAULT] section, configure the access parameters. Replace NEUTRON_PASS with the password you chose.

In the [DEFAULT] section, set the metadata host to controller.

In the [DEFAULT] section, configure the metadata proxy shared secret.

 

[root@controller ~]# vi /etc/neutron/metadata_agent.ini

[DEFAULT]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS

...
nova_metadata_ip = controller

...
metadata_proxy_shared_secret = METADATA_SECRET

 

To make the Compute service use neutron, edit /etc/nova/nova.conf as shown below.

In the [neutron] section, configure the access parameters, enable the metadata proxy, and set its shared secret.
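
A sketch of this edit, based on the corresponding compute-node section later in this guide plus the metadata proxy settings named above, would look roughly like this (NEUTRON_PASS and METADATA_SECRET are the values chosen earlier):

[root@controller ~]# vi /etc/nova/nova.conf

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET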

 

The Networking service expects a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.

If this symbolic link does not exist, create it.

 

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ls -alrt /etc/neutron/plugin.ini
lrwxrwxrwx. 1 root root 37 Mar  5 16:50 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini

 

Populate the database tables.

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

Because the Nova configuration file changed, restart the nova API service; then start the Neutron services and enable them at boot.

[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable neutron-server.service \
   neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
   neutron-metadata-agent.service neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
[root@controller ~]# systemctl start neutron-server.service \
   neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
   neutron-metadata-agent.service neutron-l3-agent.service

[root@controller ~]#  

 

Next, configure the compute node.

Install the packages first.

[root@compute1 ~]# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Package ebtables-2.0.10-13.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ipset.x86_64 0:6.19-4.el7 will be installed
--> Processing Dependency: ipset-libs = 6.19-4.el7 for package: ipset-6.19-4.el7.x86_64

 

...(output omitted)...

  python-webtest.noarch 0:1.3.4-6.el7

  python2-pecan.noarch 0:1.0.2-2.el7

Complete!


 

 

Edit the /etc/neutron/neutron.conf file as shown below.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ access. Replace RABBIT_PASS with the password you chose.
In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service credentials. Replace NEUTRON_PASS with the password you chose.

Remove or comment out everything else in the [keystone_authtoken] section.
In the [oslo_concurrency] section, configure the lock path.

 

[root@compute1 ~]# vi /etc/neutron/neutron.conf

[DEFAULT]
...
rpc_backend = rabbit
...
auth_strategy = keystone
...
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
...
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
...
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

 

To configure the Linux bridge agent, edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file as shown below.

In the [linux_bridge] section, map the public virtual network to the public physical network interface.

In the [vxlan] section, enable VXLAN and layer-2 population, and enter the address of the physical network interface that handles the overlay network; here that is 10.0.0.31.

In the [agent] section, enable ARP spoofing protection.

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.

[root@compute1 ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]

...

physical_interface_mappings = public:eth1

#physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

...

[vxlan]
enable_vxlan = True
local_ip = 10.0.0.31

#local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True

..

[agent]
...
prevent_arp_spoofing = True

...

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

 

Edit the /etc/nova/nova.conf file as shown below.

In the [neutron] section, enter the credentials as follows. Replace NEUTRON_PASS with the password you chose.

[root@compute1 ~]# vi /etc/nova/nova.conf
[neutron]

...
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

 

Because the Nova configuration file changed, restart the nova compute service; then start the Neutron Linux bridge agent and enable it at boot.

[root@compute1 ~]# systemctl restart openstack-nova-compute.service
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service

 

 

To verify the installation, move back to the controller node.

Check that the installation succeeded with the neutron ext-list command.

[root@controller ~]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| dns-integration       | DNS Integration                               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| agent                 | agent                                         |
| subnet_allocation     | Subnet Allocation                             |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| external-net          | Neutron external network                      |
| flavors               | Neutron Service Flavors                       |
| net-mtu               | Network MTU                                   |
| quotas                | Quota management support                      |
| l3-ha                 | HA Router extension                           |
| provider              | Provider Network                              |
| multi-provider        | Multi Provider Network                        |
| extraroute            | Neutron Extra Route                           |
| router                | Neutron L3 Router                             |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| security-group        | security-group                                |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| rbac-policies         | RBAC Policies                                 |
| port-security         | Port Security                                 |
| allowed-address-pairs | Allowed Address Pairs                         |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

 

 

Check the agent list. The four agents installed on the controller and the one on the compute node should all be visible.

[root@controller nova]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 4d8264e6-41a8-4063-9e20-e661251a0eb2 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| 6bc769ba-f740-4c62-96d9-fccf9bd285d9 | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| 89fbdf23-7421-43eb-8121-12d7641b0861 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| b75d266f-9c8b-4705-af65-ce31bcc4aa69 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| c5f8e1a6-b711-48bc-97c6-efa46200b42d | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

 

 

 

The Compute service acts as the hypervisor layer, managing instances (create, boot, delete, and so on); its code name is nova.

nova is installed on the controller node and also on the compute node, where it manages the instances actually running there.

Start with the controller node.

As with the other services, begin by creating the database. Again, replace 'NOVA_DBPASS' with the nova database password of your choice.

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 55
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
       IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' \
       IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
       IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

 

Activate the openstack admin credentials in the CLI (skip this if your session continues from the previous service installation).

[root@controller ~]# source admin-openrc.sh

 

 

Create the nova user and grant it the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | bf828850e9ec4bcea3695216fd9728b9 |
| name      | nova                             |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user nova admin 

 

Create the nova service.

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | a8522546b3ae4f509c9abeeea8229d04 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

 

 

Create the public, internal, and admin API endpoints.

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | f56a530908a04d14915dd872c1773da1        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | a8522546b3ae4f509c9abeeea8229d04        |
| service_name | nova                                    |
| service_type | compute                                 |
| url          | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | a3ea42ba57ba472689fbb9639062ed7c        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | a8522546b3ae4f509c9abeeea8229d04        |
| service_name | nova                                    |
| service_type | compute                                 |
| url          | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | f5482c450e664a82b34f96e798e975e2        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | a8522546b3ae4f509c9abeeea8229d04        |
| service_name | nova                                    |
| service_type | compute                                 |
| url          | http://controller:8774/v2/%(tenant_id)s |
+--------------+-----------------------------------------+

 

 

Install the nova packages.

[root@controller ~]# yum install -y openstack-nova-api openstack-nova-cert \
    openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler \
    python-novaclient
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Package 1:python-novaclient-2.30.1-1.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-nova-api.noarch 1:12.0.1-1.el7 will be installed

...(output omitted)...

 python2-oslo-reports.noarch 0:0.5.0-1.el7

Complete!

[root@controller ~]#

 

Edit the /etc/nova/nova.conf file as shown below.

In the [database] section, configure the nova database connection.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, use the values configured for the Message queue service; replace RABBIT_PASS with the password you chose.

In the [DEFAULT] and [keystone_authtoken] sections, use the values configured for the Identity service; replace NOVA_PASS with the password you chose.

In the [DEFAULT] section, set my_ip to the IP address of the controller node.

In the [DEFAULT] section, enable support for the Networking service.

In the [vnc] section, configure the VNC server and proxy client addresses using the my_ip variable.

In the [glance] section, set host to the controller node where glance is installed.

In the [oslo_concurrency] section, configure lock_path.

In the [DEFAULT] section, uncomment enabled_apis and remove ec2 from the list to disable the EC2 API.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

 

[root@controller ~]# vi /etc/nova/nova.conf

[DEFAULT]
...
rpc_backend = rabbit

...

auth_strategy=keystone

...
my_ip = 10.0.0.11

...

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

...
enabled_apis=osapi_compute,metadata

...
verbose = True

...

 

 

[database]
...
connection = mysql://nova:NOVA_DBPASS@controller/nova

...
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

...

 

 

[keystone_authtoken]
...
auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS

 

 

[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

...

 

[glance]
...
host = controller

 

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

 

Populate the nova database tables.

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova 

 

Start the nova services and enable them at boot.

[root@controller ~]# systemctl enable openstack-nova-api.service \
   openstack-nova-cert.service openstack-nova-consoleauth.service \
   openstack-nova-scheduler.service openstack-nova-conductor.service \
   openstack-nova-novncproxy.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-cert.service to /usr/lib/systemd/system/openstack-nova-cert.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@controller ~]# systemctl start openstack-nova-api.service \
   openstack-nova-cert.service openstack-nova-consoleauth.service \
   openstack-nova-scheduler.service openstack-nova-conductor.service \
   openstack-nova-novncproxy.service

[root@controller ~]#

 

 

 

Next, configure the compute1 node.

First, install the nova packages for the compute1 node.

[root@compute1 ~]# yum install -y openstack-nova-compute sysfsutils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com

...(output omitted)...

  yajl.x86_64 0:2.0.4-4.el7
  yum-utils.noarch 0:1.1.31-34.el7

Complete!
[root@compute1 ~]# 

 

Edit the /etc/nova/nova.conf file as shown below.

In the [DEFAULT] and [oslo_messaging_rabbit] sections, use the values configured for the Message queue service; replace RABBIT_PASS with the password you chose.

In the [DEFAULT] and [keystone_authtoken] sections, use the values configured for the Identity service; replace NOVA_PASS with the password you chose.

In the [DEFAULT] section, set my_ip to the IP address of the compute1 node.

In the [DEFAULT] section, enable support for the Networking service.

In the [vnc] section, configure remote console access.

In the [glance] section, set host to the controller node where glance is installed.

In the [oslo_concurrency] section, configure lock_path.

(Optional) In the [DEFAULT] section, enable verbose logging to help with troubleshooting.

In practice the file is almost identical to the controller's nova.conf,

so copy the file from the controller node and modify it as shown below.

 

[root@compute1 ~]# scp controller:/etc/nova/nova.conf /etc/nova/nova.conf
The authenticity of host 'controller (10.0.0.11)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller,10.0.0.11' (ECDSA) to the list of known hosts.
nova.conf      

[root@compute1 ~]# vi /etc/nova/nova.conf

[DEFAULT]
...
rpc_backend = rabbit

...

auth_strategy=keystone

...
my_ip = 10.0.0.31

...

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

...
#enabled_apis=osapi_compute,metadata

...
verbose = True

...

 

 

[database]
...
#connection = mysql://nova:NOVA_DBPASS@controller/nova

...

 

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

...

 

 

[keystone_authtoken]
...
auth_uri = http://controller:5000


auth_url = http://controller:35357

auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS

 

[vnc]
...

enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

 

[glance]
...
host = controller

 

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
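
If you prefer to script these compute1-specific changes instead of editing with vi, a minimal sketch using openstack-config (from the openstack-utils package; the IP and URLs below mirror the excerpt above, so adjust them for your environment) would look like this:

# set the compute1-specific values in the copied nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.31
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
# the excerpt comments out the database connection; deleting the key has the same effect
openstack-config --del /etc/nova/nova.conf database connection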

 

Before the final configuration step, check whether the compute node (or its virtual machine) supports hardware acceleration.

Run the command below; the number it prints is the count of cores that support hardware acceleration, and 0 means hardware acceleration is not supported.

 

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
6

[root@compute1 ~]#
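
As a supplementary check (a sketch; it assumes the stock CentOS 7 kernel, where the KVM modules are normally loaded automatically when the CPU supports them), you can also look for the KVM kernel modules:

# kvm plus kvm_intel (or kvm_amd) should be listed when acceleration is available
lsmod | grep kvm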

 

If the number is 1 or greater, nothing more needs to be changed; if it is 0, open /etc/nova/nova.conf again

and change virt_type in the [libvirt] section from kvm to qemu.

[root@compute1 ~]# vi /etc/nova/nova.conf

[libvirt]
...
virt_type = qemu 
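
The same change can be scripted, for example with crudini (a sketch; crudini is assumed to be available, e.g. pulled in alongside openstack-utils on CentOS 7):

# force software virtualization when the CPU has no vmx/svm support
crudini --set /etc/nova/nova.conf libvirt virt_type qemu
# confirm the value nova-compute will read
crudini --get /etc/nova/nova.conf libvirt virt_type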

 

Enable the compute services and start them.

[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/libvirtd.service to /usr/lib/systemd/system/libvirtd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
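
A minimal post-start check (a sketch; /var/log/nova/nova-compute.log is the default log location for the RDO packages):

# both units should report "active"
systemctl is-active libvirtd.service openstack-nova-compute.service
# look for startup errors; no output is the desired result
grep -i error /var/log/nova/nova-compute.log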

 

 

 

 

Verify the installation. Go back to the controller node and run the following.

Check that the services were registered correctly.

[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2016-03-02T22:05:51.000000 | -               |
| 2  | nova-cert        | controller | internal | enabled | up    | 2016-03-02T22:05:52.000000 | -               |
| 3  | nova-consoleauth | controller | internal | enabled | up    | 2016-03-02T22:05:42.000000 | -               |
| 4  | nova-scheduler   | controller | internal | enabled | up    | 2016-03-02T22:05:42.000000 | -               |
| 5  | nova-compute     | compute1   | nova     | enabled | up    | 2016-03-02T22:05:43.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

 

 

Check the endpoints. Ignore the warning messages here.

[root@controller ~]# nova endpoints
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 37a64289ffee4d7f9cf7e9b774ab2b2b |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:5000/v2.0      |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 95d4152f4c464b36ab887c737c02170c |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:5000/v2.0      |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | fb68494248754d98b454fa5dbd193917 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:35357/v2.0     |
+-----------+----------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 3163e3d3732a4ce2b05246a34ddaeb2e |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | a6eab47cba534e3d84269ea111f8c5ea |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | d54bdb0434294a8b867fdbad08dec3ae |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | a3ea42ba57ba472689fbb9639062ed7c                           |
| interface | internal                                                   |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/94f9c25aaa4246b0915afacca2d65c22 |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | f5482c450e664a82b34f96e798e975e2                           |
| interface | admin                                                      |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/94f9c25aaa4246b0915afacca2d65c22 |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | f56a530908a04d14915dd872c1773da1                           |
| interface | public                                                     |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/94f9c25aaa4246b0915afacca2d65c22 |
+-----------+------------------------------------------------------------+

 

Check the image that was uploaded during the glance service installation.

[root@controller ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 49338c63-033c-40a3-abdd-d6410799de24 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

 

 

 

 

The Image service manages the OS images and disks used by virtual servers; its codename is glance.

Configure the environment before installing glance; both the configuration and the installation are done only on the controller node.

First create the glance database, then create the glance account and grant it access. To change the account's password, replace the GLANCE_DBPASS string.

 

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 43
Server version: 5.5.44-MariaDB MariaDB Server

 

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' \
    IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [(none)]> quit
Bye
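
Before moving on, you can confirm the grants work by connecting as the new account (a sketch; substitute your real GLANCE_DBPASS):

# should connect without an access-denied error and list the glance database
mysql -u glance -pGLANCE_DBPASS -h controller -e "SHOW DATABASES LIKE 'glance';"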

 

Run the script created during the Identity service setup so that the openstack commands work smoothly on the CLI.

This can be skipped if you are continuing straight from the Identity section.

[root@controller ~]# source admin-openrc.sh

 

 

Create the glance user, then add it to the service project with the admin role.

[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | a8628bbd61b14dbfa7b4129882da4751 |
| name      | glance                           |
+-----------+----------------------------------+

[root@controller ~]# openstack role add --project service --user glance admin
 

 

Create the glance service entity.

[root@controller ~]# openstack service create --name glance \
   --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | 9dd21e6f93a0419eafef90bc35db9a86 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

 

 

As with the Identity service, create the endpoints used for public, internal, and admin access.

[root@controller ~]# openstack endpoint create --region RegionOne \
   image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d54bdb0434294a8b867fdbad08dec3ae |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9dd21e6f93a0419eafef90bc35db9a86 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3163e3d3732a4ce2b05246a34ddaeb2e |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9dd21e6f93a0419eafef90bc35db9a86 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
   image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a6eab47cba534e3d84269ea111f8c5ea |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9dd21e6f93a0419eafef90bc35db9a86 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

 

 

Once the glance environment is prepared, install glance.

 

[root@controller ~]# yum install -y openstack-glance python-glance python-glanceclient
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com

Package 1:python-glanceclient-1.1.0-1.el7.noarch already installed and latest version
Resolving Dependencies
--> Running transaction check

...(output omitted)...

  suitesparse.x86_64 0:4.0.2-10.el7
  tbb.x86_64 0:4.1-9.20130314.el7

Complete!
[root@controller ~]#

 

Open /etc/glance/glance-api.conf and make the following changes.

In the [database] section, set the connection as shown below. To change the glance DB password, replace GLANCE_DBPASS.

Update the [keystone_authtoken] and [paste_deploy] sections so glance can reach the Identity service. To change the glance password, replace GLANCE_PASS.

In the [glance_store] section, set the filesystem location where image files are stored; /var/lib/glance/images/ is the default.

In [DEFAULT], set notification_driver to noop.

(Optional) In [DEFAULT], enable verbose to help with troubleshooting.

[DEFAULT]
...
notification_driver = noop

...
verbose = True 


 

[database]
...

connection=mysql://glance:GLANCE_DBPASS@controller/glance

 

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
...
flavor = keystone

 

[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

 

 

 

Open /etc/glance/glance-registry.conf and make the same changes as above, except for the [glance_store] section (a scripted sketch using openstack-config follows this excerpt).

 

[DEFAULT]
...
notification_driver = noop

 

[database]
...

connection=mysql://glance:GLANCE_DBPASS@controller/glance

 

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
...
flavor = keystone
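
If you want to script these edits rather than use vi, a minimal sketch with openstack-config (from openstack-utils; replace GLANCE_DBPASS and GLANCE_PASS with your real passwords) could look like this:

# apply the shared settings to both glance configuration files
for f in /etc/glance/glance-api.conf /etc/glance/glance-registry.conf; do
  openstack-config --set "$f" DEFAULT notification_driver noop
  openstack-config --set "$f" database connection mysql://glance:GLANCE_DBPASS@controller/glance
  openstack-config --set "$f" keystone_authtoken auth_uri http://controller:5000
  openstack-config --set "$f" keystone_authtoken auth_url http://controller:35357
  openstack-config --set "$f" keystone_authtoken auth_plugin password
  openstack-config --set "$f" keystone_authtoken project_domain_id default
  openstack-config --set "$f" keystone_authtoken user_domain_id default
  openstack-config --set "$f" keystone_authtoken project_name service
  openstack-config --set "$f" keystone_authtoken username glance
  openstack-config --set "$f" keystone_authtoken password GLANCE_PASS
  openstack-config --set "$f" paste_deploy flavor keystone
done
# the glance_store section only applies to glance-api.conf
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/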

 

Populate the glance database tables.

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"
[root@controller ~]#

 

Start the glance services and enable them to start at boot.

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

 

 

When the installation is finished, verify it.

First add the Image service API version to the environment scripts and apply them again.

 

[root@controller ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@controller ~]# source admin-openrc.sh

 

An OS image is needed to test that the Image service works.

Download the lightweight cirros image.

If wget is not installed, install it with yum, for example as shown below.
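
A minimal example (any CentOS 7 host with the standard repositories; curl -O followed by the image URL works just as well):

# only needed if wget is missing
yum install -y wget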

 

[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
--2016-02-27 22:53:50--  http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 69.163.241.114
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|69.163.241.114|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13287936 (13M) [text/plain]
Saving to: ‘cirros-0.3.4-x86_64-disk.img.1’

100%[================================================================>] 13,287,936  4.16MB/s   in 3.0s

2016-02-27 22:53:54 (4.16 MB/s) - ‘cirros-0.3.4-x86_64-disk.img.1’ saved [13287936/13287936]

 

 

Register the downloaded image with glance.

[root@controller ~]# glance image-create --name "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2016-03-01T22:09:23Z                 |
| disk_format      | qcow2                                |
| id               | 49338c63-033c-40a3-abdd-d6410799de24 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 94f9c25aaa4246b0915afacca2d65c22     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-03-01T22:09:23Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+

 

 

Confirm that the image was registered correctly.

[root@controller ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 49338c63-033c-40a3-abdd-d6410799de24 | cirros |
+--------------------------------------+--------+
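
To inspect the full metadata of a single image, you can also query it by ID (a sketch; the ID is the one shown above):

# prints the same properties that image-create returned
glance image-show 49338c63-033c-40a3-abdd-d6410799de24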

 

 

 

 

 
