OpenStack has long used the keystone service to manage authentication. As of the Kilo and Liberty releases, however,

the identity service runs under a separate web server via mod_wsgi.

Accordingly, for performance this guide also uses the Apache web server to handle requests, and Memcached instead of the SQL DB to store tokens.

Since httpd uses the same ports as keystone (5000 and 35357), the standalone keystone service is disabled.

The Identity service is also installed on the controller node; configure the environment before installing the service.

 

### Set the MariaDB root password. Enter a suitable password in place of 'db_password' ###

[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.44-MariaDB MariaDB Server

 

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

MariaDB [(none)]> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

MariaDB [mysql]> update user set password=password('db_password') where user='root';
Query OK, 4 rows affected (0.00 sec)
Rows matched: 4  Changed: 4  Warnings: 0

 

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

 

 

Create the keystone DB, then create a keystone account and grant it privileges, specifying the keystone password as you do so.

The official guide omits the grant for access from the 'controller' host, but the configuration files written during installation

actually connect to the DB through 'controller', so a privilege error occurs during installation. Allow access from 'controller' as well, as shown below.

 

MariaDB [mysql]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'
    ->   IDENTIFIED BY 'KEYSTONE_DBPASS';

Query OK, 0 rows affected (0.00 sec)

 

MariaDB [mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller'
    ->   IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

 

MariaDB [mysql]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'
    ->   IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)


MariaDB [mysql]> quit
Bye
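The three GRANT statements above can also be generated in a loop and piped into mysql. The following is a minimal sketch (KEYSTONE_DBPASS is the same placeholder password used above):

```shell
#!/bin/sh
# Emit one GRANT per host the keystone service may connect from.
# KEYSTONE_DBPASS is a placeholder; substitute the real password.
DB_PASS="KEYSTONE_DBPASS"
for host in localhost controller '%'; do
    printf "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%s' IDENTIFIED BY '%s';\n" \
        "$host" "$DB_PASS"
done
```

Piping the output into `mysql -u root -p` applies all three grants in one go.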

 

Install and configure the related packages.

[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check

...(snip)...

  python2-oslo-context.noarch 0:0.6.0-1.el7
  python2-passlib.noarch 0:1.6.5-1.el7
  saslwrapper.x86_64 0:0.16-5.el7

Complete!

 

[root@controller ~]#

### Start memcached and enable it at boot ###

[root@controller ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@controller ~]# systemctl start memcached.service

 

### To set the admin token, generate a random value with the openssl rand command, then edit /etc/keystone/keystone.conf ###

[root@controller ~]# openssl rand -hex 10
16d3b96cc5759b123842

[root@controller ~]# vi /etc/keystone/keystone.conf

 

### In the [DEFAULT] section, uncomment the admin_token and verbose entries and replace the ADMIN_TOKEN part with the random value generated above. ###

[DEFAULT]

...

#admin_token = ADMIN_TOKEN          

admin_token = 16d3b96cc5759b123842

 

 

### Edit the connection entry in the [database] section as below. If the keystone DB password was set differently, change KEYSTONE_DBPASS accordingly ###

[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

 

### Configure the Memcached service in the [memcache] section. ###


[memcache]
...
servers = localhost:11211

 

### In the [token] section, configure the UUID token provider and the Memcached driver ###


[token]
...
provider = uuid
driver = memcache

 

### In the [revoke] section, set the driver to sql. ###


[revoke]
...
driver = sql 

 

### (Optional) Enable verbose in the [DEFAULT] section to help with troubleshooting. ###

 

[DEFAULT]
...
verbose = True
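The edits above can also be applied non-interactively. Because the same key name appears in different sections (driver under [token] vs. [revoke]), a plain sed is risky; the sketch below uses a small section-aware awk helper instead, demonstrated on a scratch stand-in for /etc/keystone/keystone.conf (tools such as crudini do the same job):

```shell
#!/bin/sh
# Section-aware INI editing sketch: set KEY under SECTION, uncommenting
# the existing entry if present. Illustration only, on a scratch file.
set_ini() {   # set_ini FILE SECTION KEY VALUE
    awk -v sec="$2" -v key="$3" -v val="$4" '
        /^\[/ { cur = substr($0, 2, length($0) - 2) }      # track section
        cur == sec && $0 ~ "^#?" key " *=" { print key " = " val; next }
        { print }
    ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

CONF=/tmp/keystone.conf.sketch       # stand-in for /etc/keystone/keystone.conf
cat > "$CONF" <<'EOF'
[DEFAULT]
#admin_token = ADMIN_TOKEN
[database]
#connection = <None>
[token]
#driver = sql
[revoke]
#driver = sql
EOF

set_ini "$CONF" DEFAULT  admin_token 16d3b96cc5759b123842
set_ini "$CONF" database connection mysql://keystone:KEYSTONE_DBPASS@controller/keystone
set_ini "$CONF" token    driver memcache
set_ini "$CONF" revoke   driver sql
```

Note that only the [token] driver becomes memcache while the [revoke] driver stays sql, which a section-blind substitution would get wrong.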

 

 

Populate the keystone database tables.

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

 

Configure the Apache web server.

### Edit /etc/httpd/conf/httpd.conf as follows. ###

[root@controller ~]# vi /etc/httpd/conf/httpd.conf

....

ServerName controller

 

### Create /etc/httpd/conf.d/wsgi-keystone.conf with the following contents. ###

[root@controller ~]# vi /etc/httpd/conf.d/wsgi-keystone.conf

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

 

 

 

### Start httpd and enable it at boot ###

[root@controller ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@controller ~]# systemctl start httpd.service


 

 

The Identity service provides a catalog of services and their locations, and each service added to OpenStack

needs a service entity and several API endpoints in this catalog.

A fresh Identity installation has none of these configured, so the temporary authentication token is used to create the service entity and API endpoints.

 

[root@controller ~]# export OS_TOKEN=16d3b96cc5759b123842

### Use the ADMIN_TOKEN value set in /etc/keystone/keystone.conf above ###

[root@controller ~]# export OS_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3

 

 

The Identity service manages a catalog of the services in the OpenStack environment,

and services use this catalog to locate the other services in the environment.

First, create the Identity service entity.

[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 2133c5611bc845b7b90f06f6675d41df |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+

 

 

OpenStack uses three kinds of API endpoints for access between services: an admin URL for administrative access, an internal URL for internal access,

and a public URL for external access. Create each of them for the Identity service as well.

 

[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 37a64289ffee4d7f9cf7e9b774ab2b2b |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2133c5611bc845b7b90f06f6675d41df |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v2.0      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 95d4152f4c464b36ab887c737c02170c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2133c5611bc845b7b90f06f6675d41df |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v2.0      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fb68494248754d98b454fa5dbd193917 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2133c5611bc845b7b90f06f6675d41df |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:35357/v2.0     |
+--------------+----------------------------------+
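Since the three calls above differ only in the interface name and the port, they can be driven by one loop. The sketch below is a dry run that only prints the commands, so they can be inspected before piping the output to sh (with OS_TOKEN and OS_URL exported as above):

```shell
#!/bin/sh
# Dry run: print one "openstack endpoint create" per interface.
# The admin endpoint uses port 35357; public and internal use 5000.
for iface in public internal admin; do
    case $iface in
        admin) port=35357 ;;
        *)     port=5000  ;;
    esac
    echo "openstack endpoint create --region RegionOne identity $iface http://controller:$port/v2.0"
done
```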

 

Next, create users, projects (tenants), and roles.

Starting with the admin user, first create the project (tenant).

[root@controller ~]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 94f9c25aaa4246b0915afacca2d65c22 |
| is_domain   | False                            |
| name        | admin                            |
| parent_id   | None                             |
+-------------+----------------------------------+

 

Next, create the user.

[root@controller ~]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 2238ec4daed3436b8cc97491518bd6cf |
| name      | admin                            |
+-----------+----------------------------------+

 

Create the admin role, then add the admin user to it.

[root@controller ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | e157ef2ab547476f985d26cf90e536d4 |
| name  | admin                            |
+-------+----------------------------------+
[root@controller ~]# openstack role add --project admin --user admin admin

 

Create a service project to hold the OpenStack service accounts.

[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 6d1588e23bc244d397d9617a6cbacbcb |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | None                             |
+-------------+----------------------------------+

 

Create a demo project for regular, non-administrative users.

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 52b4355c27004f06b2a6c2e9d46da50c |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | None                             |
+-------------+----------------------------------+

 

Create the demo user.

[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | d16b2f206ce848eca21473f1595f3adf |
| name      | demo                             |
+-----------+----------------------------------+

 

Create a regular user role and add the demo user to it.

[root@controller ~]# openstack role create user
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 2fc4fad2d097472987b977d57434e5cb |
| name  | user                             |
+-------+----------------------------------+
[root@controller ~]# openstack role add --project demo --user demo user

 

Once the projects, roles, and users have been created, before verifying them,

the temporary authentication token used above should be retired for security reasons.

Unset the environment variables, then edit /usr/share/keystone/keystone-dist-paste.ini and

remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

[root@controller ~]# unset OS_TOKEN OS_URL

[root@controller ~]# vi /usr/share/keystone/keystone-dist-paste.ini

... 

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension s3_extension crud_extension admin_service

[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension_v3 s3_extension simple_cert_extension revoke_extension federation_extension oauth1_extension endpoint_filter_extension service_v3
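Instead of editing the three pipeline lines by hand, a single sed pass can strip the filter. The sketch below works on a scratch copy with one sample pipeline; on the real node, point it at /usr/share/keystone/keystone-dist-paste.ini:

```shell
#!/bin/sh
# Remove the admin_token_auth filter from every pipeline line.
INI=/tmp/keystone-dist-paste.ini.sketch   # scratch copy for the example
cat > "$INI" <<'EOF'
[pipeline:public_api]
pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service
EOF

# Delete " admin_token_auth" (with its leading space) wherever it appears.
sed -i 's/ admin_token_auth//g' "$INI"
```

The leading space in the pattern keeps the regular token_auth filter intact.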

 

Request authentication tokens as the admin and demo users.

[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
   --os-project-domain-id default --os-user-domain-id default \
   --os-project-name admin --os-username admin --os-auth-type password \
   token issue
Password:

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-27T13:01:50.341878Z      |
| id         | cca2a4322d99489e94a49d188e62085e |
| project_id | 94f9c25aaa4246b0915afacca2d65c22 |
| user_id    | 2238ec4daed3436b8cc97491518bd6cf |
+------------+----------------------------------+

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
   --os-project-domain-id default --os-user-domain-id default \
   --os-project-name demo --os-username demo --os-auth-type password \
   token issue
Password:
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-27T13:04:50.157628Z      |
| id         | 2695f0b63e50423dae7909a7e66f41ac |
| project_id | 52b4355c27004f06b2a6c2e9d46da50c |
| user_id    | d16b2f206ce848eca21473f1595f3adf |
+------------+----------------------------------+

 

Because the temporary admin_token_auth mechanism has now been removed, using OpenStack

requires passing the admin (or other) user's credentials as parameters, as above (omitting them produces an error).

It is therefore more convenient to create scripts that load the credentials into environment variables.

Create one script for the admin user and one for the demo user as follows. Change the OS_PASSWORD entries to the passwords used when the accounts were created!

[root@controller ~]# vi admin-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

 

[root@controller ~]# vi demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
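The two scripts differ only in the user, project, password, and auth port, so a small generator keeps them consistent. This is a sketch: it writes to paths under /tmp for illustration, and the passwords are the same placeholders used above:

```shell
#!/bin/sh
# Generate an openrc credentials script for a given user.
make_openrc() {   # make_openrc FILE USER PROJECT PASSWORD AUTH_PORT
    cat > "$1" <<EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=$3
export OS_TENANT_NAME=$3
export OS_USERNAME=$2
export OS_PASSWORD=$4
export OS_AUTH_URL=http://controller:$5/v3
export OS_IDENTITY_API_VERSION=3
EOF
}

make_openrc /tmp/admin-openrc.sh admin admin ADMIN_PASS 35357
make_openrc /tmp/demo-openrc.sh  demo  demo  DEMO_PASS  5000
```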

 

After creating the scripts, the parameters can be omitted by running source admin-openrc.sh.

Below is the output of the same commands before and after running the script.

 

[root@controller ~]# openstack token issue
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

[root@controller ~]# openstack user list
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-27T13:21:10.984010Z      |
| id         | 96031445b5554e58ae6f1409a1b80388 |
| project_id | 94f9c25aaa4246b0915afacca2d65c22 |
| user_id    | 2238ec4daed3436b8cc97491518bd6cf |
+------------+----------------------------------+
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 2238ec4daed3436b8cc97491518bd6cf | admin |
| d16b2f206ce848eca21473f1595f3adf | demo  |
+----------------------------------+-------+


 


 

Most OpenStack services use a SQL database to store service information.

Following the official guide, MariaDB is installed here, but other SQL databases such as PostgreSQL are also supported.

The database is installed only on the controller node.

 

[root@controller ~]# yum install -y mariadb mariadb-server MySQL-python

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package MySQL-python.x86_64 0:1.2.3-11.el7 will be installed
---> Package mariadb.x86_64 1:5.5.44-2.el7.centos will be installed
...(snip)...


Complete!
[root@controller ~]# vi /etc/my.cnf.d/mariadb_openstack.cnf

### Create /etc/my.cnf.d/mariadb_openstack.cnf with the contents below.

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

 

### Start MariaDB and enable it at boot ###

[root@controller my.cnf.d]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@controller my.cnf.d]# systemctl start mariadb.service

 

 

The Telemetry service uses a NoSQL database to store its information and is likewise installed only on the controller node. Following the guide, MongoDB is installed.

 

[root@controller ~]# yum install -y mongodb-server mongodb
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
...(snip)...
  libunwind.x86_64 2:1.1-5.el7_2.2
  v8.x86_64 1:3.14.5.10-14.el7
  yaml-cpp.x86_64 0:0.5.1-6.el7

Complete!

[root@controller ~]#

 

### In /etc/mongod.conf, set bind_ip to the controller IP and uncomment the smallfiles entry, as below. ###

[root@controller ~]# vi /etc/mongod.conf

 

bind_ip = 10.0.0.11

 

smallfiles = true


 

### Start mongod and enable it at boot ###
[root@controller ~]# systemctl enable mongod.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
[root@controller ~]# systemctl start mongod.service

 

OpenStack uses a message queue to coordinate operations and exchange status information between services. OpenStack supports several message queues, including RabbitMQ, ZeroMQ, and Qpid;

following the guide, RabbitMQ is installed here. The message queue is likewise installed only on the controller node.

[root@controller ~]# yum install -y rabbitmq-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package rabbitmq-server.noarch 0:3.3.5-6.el7 will be installed

...(snip)...

  lksctp-tools.x86_64 0:1.0.13-3.el7

Complete!

[root@controller ~]#

 

### Start RabbitMQ and enable it at boot ###
[root@controller ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@controller ~]# systemctl start rabbitmq-server.service

 

### Add the openstack user and set its permissions - change RABBIT_PASS if you use a different password. ###

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
...done.
[root@controller ~]#

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
[root@controller ~]#

Once NTP is configured, begin installing the OpenStack packages in earnest.

Perform the steps below on all nodes.

[root@controller ~]# yum install -y centos-release-openstack-liberty

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.neowiz.com
 * extras: ftp.neowiz.com
 * updates: ftp.neowiz.com
Resolving Dependencies
--> Running transaction check
---> Package centos-release-openstack-liberty.noarch 0:1-4.el7 will be installed
--> Finished Dependency Resolution

...(snip)...


  Installing : centos-release-openstack-liberty-1-4.el7.noarch                                         1/1
  Verifying  : centos-release-openstack-liberty-1-4.el7.noarch                                         1/1

Installed:
  centos-release-openstack-liberty.noarch 0:1-4.el7

Complete!

 

[root@controller ~]# yum install -y https://rdoproject.org/repos/openstack-liberty/rdo-release-liberty.rpm

...(snip)...

Complete!

 

### Then run an upgrade; if the upgrade includes a new kernel, reboot afterwards. ###
[root@controller ~]# yum -y upgrade
...(snip)...

Complete!
[root@controller ~]# reboot

 

### Install the OpenStack client ###

[root@controller ~]# yum install -y python-openstackclient

 

### Install openstack-selinux, which automatically manages the security policy for OpenStack services on top of the SELinux shipped with CentOS ###

[root@controller ~]# yum install -y openstack-selinux

 

### Perform the same steps on the other nodes ###

 

 

 

 

Once the nodes are reachable, do some preparation before the actual installation.

First configure the hostnames and the IP entries in /etc/hosts.

The management-network IPs should normally be configured as well, but in this guide they are assigned by DHCP from the host system, so that step is skipped

and only the public network is configured. As the architecture diagram shows, the public network involves only the controller and compute nodes.

In addition, to keep configuration simple, stop the firewall on every node and remove it from the boot-time services.
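The hostname, firewall, and SELinux steps repeat verbatim on every node, so they can be collected into one function. This is only a sketch: the systemd calls are commented out because they need a live node, and the SELinux edit is demonstrated on a scratch copy of the config (the real file is /etc/selinux/config):

```shell
#!/bin/sh
# Common per-node preparation, sketched as one reusable function.
prep_node() {   # prep_node HOSTNAME SELINUX_CONFIG
    # hostnamectl set-hostname "$1"     # real node only
    # systemctl stop firewalld          # real node only
    # systemctl disable firewalld       # real node only
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$2"
}

CFG=/tmp/selinux-config.sketch          # stand-in for /etc/selinux/config
echo 'SELINUX=enforcing' > "$CFG"
prep_node controller "$CFG"
```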

 

 

### controller node ###

[root@localhost ~]# ssh 10.0.0.11
Last login: *** *** ** **:**:** 2016

[root@localhost ~]# hostnamectl set-hostname controller

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

 

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@localhost ~]# exit

logout
Connection to 10.0.0.11 closed.

 

### compute node ###

[root@localhost ~]# ssh 10.0.0.31
Last login: *** *** ** **:**:** 2016
[root@localhost ~]# hostnamectl set-hostname compute

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@localhost ~]# exit

logout
Connection to 10.0.0.31 closed.

 

### block1 node ###

[root@localhost ~]# ssh 10.0.0.41
Last login: *** *** ** **:**:** 2016
[root@localhost ~]# hostnamectl set-hostname block1

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@localhost ~]# exit

logout
Connection to 10.0.0.41 closed.

 

### object1 node ###

[root@localhost ~]# ssh 10.0.0.51
Last login: *** *** ** **:**:** 2016
[root@localhost ~]# hostnamectl set-hostname object1

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@localhost ~]# exit

logout
Connection to 10.0.0.51 closed.

 

### object2 node ###

[root@localhost ~]# ssh 10.0.0.52
Last login: *** *** ** **:**:** 2016
[root@localhost ~]# hostnamectl set-hostname object2

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@localhost ~]# exit

logout
Connection to 10.0.0.52 closed.

 

 

[root@localhost ~]# vi /etc/hosts

### Add the lines below, then save ###

10.0.0.11       controller
10.0.0.31       compute
10.0.0.41       block1
10.0.0.51       object1
10.0.0.52       object2
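Appending the entries can be made idempotent, so re-running the setup does not duplicate lines. A sketch (it edits a scratch file here rather than /etc/hosts):

```shell
#!/bin/sh
# Add each "IP name" pair to the hosts file only if the name is absent.
HOSTS=/tmp/hosts.sketch                  # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n' > "$HOSTS"

while read -r ip name; do
    grep -qw "$name" "$HOSTS" || printf '%s\t%s\n' "$ip" "$name" >> "$HOSTS"
done <<'EOF'
10.0.0.11 controller
10.0.0.31 compute
10.0.0.41 block1
10.0.0.51 object1
10.0.0.52 object2
EOF
```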

 

[root@localhost ~]# scp /etc/hosts controller:/etc/hosts
The authenticity of host 'controller (10.0.0.11)' can't be established.
ECDSA key fingerprint is **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:.

Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller' (ECDSA) to the list of known hosts.
hosts                                         100%  252     0.3KB/s   00:00
[root@localhost ~]# scp /etc/hosts compute:/etc/hosts
The authenticity of host 'compute (10.0.0.31)' can't be established.
ECDSA key fingerprint is **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:.

Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'compute,10.0.0.31' (ECDSA) to the list of known hosts.
hosts                                         100%  252     0.3KB/s   00:00
[root@localhost ~]# scp /etc/hosts block1:/etc/hosts
The authenticity of host 'block1 (10.0.0.41)' can't be established.
ECDSA key fingerprint is **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'block1,10.0.0.41' (ECDSA) to the list of known hosts.
hosts                                         100%  252     0.3KB/s   00:00
[root@localhost ~]# scp /etc/hosts object1:/etc/hosts
The authenticity of host 'object1 (10.0.0.51)' can't be established.
ECDSA key fingerprint is **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:.

Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object1,10.0.0.51' (ECDSA) to the list of known hosts.
hosts                                         100%  252     0.3KB/s   00:00
[root@localhost ~]# scp /etc/hosts object2:/etc/hosts
The authenticity of host 'object2 (10.0.0.52)' can't be established.
ECDSA key fingerprint is **:**:**:**:**:**:**:**:**:**:**:**:**:**:**:**:.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'object2,10.0.0.52' (ECDSA) to the list of known hosts.
hosts                                         100%  252     0.3KB/s   00:00

 

 

 

 

NTP configuration

- Start with the controller node

[root@controller ~]# yum install chrony -y
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
base                                                     | 3.6 kB     00:00
extras                                                   | 3.4 kB     00:00
updates                                                  | 3.4 kB     00:00
(1/2): extras/7/x86_64/primary_db                          | 101 kB   00:00
(2/2): updates/7/x86_64/primary_db                         | 3.1 MB   00:00

...(snip)...

Installed:
  chrony.x86_64 0:2.1.1-1.el7.centos

Complete!
[root@controller ~]# vi /etc/chrony.conf

### Add the following lines ###

server 10.0.0.1 iburst              # Set this to the IP of the upstream NTP server; comment out or delete the existing "server 1.centos.pool.ntp.org iburst" lines

 

allow 10.0.0.0/24                   # Add this line to allow the other nodes to connect
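The two edits above can be scripted: comment out the stock pool servers and append the local server and allow lines. A sketch on a scratch copy (the real file is /etc/chrony.conf):

```shell
#!/bin/sh
# Point chrony at the local NTP source instead of the CentOS pool.
CONF=/tmp/chrony.conf.sketch             # stand-in for /etc/chrony.conf
cat > "$CONF" <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
EOF

# Comment out every stock pool server line, then append the local config.
sed -i 's/^server .*pool\.ntp\.org/#&/' "$CONF"
{
    echo 'server 10.0.0.1 iburst'
    echo 'allow 10.0.0.0/24'
} >> "$CONF"
```

On the other nodes the appended line would be `server controller iburst` with no allow directive.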

 

### Register and enable the service ###

[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service

### Check that time sync works ###

[root@controller ~]# chronyc sources                 
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 10.0.0.1                      0   8     0   10y     +0ns[   +0ns] +/-    0ns

 

- The remaining nodes (same steps on each)

[root@compute ~]# yum install chrony -y
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast

...(snip)...

Installed:
  chrony.x86_64 0:2.1.1-1.el7.centos

Complete!
[root@compute ~]# vi /etc/chrony.conf

### Add the following lines ###

server controller iburst           # Sync time with the controller node; comment out or delete the existing "server 1.centos.pool.ntp.org iburst" lines

 

### Register and enable the service ###

[root@compute ~]# systemctl enable chronyd.service
[root@compute ~]# systemctl start chronyd.service

### Check that time sync works ###

[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller                    0   8     0   10y     +0ns[   +0ns] +/-    0ns 


OpenStack Liberty release

 

Hardware requirements and network layout

 

 

 

Network diagram

 

The actual system built on this layout

Physical server specification

CPU - Intel Xeon E5-2603 v3, 6 cores

RAM - 32GB DDR4

DISK - 1TB SATA3 disk

          120GB SSD * 2ea (RAID 0, where the OS is stored)

OS - CentOS 7.2

 

First, download the CentOS 7 image to install the virtual machines.

 

[root@localhost home]# wget http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
--2016-01-29 23:47:34--  http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
Resolving ftp.daumkakao.com (ftp.daumkakao.com)... 103.246.57.108
Connecting to ftp.daumkakao.com (ftp.daumkakao.com)|103.246.57.108|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4329570304 (4.0G) [application/x-iso9660-image]
Saving to: ‘CentOS-7-x86_64-DVD-1511.iso’

94% [====================================================================================================================>        ] 4,073,544,588 9.65MB/s   in 15m 0s

2016-01-30 00:02:39 (4.32 MB/s) - Connection closed at byte 4073544588. Retrying.

--2016-01-30 00:02:40--  (try: 2)  http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso
Connecting to ftp.daumkakao.com (ftp.daumkakao.com)|103.246.57.108|:80... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 4329570304 (4.0G), 256025716 (244M) remaining [application/x-iso9660-image]
Saving to: ‘CentOS-7-x86_64-DVD-1511.iso’

100%[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++=======>] 4,329,570,304 5.29MB/s   in 33s

2016-01-30 00:03:13 (7.31 MB/s) - ‘CentOS-7-x86_64-DVD-1511.iso’ saved [4329570304/4329570304]
[root@localhost home]#

 

First, to create the management network, create the following file with vi and define it

[root@localhost~]# cd /etc/libvirt/qemu/networks

[root@localhost networks]# vi managenetwork.xml

<network>
  <name>managenetwork</name>
  <forward dev='br0' mode='nat'>         
    <interface dev='br0'/>                        // enter the bridge interface that is connected to the Internet
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>      // if virbr0 does not already exist, it is fine to name this virbr0
  <domain name='managenetwork'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>

  </ip>
</network>

[root@localhost networks]# virsh net-define /etc/libvirt/qemu/networks/managenetwork.xml
Network managenetwork defined from /etc/libvirt/qemu/networks/managenetwork.xml

[root@localhost networks]#

[root@localhost networks]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 managenetwork        inactive   no            yes
 storagenetwork       active     yes           yes

[root@localhost networks]# virsh net-start managenetwork
Network managenetwork started

[root@localhost networks]# virsh net-autostart managenetwork
Network managenetwork marked as autostarted

[root@localhost networks]# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 managenetwork        active     yes           yes
 storagenetwork       active     yes           yes

[root@localhost networks]#

 

After creating the network, temporarily install a VM from the Minimal image (steps omitted; everything is default except that two network devices are attached: eth0 to managenetwork, eth1 to br0).

To edit the OS that will serve as the base image for the VMs, check the VM's MAC addresses so DHCP can assign it an IP.

 

[root@localhost~]# virsh dumpxml centos7.0 | grep 'mac address'
      <mac address='52:54:00:4a:8d:f2'/>   // eth0, attached to the managenetwork NAT network
      <mac address='52:54:00:67:ff:88'/>    // eth1, attached to the host's br0

 

After checking the MAC addresses, edit managenetwork => add the MAC addresses and the IPs to assign under the DHCP section

[root@localhost ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 managenetwork        active     yes           yes     

 

[root@localhost ~]# virsh net-edit managenetwork

...(omitted)...

    <dhcp>
      <range start='10.0.0.2' end='10.0.0.254'/>
      <host mac='52:54:00:4a:8d:f2' ip='10.0.0.5'/>

      <host mac='52:54:00:4a:8d:11' ip='10.0.0.11'/>     // when creating each VM, set its MAC to one of these so DHCP assigns the fixed IP
      <host mac='52:54:00:4a:8d:31' ip='10.0.0.31'/>
      <host mac='52:54:00:4a:8d:41' ip='10.0.0.41'/>
      <host mac='52:54:00:4a:8d:51' ip='10.0.0.51'/>
      <host mac='52:54:00:4a:8d:52' ip='10.0.0.52'/>


    </dhcp>
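Instead of editing the network XML by hand with `virsh net-edit`, the same reservations can be added with `virsh net-update` (supported by reasonably recent libvirt). The sketch below is a dry run that only prints the commands it would execute; remove the leading `echo` to apply them for real. The MAC/IP pairs are the ones used above.

```shell
# Print the virsh net-update commands that would add each DHCP
# reservation to both the live and persistent config of managenetwork.
# Dry run: remove the leading 'echo' to execute (requires libvirt).
print_dhcp_updates() {
    net=managenetwork
    while read -r mac ip; do
        echo virsh net-update "$net" add ip-dhcp-host \
            "<host mac='$mac' ip='$ip'/>" --live --config
    done <<'EOF'
52:54:00:4a:8d:11 10.0.0.11
52:54:00:4a:8d:31 10.0.0.31
52:54:00:4a:8d:41 10.0.0.41
52:54:00:4a:8d:51 10.0.0.51
52:54:00:4a:8d:52 10.0.0.52
EOF
}
print_dhcp_updates
```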

 

Shut down the VM and restart the network, then start the VM again and check its connectivity

 

[root@localhost ~]# virsh shutdown centos7.0
Domain centos7.0 is being shutdown

 

[root@localhost ~]# virsh net-destroy managenetwork
Network managenetwork destroyed

 

[root@localhost ~]# virsh net-start managenetwork
Network managenetwork started

 

[root@localhost ~]# virsh start centos7.0
Domain centos7.0 started

 

[root@localhost ~]# virsh domifaddr centos7.0
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:4a:8d:f2    ipv4         10.0.0.5/24

[root@localhost ~]#

 

Access the VM that holds the base image and edit the base image.

[root@localhost ~]# ssh 10.0.0.5                                      
The authenticity of host '10.0.0.5 (10.0.0.5)' can't be established.
ECDSA key fingerprint is 99:e1:15:2b:05:9c:89:6b:1a:63:1d:e6:0e:7a:09:6e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.5' (ECDSA) to the list of known hosts.
root@10.0.0.5's password:
Last login: Sat Jan 30 02:27:11 2016 from 10.0.0.1

 

[root@localhost ~]# yum -y update

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.kaist.ac.kr
 * extras: ftp.kaist.ac.kr
 * updates: ftp.kaist.ac.kr
...(output omitted)...

 

To make it easy for the VMs that share this image to access each other,

generate an ssh key with ssh-keygen, copy the public key to authorized_keys, then shut the system down.

 

[root@localhost ~]#

[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx root@localhost.managenetwork
The key's randomart image is:
+--[ RSA 2048]----+
|        .. .  .o.|
|         .o   +o+|
|          ..  .Bo|
|         ..  .+oo|
|        S  ..  .+|
|            .. oE|
|           .  +  |
|          . .. . |
|           . .o..|
+-----------------+
[root@localhost ~]# cd .ssh

[root@localhost ~]# cp id_rsa.pub authorized_keys

[root@localhost ~]# shutdown -h now
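The `cp` above works because authorized_keys does not exist yet; appending with `cat` is the safer general form since it preserves any keys already present, and sshd also expects tight permissions (700 on `.ssh`, 600 on the file). A small sketch of that step (the directory argument is a parameter only so the function can be tested outside `~/.ssh`):

```shell
# Append the public key to authorized_keys and tighten permissions
# (sshd refuses keys in group/world-readable files).
install_self_key() {
    dir=$1                                  # normally ~/.ssh
    cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
    chmod 700 "$dir" && chmod 600 "$dir/authorized_keys"
}

# usage: install_self_key ~/.ssh
```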


 Create the disk images the virtual machines will use as backing files on top of the original image

 

[root@localhost ~]# virsh domblklist centos7.0
Target     Source
------------------------------------------------
vda        /mnt/osvg/vol/ospool/linux.qcow2

 

[root@localhost ~]# cd /mnt/osvg/vol/ospool

[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./controller.qcow2
Formatting './controller.qcow2', fmt=qcow2 size=107374182400 backing_file='./linuxbase.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./compute.qcow2
Formatting './compute.qcow2', fmt=qcow2 size=107374182400 backing_file='./linuxbase.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./block1.qcow2
Formatting './block1.qcow2', fmt=qcow2 size=107374182400 backing_file='./linuxbase.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./object1.qcow2
Formatting './object1.qcow2', fmt=qcow2 size=107374182400 backing_file='./linuxbase.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@localhost ospool]# qemu-img create -b ./linux.qcow2 -f qcow2 ./object2.qcow2
Formatting './object2.qcow2', fmt=qcow2 size=107374182400 backing_file='./linuxbase.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
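The five qemu-img invocations differ only in the target name, so they collapse into a loop. A dry-run sketch (it prints the commands; remove the `echo` and run it inside /mnt/osvg/vol/ospool to actually create the images):

```shell
# One qcow2 backing-file image per node from the shared base image.
# Dry run: remove the leading 'echo' to create the files for real.
make_images() {
    base=./linux.qcow2
    for node in controller compute block1 object1 object2; do
        echo qemu-img create -b "$base" -f qcow2 "./$node.qcow2"
    done
}
make_images
```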

 

[root@localhost ospool]# chown qemu:qemu *.qcow2
[root@localhost ospool]# ls -alrt
total xxxxxxx

drwxr-xr-x 4 nfsnobody root         4096 Jan 23 20:16 ..
-rw-r--r-- 1 qemu      qemu   3673030656 Jan 25 07:30 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu   4676124672 Jan 25 07:30 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu   3686268928 Jan 28 10:21 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu   1436483584 Jan 30 14:57 linux.qcow2
-rw-r--r-- 1 qemu      qemu 107390894592 Feb  1 05:19 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu   4346937344 Feb 10 05:44 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu   4289134592 Feb 16 09:27 xxxxx.qcow2
-rw-r--r-- 1 qemu      qemu       198656 Feb 17 05:37 controller.qcow2
-rw-r--r-- 1 qemu      qemu       198656 Feb 17 05:37 compute.qcow2
-rw-r--r-- 1 qemu      qemu       198656 Feb 17 05:37 block1.qcow2
-rw-r--r-- 1 qemu      qemu       198656 Feb 17 05:37 object1.qcow2
drwxr-xr-x 2 root      root         4096 Feb 17 05:38 .
-rw-r--r-- 1 qemu      qemu       198656 Feb 17 05:38 object2.qcow2

 

 

To create the virtual machines, dump the original VM's XML definition to a file, then copy and edit it for each node.

The VM UUIDs and MAC addresses must not collide between machines, so delete them or adjust them appropriately.

 

[root@localhost ospool]# cd /etc/libvirt/qemu/ 

[root@localhost qemu]# virsh dumpxml generic >> /etc/libvirt/qemu/dump.xml

[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/controller.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/compute.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/block1.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/object1.xml
[root@localhost qemu]# cp /etc/libvirt/qemu/dump.xml /etc/libvirt/qemu/object2.xml

[root@localhost qemu]# vim /etc/libvirt/qemu/controller.xml

<domain type='kvm'>
  <name>controller</name>                            // change the name

  <uuid>5e26e120-f046-4c8a-bf56-dbd4b9ba8f3f</uuid>  // delete this line in the real config; UUIDs must not collide, and one is generated automatically at define time
  <memory unit='GiB'>4</memory>                      // adjust memory as needed; changing the unit to GiB first makes editing easier
  <currentMemory unit='GiB'>4</currentMemory>        // planned here: controller 4G, compute 8G, block1 2G, each object node 4G
  <vcpu placement='static'>2</vcpu>                  // planned vCPUs: 2, 6, 2, 2, 2 respectively
  ...(omitted)...

  <cpu mode='host-passthrough'>                      // for nested virtualization, set the cpu mode to host-passthrough or host-model
  ...(omitted)...
   <disk type='file' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source file='/mnt/osvg/vol/ospool/controller.qcow2'/>   // the disk image created earlier
     <target dev='vda' bus='virtio'/>
     <boot order='1'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
   </disk>

...(omitted)...

   <interface type='network'>                          // corresponds to eth0
     <mac address='52:54:00:4a:8d:11'/>                // the MAC address given a DHCP reservation in the management network above
     <source network='managenetwork'/>
     <model type='virtio'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>
   <interface type='bridge'>                           // corresponds to eth1; delete this device on the block1, object1 and object2 nodes
     <mac address='52:54:00:a6:f8:42'/>                // MAC addresses must not collide, so change this appropriately (or delete the line)
     <source bridge='br0'/>
     <model type='virtio'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>

   </interface>

 

Configure compute, block1, object1 and object2 the same way, then define and start each of them.
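The per-node edits to dump.xml follow a fixed pattern, so they can be scripted instead of done by hand in vim. A sketch using sed, assuming the dump contains the tags shown above; the `<uuid>` and `<mac address>` lines are simply dropped so libvirt regenerates fresh values at define time (memory/vcpu tuning is still manual):

```shell
# Derive a per-node domain XML from a dump read on stdin:
# set <name>, point the disk at the node's qcow2, and drop the
# <uuid> and <mac address> lines so libvirt regenerates them.
make_domain_xml() {
    node=$1
    sed -e "s|<name>.*</name>|<name>$node</name>|" \
        -e "s|ospool/[^']*\.qcow2|ospool/$node.qcow2|" \
        -e '/<uuid>/d' \
        -e '/<mac address=/d'
}

# usage: make_domain_xml compute < dump.xml > compute.xml
```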

 

[root@localhost qemu]# virsh define /etc/libvirt/qemu/controller.xml
Domain controller defined from /etc/libvirt/qemu/controller.xml

 

[root@localhost qemu]# virsh define /etc/libvirt/qemu/compute.xml
Domain compute defined from /etc/libvirt/qemu/compute.xml

 

[root@localhost qemu]# virsh define /etc/libvirt/qemu/block1.xml
Domain block1 defined from /etc/libvirt/qemu/block1.xml

 

[root@localhost qemu]# virsh define /etc/libvirt/qemu/object1.xml
Domain object1 defined from /etc/libvirt/qemu/object1.xml

 

[root@localhost qemu]# virsh define /etc/libvirt/qemu/object2.xml
Domain object2 defined from /etc/libvirt/qemu/object2.xml

 

[root@localhost qemu]# virsh list --all
 Id    Name                           State
----------------------------------------------------

 -     centos7.0                        shut off

 -     compute                        shut off
 -     controller                     shut off
 -     juho                           shut off
 -     object1                        shut off
 -     object2                        shut off

 

[root@localhost qemu]# virsh start controller
Domain controller started

 

[root@localhost qemu]# virsh start compute
Domain compute started

 

[root@localhost qemu]# virsh start block1
Domain block1 started

 

[root@localhost qemu]# virsh start object1
Domain object1 started

 

[root@localhost qemu]# virsh start object2
Domain object2 started

 

[root@localhost qemu]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 4     controller                     running
 5     compute                        running
 6     block1                         running
 7     object1                        running
 8     object2                        running

 -     centos7.0                        shut off

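The define/start sequence above is the same pair of commands repeated for each guest; as a loop it is one block. Shown as a dry run that only prints the commands (remove the leading `echo` on a host with libvirt to execute them):

```shell
# Define and start all five guests from their XML files.
# Dry run: remove the leading 'echo' to execute (requires libvirt).
define_and_start() {
    for vm in controller compute block1 object1 object2; do
        echo virsh define "/etc/libvirt/qemu/$vm.xml"
        echo virsh start "$vm"
    done
}
define_and_start
```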
 

[root@localhost qemu]# ping 10.0.0.11
PING 10.0.0.11 (10.0.0.11) 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.193 ms
64 bytes from 10.0.0.11: icmp_seq=2 ttl=64 time=0.168 ms
^C
--- 10.0.0.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.168/0.180/0.193/0.018 ms
[root@localhost ospool]# ping 10.0.0.31
PING 10.0.0.31 (10.0.0.31) 56(84) bytes of data.
64 bytes from 10.0.0.31: icmp_seq=1 ttl=64 time=0.185 ms
^C
--- 10.0.0.31 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
[root@localhost ospool]# ping 10.0.0.41
PING 10.0.0.41 (10.0.0.41) 56(84) bytes of data.
64 bytes from 10.0.0.41: icmp_seq=1 ttl=64 time=0.164 ms
^C
--- 10.0.0.41 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
[root@localhost ospool]# ping 10.0.0.51
PING 10.0.0.51 (10.0.0.51) 56(84) bytes of data.
64 bytes from 10.0.0.51: icmp_seq=1 ttl=64 time=0.176 ms
^C
--- 10.0.0.51 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
[root@localhost ospool]# ping 10.0.0.52
PING 10.0.0.52 (10.0.0.52) 56(84) bytes of data.
64 bytes from 10.0.0.52: icmp_seq=1 ttl=64 time=0.177 ms
64 bytes from 10.0.0.52: icmp_seq=2 ttl=64 time=0.183 ms
^C
--- 10.0.0.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.177/0.180/0.183/0.003 ms
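The five manual pings can be replaced with a single pass over all management IPs. A sketch (`ping -c 1 -W 1` sends one probe with a 1-second timeout; on a host where the VMs are not running, every node simply reports "down"):

```shell
# Probe every management IP once and report reachability.
check_nodes() {
    for ip in 10.0.0.11 10.0.0.31 10.0.0.41 10.0.0.51 10.0.0.52; do
        if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
            echo "$ip up"
        else
            echo "$ip down"
        fi
    done
}
check_nodes
```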

 

 

 

 

 

https://kashyapc.fedorapeople.org/virt/lc-2012/snapshots-handout.html


 

 

 

 



 OpenStack

Service | Project | Description

Dashboard | Horizon | Provides a web-based self-service portal to interact with the underlying OpenStack services, such as launching an instance, assigning IP addresses, and configuring access control.

Compute | Nova | Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling, and decommissioning of virtual machines on demand.

Networking | Neutron | Enables network connectivity as a service for other OpenStack services, such as Compute. Provides an API for users to define networks and the attachments into them. Has a pluggable architecture that supports many popular networking vendors and technologies.

Storage

Object Storage | Swift | Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant thanks to its data replication and scale-out architecture. Its implementation is not like a file server with mountable directories.

Block Storage | Cinder | Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Shared services

Identity service | Keystone | Provides an authentication and authorization service for other OpenStack services. Provides a catalog of endpoints for all OpenStack services.

Image service | Glance | Stores and retrieves virtual machine disk images. OpenStack Compute makes use of this during instance provisioning.

Telemetry | Ceilometer | Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

Higher-level services

Orchestration | Heat | Orchestrates multiple composite cloud applications by using either the native HOT template format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

Database service | Trove | Provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.

Conceptual architecture


  • OpenStack three-node architecture with networking (neutron)


Minimum configuration

  • Controller Node: 1 processor, 2 GB memory, and 5 GB storage

  • Network Node: 1 processor, 512 MB memory, and 5 GB storage

  • Compute Node: 1 processor, 2 GB memory, and 10 GB storage


