
OpenStack Grizzly Install Guide - VLAN Mode (3 Nodes)

October 13, 2013

This is my second time deploying Grizzly on physical machines; I'm writing the steps down so I don't forget them!

Controller Node: iDataPlex M2  | eth0: 10.0.1.100 | eth1: 9.186.91.128 | eth2: x

Network Node: System x3550     | eth0: 10.0.1.101 | eth1: 10.20.20.52  | eth2: 9.186.91.130

Compute Node: System x3950     | eth0: 10.0.1.111 | eth1: 10.20.20.53  | eth2: x

OS: Ubuntu 12.04 server

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

1.Controller Node

1.1.Preparing Ubuntu

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update system:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Note: reboot the OS at this point, since 'dist-upgrade' may have installed a new kernel!

1.2. Networking

Edit /etc/network/interfaces:

#For OpenStack management
auto eth0
iface eth0 inet static
address 10.0.1.100
netmask 255.255.255.0

#For exposing the OpenStack API over the internet
auto eth1
iface eth1 inet static
address 9.186.91.128
netmask 255.255.252.0
network 9.186.88.0
broadcast 9.186.91.255
gateway 9.186.88.1
dns-nameservers 9.0.***
dns-search crl.***.com

Then restart networking:

service networking restart

1.3. MySQL
1) Install MySQL:

apt-get install -y mysql-server python-mysqldb

2) Configure mysql to accept all incoming requests:

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
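The sed one-liner rewrites every occurrence of 127.0.0.1 in my.cnf, which in practice flips the bind-address line so MySQL listens on all interfaces. A minimal sketch of the effect, demonstrated on a throwaway copy of the file:

```shell
# Demonstrate the bind-address rewrite on a temp file instead of the real my.cnf
tmp=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$tmp"
sed -i 's/127.0.0.1/0.0.0.0/g' "$tmp"
grep bind-address "$tmp"    # the line now reads 0.0.0.0 (all interfaces)
rm -f "$tmp"
```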

1.4. RabbitMQ, NTP & Databases
1) Install RabbitMQ:

apt-get install -y rabbitmq-server

2) Install NTP service:

apt-get install -y ntp

3) Create these databases:

mysql -u root -p

#Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';

#Glance
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';

#Quantum
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantumUser'@'%' IDENTIFIED BY 'quantumPass';

#Nova
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';

#Cinder
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';

quit;
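Since the five grants follow the same pattern (database name, nameUser, namePass), this step can also be scripted instead of typed interactively. A sketch that emits the same SQL as above; pipe its output into `mysql -u root -p` if you prefer:

```shell
# Emit the CREATE/GRANT statements for all five service databases
for svc in keystone glance quantum nova cinder; do
  printf "CREATE DATABASE %s;\n" "$svc"
  printf "GRANT ALL ON %s.* TO '%sUser'@'%%' IDENTIFIED BY '%sPass';\n" "$svc" "$svc" "$svc"
done
# e.g.:  ./make-grants.sh | mysql -u root -p
```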

1.5.Others
1) Install other services:

apt-get install -y vlan bridge-utils

2) Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1
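To confirm the change took effect without rebooting, read the live value back from /proc (it is 1 when forwarding is enabled):

```shell
# /proc mirrors the running sysctl value; 1 means IP forwarding is on
cat /proc/sys/net/ipv4/ip_forward
```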

1.6.Keystone
1) Install Keystone:

apt-get install -y keystone

2) Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:

vi  /etc/keystone/keystone.conf
connection = mysql://keystoneUser:keystonePass@10.0.1.100/keystone

3) Restart the identity service then synchronize the database:

service keystone restart
keystone-manage db_sync

4) Fill up the keystone database using the two scripts:

#Modify the **HOST_IP** and **EXT_HOST_IP** variables before executing the scripts

wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_basic.sh
wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_endpoints_basic.sh

chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh

./keystone_basic.sh
./keystone_endpoints_basic.sh

5) Create a simple credential file and load it so you won't be bothered later:

vi creds

#Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://9.186.91.128:5000/v2.0/"

# Load it:
source creds
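A quick way to confirm the credentials were actually loaded into the shell (the exports are re-stated here so the snippet is self-contained; values are the samples from the creds file above):

```shell
# After `source creds`, all four OS_* variables should be visible in the environment
export OS_TENANT_NAME=admin OS_USERNAME=admin OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://9.186.91.128:5000/v2.0/"
env | grep '^OS_' | sort
```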

6) Test Keystone:

keystone user-list

1.7. Glance
1) Install Glance:

apt-get install -y glance

2) Update /etc/glance/glance-api-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

3) Update the /etc/glance/glance-registry-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

4) Update /etc/glance/glance-api.conf with:

sql_connection = mysql://glanceUser:glancePass@10.0.1.100/glance
[paste_deploy]
flavor = keystone

5) Update the /etc/glance/glance-registry.conf with:

sql_connection = mysql://glanceUser:glancePass@10.0.1.100/glance
[paste_deploy]
flavor = keystone

6) Restart the glance-api and glance-registry services:

service glance-api restart; service glance-registry restart

7) Synchronize the glance database:

glance-manage db_sync

8) To test Glance, upload the cirros cloud image (download the file first, or let Glance fetch it directly with --location):

glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img

OR:

glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

9) Now list the image to see what you have just uploaded:

glance image-list

1.8.Quantum
1) Install the Quantum server:

apt-get install -y quantum-server

2)  Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum

#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094

3) Edit /etc/quantum/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

4) Update the /etc/quantum/quantum.conf:

[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

5) Restart the quantum server:

service quantum-server restart

1.9. Nova
1) Install the Nova services:

apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor

2) Now modify authtoken section in the /etc/nova/api-paste.ini file to this:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

3) Modify the /etc/nova/nova.conf like this:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=10.0.1.100
nova_url=http://10.0.1.100:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.0.1.100/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.0.1.100:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://9.186.91.128:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.0.1.100
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.0.1.100:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.0.1.100:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

4) Synchronize database:

nova-manage db sync

5) Restart nova-* services:

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
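The one-liner above simply globs every init script whose name starts with nova- and restarts each in turn; the same pattern is reused later for the cinder and quantum services. A sketch of the loop mechanics on mock files in a temp directory:

```shell
# Demonstrate the glob-and-loop pattern on mock init scripts
d=$(mktemp -d)
touch "$d/nova-api" "$d/nova-scheduler" "$d/nova-conductor"
cd "$d"
for i in $( ls nova-* ); do
  echo "restarting $i"    # the real loop runs: sudo service $i restart
done
cd - >/dev/null && rm -rf "$d"
```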

6) Check for the smiling faces on nova-* services to confirm your installation:

nova-manage service list

1.10.Cinder

1) Install Cinder and the iSCSI target services:

apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

2) Configure and restart the iSCSI services:

sed -i 's/false/true/g' /etc/default/iscsitarget
service iscsitarget start
service open-iscsi start

3) Configure /etc/cinder/api-paste.ini like the following:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 9.186.91.128
service_port = 5000
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder

4) Edit the /etc/cinder/cinder.conf to:

[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.0.1.100/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=10.0.1.100

5) Then, synchronize database:

cinder-manage db sync

6) Restart the cinder services:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

7) Verify that the cinder services are running:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done
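Note that cinder.conf above points at an LVM volume group named cinder-volumes which this guide never creates; cinder-volume will not work until it exists. A sketch of the missing step, written out as a helper script here because it assumes a spare disk (/dev/sdb is an assumption; substitute your own device and run it as root):

```shell
# Generate a helper script that creates the volume group cinder.conf expects.
# /dev/sdb is an assumed spare disk -- adjust it before running the script.
cat <<'EOF' > create-cinder-vg.sh
#!/bin/sh
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vgs cinder-volumes
EOF
chmod +x create-cinder-vg.sh
```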

1.11. Horizon

1) Install the dashboard and memcached:

apt-get install -y openstack-dashboard memcached

2) If you don't like the OpenStack Ubuntu theme, you can remove the package to disable it (optional):

dpkg --purge openstack-dashboard-ubuntu-theme

3) Reload Apache and memcached:

service apache2 restart; service memcached restart

2. Network Node

2.1. Preparing the Node:

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update OS:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

3) Install ntp service:

apt-get install -y ntp

4) Configure the NTP server to follow the controller node:

#Comment the ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf

#Set the network node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 10.0.1.100/g' /etc/ntp.conf

service ntp restart
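The four sed commands comment out Ubuntu's pool servers and the final one points ntpd at the controller's management IP (10.0.1.100 in this layout) instead. A sketch of the combined effect on a throwaway copy of ntp.conf:

```shell
# Demonstrate the ntp.conf rewrite on a temp file
tmp=$(mktemp)
printf 'server 0.ubuntu.pool.ntp.org\nserver ntp.ubuntu.com\n' > "$tmp"
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' "$tmp"
sed -i 's/server ntp.ubuntu.com/server 10.0.1.100/g' "$tmp"
cat "$tmp"    # pool server commented out, controller set as the server
rm -f "$tmp"
```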

5) Install other services:

apt-get install -y vlan bridge-utils

6)  Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1

2.2.Networking

auto eth0
iface eth0 inet static
address 10.0.1.101
netmask 255.255.255.0

auto eth1
iface eth1 inet static
address 10.20.20.52
netmask 255.255.255.0

auto eth2
iface eth2 inet static
address 9.186.91.130
netmask 255.255.252.0
network 9.186.88.0
broadcast 9.186.91.255
gateway 9.186.88.1
dns-nameservers 9.0.****
dns-search crl.***.com

2.3. OpenVSwitch (Part1)
1) Install Open vSwitch:

apt-get install -y openvswitch-switch openvswitch-datapath-dkms

2) Create the bridges:

#br-int will be used for VM integration
ovs-vsctl add-br br-int

#br-ex will be used to make the VMs accessible from the internet
ovs-vsctl add-br br-ex

#br-eth1 will be used for VM configuration
ovs-vsctl add-br br-eth1

2.4. Quantum

1)  Install the Quantum openvswitch agent, l3 agent and dhcp agent:

apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent

2)  Edit /etc/quantum/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

3) Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum

#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1

#Firewall driver for realizing quantum security group function
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

4) Update /etc/quantum/metadata_agent.ini:

# The Quantum user information for accessing the Quantum API.
auth_url = http://10.0.1.100:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

# IP address used by Nova metadata server
nova_metadata_ip = 10.0.1.100

# TCP Port used by Nova metadata server
nova_metadata_port = 8775

metadata_proxy_shared_secret = helloOpenStack

5) Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:

rabbit_host = 10.0.1.100

#And update the keystone_authtoken section

[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

6) Edit /etc/sudoers.d/quantum_sudoers to give the quantum user full sudo access (unfortunately this is mandatory; afterwards, restore the permissions with: chmod 0440 /etc/sudoers.d/quantum_sudoers):

vi /etc/sudoers.d/quantum_sudoers

#Modify the quantum user line to:
quantum ALL=NOPASSWD: ALL

7) Restart all the services:

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done

2.5. OpenVSwitch (Part2)
1) Edit eth1 and eth2 in /etc/network/interfaces to look like this:

auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-eth1
iface br-eth1 inet static
address 10.20.20.52
netmask 255.255.255.0

auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
address 9.186.91.130
netmask 255.255.252.0
network 9.186.88.0
broadcast 9.186.91.255
gateway 9.186.88.1
dns-nameservers 9.0.***
dns-search crl.***.com

2) Add eth1 to br-eth1, and eth2 to br-ex:

#br-eth1 will be used for VM configuration
ovs-vsctl add-port br-eth1 eth1

#br-ex is used to make the VMs accessible from the internet
ovs-vsctl add-port br-ex eth2

3. Compute Node

3.1. Preparing the Node

1) Add Grizzly repositories [Only for Ubuntu 12.04]:

apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2) Update your system:

apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

3) Reboot (you may have a new kernel).
4) Install ntp service:

apt-get install -y ntp

5) Configure the NTP server to follow the controller node:

#Comment the ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf

#Set the compute node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 10.0.1.100/g' /etc/ntp.conf

service ntp restart

6) Install other services:

apt-get install -y vlan bridge-utils

7) Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1


3.2. Networking

auto eth0
iface eth0 inet static
address 10.0.1.111
netmask 255.255.255.0
dns-nameservers 9.0.146.50

auto eth1
iface eth1 inet static
address 10.20.20.53
netmask 255.255.255.0

3.3. KVM
1) Make sure that your hardware supports virtualization:

apt-get install -y cpu-checker
kvm-ok

2) If kvm-ok reports that KVM acceleration can be used, install KVM and configure it:

apt-get install -y kvm libvirt-bin pm-utils

3) Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]

4) Delete default virtual bridge:

virsh net-destroy default
virsh net-undefine default

5) Enable live migration by updating /etc/libvirt/libvirtd.conf file:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

6) Edit libvirtd_opts variable in /etc/init/libvirt-bin.conf file:

env libvirtd_opts="-d -l"

7) Edit /etc/default/libvirt-bin file:

libvirtd_opts="-d -l"

8) Restart the libvirt service and dbus to load the new values:

service dbus restart && service libvirt-bin restart

3.4. OpenVSwitch

1) Install Open vSwitch:

apt-get install -y openvswitch-switch openvswitch-datapath-dkms

2) Create the bridges:

#br-int will be used for VM integration
ovs-vsctl add-br br-int

#br-eth1 will be used for VM configuration
ovs-vsctl add-br br-eth1

3.5. Quantum
1) Install the Quantum Open vSwitch agent:

apt-get -y install quantum-plugin-openvswitch-agent

2) Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@10.0.1.100/quantum

#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1

#Firewall driver for realizing quantum security group function
[SECURITYGROUP]
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

3) Make sure that your rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:

rabbit_host = 10.0.1.100

#And update the keystone_authtoken section

[keystone_authtoken]
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

4) Edit eth1 in /etc/network/interfaces to look like this:

auto br-eth1
iface br-eth1 inet static
address 10.20.20.53
netmask 255.255.255.0

auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

5) Add eth1 to br-eth1:

ovs-vsctl add-port br-eth1 eth1

6) Restart all the services:

service quantum-plugin-openvswitch-agent restart

3.6. Nova
1) Install nova-compute:

apt-get install -y nova-compute-kvm

2) Now modify authtoken section in the /etc/nova/api-paste.ini file to this:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.1.100
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

3) Edit the /etc/nova/nova-compute.conf file:

[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True

4) Modify the /etc/nova/nova.conf like this:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=10.0.1.100
nova_url=http://10.0.1.100:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.0.1.100/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.0.1.100:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://9.186.91.128:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.0.1.111   # Note: this compute node's own IP, not the controller's
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.0.1.100:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.0.1.100:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Quantum + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=quantum
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL

5) Restart nova-* services:

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done

6) Check for the smiling faces on nova-* services to confirm your installation:

nova-manage service list

4. Launch your first VM

To start your first VM, we first need to create a new tenant, a user, and internal and external networks. SSH to your controller node and perform the following.

1) Create a new tenant:

keystone tenant-create --name project_one
keystone role-list

2) Create a new user and assign the Member role to it in the new tenant (run keystone role-list to get the appropriate id):

keystone user-create --name=user_one --pass=user_one --tenant-id $put_id_of_project_one --email=user_one@domain.com
keystone user-role-add --tenant-id $put_id_of_project_one  --user-id $put_id_of_user_one --role-id $put_id_of_member_role

3) Create a new network for the tenant:

quantum net-create --tenant-id $put_id_of_project_one net_proj_one --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1024

4) Create a new subnet inside the new tenant network:

quantum subnet-create --tenant-id $put_id_of_project_one net_proj_one 192.168.1.0/24

5) Create a router for the new tenant:

quantum router-create --tenant-id $put_id_of_project_one router_proj_one


6) Add the router to the subnet:

quantum router-interface-add $put_router_proj_one_id_here $put_subnet_id_here

7) Create your external network with the tenant id belonging to the service tenant (run keystone tenant-list to get the appropriate id):

keystone tenant-list
quantum net-create --tenant-id $put_id_of_service_tenant ext_net --router:external=True

8) Create a subnet containing your floating IPs:

quantum subnet-create --tenant-id $put_id_of_service_tenant --allocation-pool start=9.186.91.131,end=9.186.91.191 --gateway 9.186.88.1 ext_net 9.186.88.0/22 --enable_dhcp=False

9) Set the router for the external network:

quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here

VMs gain access to the metadata server running locally on the controller node via the external network. To create that necessary connection, perform the following:

10) Get the IP address of router proj one:

quantum port-list -- --device_id <router_proj_one_id> --device_owner network:router_gateway

11) Add the following route on controller node only:

route add -net 192.168.1.0/24 gw $router_proj_one_IP
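The guide stops just short of actually booting an instance. A hypothetical final step, written out as a script: authenticate as the new user_one and boot the cirros image uploaded in section 1.7 (the m1.tiny flavor name and the net-id placeholder are assumptions; substitute real IDs from keystone and quantum net-list):

```shell
# Sketch: boot the first VM as the new tenant user (placeholders, not real IDs)
cat <<'EOF' > boot-first-vm.sh
#!/bin/sh
export OS_TENANT_NAME=project_one OS_USERNAME=user_one OS_PASSWORD=user_one
export OS_AUTH_URL="http://9.186.91.128:5000/v2.0/"
nova boot --image myFirstImage --flavor m1.tiny \
    --nic net-id=$put_id_of_net_proj_one myFirstVM
nova list
EOF
chmod +x boot-first-vm.sh
```

Once the instance is ACTIVE, associate a floating IP from ext_net to reach it from outside.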








