
Must-Read Documentation: Solaris Container – Solaris Zone Example


Using Zones An Example

The following example demonstrates the features provided by zones that facilitate consolidation. It shows how to run the two Oracle workloads from the Managing Workloads example on page 22 in a Solaris Container using zones. In that example, both workloads shared the same physical system as well as the file system namespace, name service, network port namespace, user and group namespaces, and more. Sharing these namespaces can lead to undesirable and sometimes difficult-to-manage situations, such as when the databases are managed by two different DBA groups. The fact that there is only one oracle user requires close coordination between the DBA groups, since changes made to that user's environment by one DBA group may impact the other database instance. The same holds true for the sharing of the file system namespace, where a single /var/opt/oratab file is used by multiple Oracle instances.

Sharing namespaces can also inhibit the consolidation from a large number of servers onto fewer systems. Existing procedures and scripts may, for example, assume the system is dedicated to the application. Making changes to these procedures and scripts may be difficult, costly or even impossible. Solaris Zones help resolve these issues because each zone is a virtualized environment with its own private namespaces that can be managed independently of other zones on the system.

For instance, the oracle user in one zone is a completely different user from the oracle user in another zone; they can have different UIDs, passwords, login shells, home directories, and so on. By running each Oracle instance in its own zone, the instances can be completely isolated from each other, simplifying their management. As far as each Oracle instance is concerned, it still runs on a dedicated system.
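For example, once the two zones built later in this chapter are running, the independence of the user namespaces can be observed from the global zone with zlogin(1M); the UID values shown here are purely illustrative:

global # zlogin mkt /usr/bin/id -a oracle
uid=100(oracle) gid=100(dba)

global # zlogin sales /usr/bin/id -a oracle
uid=1001(oracle) gid=100(dba)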

Requirements

Two zones are created, each running its own Oracle instance. Each zone requires approximately 100 MB of disk space, and the Oracle software and a database each require about 4 GB of disk space.

Note: In this chapter, the prompt is set to the zone name to distinguish between the different zones.

Preparation

The Oracle instances for the sales and marketing databases are recreated in zones in this example. Consequently, the existing instances created in Chapter 4 should be stopped, and the associated user, projects, and file systems should be deleted. The pool configuration built in Chapter 6 should be disabled.

global # svcadm disable salesdb

global # svcadm disable mktdb

global # svccfg delete salesdb

global # svccfg delete mktdb

global # userdel -r oracle

global # projdel ora_sales

global # projdel ora_mkt

global # projdel group.dba

global # pooladm -x

global # pooladm -d
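To confirm that the cleanup succeeded, the services and projects can be checked; a quick verification along these lines (the exact wording of the svcs error messages may differ):

global # svcs salesdb mktdb
svcs: Pattern 'salesdb' doesn't match any instances
svcs: Pattern 'mktdb' doesn't match any instances

global # grep ora_ /etc/project
global #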

Creating the First Zone

The zone used for the marketing database is named mkt. To show how a file system is added to a zone, a separate file system is created on a Solaris Volume Manager (SVM) soft partition (d200). The file system may, of course, also be created on a standard disk slice. The virtual network interface for the zone, with IP address 192.168.1.14, is configured on the physical interface hme0 of the system. The directory for the zone is created in the global zone by the global zone administrator. The directory used for the zone must be owned by root and have mode 700 to prevent normal users in the global zone from accessing the zone's file system.

global # mkdir -p /export/zones/mkt

global # chmod 700 /export/zones/mkt

global # newfs /dev/md/rdsk/d200
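If the d200 soft partition does not already exist, it can be created with the SVM metainit(1M) command beforehand; the underlying slice and size in this sketch are illustrative only:

global # metainit d200 -p c1t0d0s4 4g
d200: Soft Partition is setup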

Configuring the Zone

The zone is created based on the default template that defines resources used in a typical zone.

global # zonecfg -z mkt

mkt: No such zone configured

Use 'create' to begin configuring a new zone.

zonecfg:mkt> create

zonecfg:mkt> set zonepath=/export/zones/mkt

zonecfg:mkt> set autoboot=true

The virtual network interface with IP address 192.168.1.14 is configured on the hme0 interface of the global zone.

zonecfg:mkt> add net

zonecfg:mkt:net> set address=192.168.1.14/24

zonecfg:mkt:net> set physical=hme0

zonecfg:mkt:net> end

The file system for the Oracle binaries and datafiles in the mkt zone is created on a soft partition named d200 in the global zone. Add the following statements to the zone configuration to have the file system mounted in the zone automatically when the zone boots:

zonecfg:mkt> add fs

zonecfg:mkt:fs> set type=ufs

zonecfg:mkt:fs> set special=/dev/md/dsk/d200

zonecfg:mkt:fs> set raw=/dev/md/rdsk/d200

zonecfg:mkt:fs> set dir=/u01

zonecfg:mkt:fs> end

zonecfg:mkt> verify

zonecfg:mkt> commit

zonecfg:mkt> exit

The zone configuration is now complete. The verify command verifies that the current configuration is syntactically correct. The commit command writes the in-memory configuration to stable storage.
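The stored configuration can be reviewed at any time with the info subcommand. Abbreviated output for the configuration above should resemble the following:

global # zonecfg -z mkt info
zonename: mkt
zonepath: /export/zones/mkt
autoboot: true
...
fs:
        dir: /u01
        special: /dev/md/dsk/d200
        raw: /dev/md/rdsk/d200
        type: ufs
net:
        address: 192.168.1.14/24
        physical: hme0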

Installing the Zone

The zone is now ready to be installed on the system.

global # zoneadm -z mkt install

Preparing to install zone <mkt>.

Checking <ufs> file system on device </dev/md/rdsk/d200> to be mounted

at </export/zones/mkt/root/u01>

Creating list of files to copy from the global zone.

Copying <2584> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <916> packages on the zone.

Initialized <916> packages on zone.

Zone <mkt> is initialized.

The file </export/zones/mkt/root/var/sadm/system/logs/install_log>

contains a log of the zone installation.
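The state of the zone can be checked with zoneadm list; after installation the zone should appear in the installed state (output abbreviated):

global # zoneadm list -cv
  ID NAME     STATUS     PATH
   0 global   running    /
   - mkt      installed  /export/zones/mkt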

Booting the Zone

The zone can be booted with the zoneadm boot command. Since this is the first time the zone is booted after installation, the standard system identification questions must be answered, and are displayed on the zone's console. The console can be accessed from the global zone using the zlogin(1M) command.

global # zoneadm -z mkt boot

global # zlogin -C mkt

[Connected to zone 'mkt' console]

SunOS Release 5.10 Version Generic 64-bit

Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.

Use is subject to license terms.

Hostname: mkt

Loading smf(5) service descriptions: 100/100

At this point, the normal system identification process for a freshly installed Solaris OS instance is started. The output of this process is omitted here for brevity, and the configuration questions concerning the name service, time zone, etc., should be answered as appropriate for the site. After system identification is complete and the root password is set, the zone is ready for use.

SunOS Release 5.10 Version Generic 64-bit

Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.

Use is subject to license terms.

Hostname: mkt

mkt console login:

To disconnect from the console, use ~. (tilde dot), just as with tip(1). The zone can now be accessed over the network using the telnet(1), rlogin(1), or ssh(1) commands, just like a standard Solaris OS system. (Note that root can only log in at the console unless the /etc/default/login file is updated.)
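For example, to permit root logins over telnet or rlogin, the CONSOLE entry in the zone's /etc/default/login can be commented out (ssh is governed separately by the PermitRootLogin setting in /etc/ssh/sshd_config). After the change the file should contain:

mkt # grep CONSOLE /etc/default/login
#CONSOLE=/dev/console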

mkt console login: root

Password:

Last login: Tue Mar 22 21:55:00 on console

Sun Microsystems Inc. SunOS 5.10 Generic January 2005

# df -h
Filesystem             size   used  avail capacity  Mounted on
/                      7.9G   4.6G   3.2G    60%    /
/dev                   7.9G   4.6G   3.2G    60%    /dev
/lib                   7.9G   4.6G   3.2G    60%    /lib
/platform              7.9G   4.6G   3.2G    60%    /platform
/sbin                  7.9G   4.6G   3.2G    60%    /sbin
/u01                   7.9G   8.0M   7.8G     1%    /u01
/usr                   7.9G   4.6G   3.2G    60%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
swap                    15G   272K    15G     1%    /etc/svc/volatile
mnttab                   0K     0K     0K     0%    /etc/mnttab
fd                       0K     0K     0K     0%    /dev/fd
swap                    15G     0K    15G     0%    /tmp
swap                    15G    24K    15G     1%    /var/run

The /lib, /platform, /sbin, and /usr file systems are read-only loopback mounts from the global zone. This reduces the disk space required by the zone considerably, and allows text pages to be shared, leading to more efficient use of memory. These file systems appear in the zone because they are defined in the default template used to create this zone. All other file systems are private to the zone. The /u01 file system is mounted in the zone during zone boot by zoneadmd; it is not mounted by the zone itself. Also note that the zone is unaware that the file system actually resides on /dev/md/dsk/d200.
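The read-only nature of these loopback mounts can be verified from within the zone; an attempt to write to /usr fails, with output resembling:

mkt # touch /usr/test
touch: /usr/test cannot create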

Installing Oracle

The dba group and the oracle user are required to run the Oracle software. Since the Oracle software uses shared memory, and the maximum amount of shared memory is now a project resource control, a project is needed in which to run Oracle. The ora_mkt project is created in the zone, and its project.max-shm-memory is set to the required value (in this case 2 GB). Since the System V IPC parameters are resource controls in Solaris 10 OS, there is no need to update the /etc/system file and reboot.

mkt # mkdir -p /export/home

mkt # groupadd dba

mkt # useradd -g dba -d /export/home/oracle -m -s /bin/bash oracle

mkt # passwd oracle

mkt # projadd -c "Oracle" user.oracle

mkt # projadd -c "Oracle" -U oracle ora_mkt

mkt # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" ora_mkt
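The /etc/project listing below shows the same resource control on the user.oracle project, so an equivalent projmod was presumably run for that project as well:

mkt # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" user.oracle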

mkt # cat /etc/project

system:0::::

user.root:1::::

noproject:2::::

default:3::::

group.staff:10::::

ora_mkt:101:Oracle:oracle::project.max-shm-memory=(privileged,2147483648,deny)

user.oracle:100:Oracle:::project.max-shm-memory=(privileged,2147483648,deny)
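Once a process is running in the project, the active resource control can be verified with prctl(1); a sketch using newtask(1) to place the prctl process itself in the project (output format approximate):

mkt # newtask -p ora_mkt prctl -n project.max-shm-memory -i project ora_mkt
project: 101: ora_mkt
NAME    PRIVILEGE    VALUE    FLAG    ACTION    RECIPIENT
project.max-shm-memory
        privileged   2.00GB   -       deny      -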

Note that the zone has its own namespace, and that the user, group, and project just created are therefore only visible inside the mkt zone. The Oracle software and the database are installed in /u01. In this example, the Oracle software is installed in the zone itself to create an Oracle installation independent of any other Oracle installations. The software could also be installed in the global zone and then loopback mounted in the local zones. This would allow the binaries to be shared by multiple zones, but would also create a coupling between Oracle installations with regard to patch levels and more. This example shows how to use zones to consolidate Oracle instances with maximum isolation from each other, so in this case the software is not shared.

The installation can now be performed as described on page 91. Since /usr is mounted read-only in the zone, the default location /usr/local/bin suggested by the Oracle Installer should be changed to a writable directory in the zone, such as /opt/local/bin. The marketing database can be created using the procedure on page 93.
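Because /usr is read-only, the alternative location must exist before the installer runs; for example:

mkt # mkdir -p /opt/local/bin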

Using the smf service for the marketing database from Chapter 4 (the Managing Workloads example), the database instance can be started by importing the manifest and enabling the mktdb service in the zone.
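A sketch of these steps, assuming the manifest was copied into the zone as /var/svc/manifest/site/mktdb.xml (the path and FMRI are illustrative):

mkt # svccfg import /var/svc/manifest/site/mktdb.xml
mkt # svcadm enable mktdb
mkt # svcs mktdb
STATE          STIME    FMRI
online         21:58:01 svc:/site/mktdb:default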

Creating the Second Zone

The first zone used a directory in /export/zones in the global zone for its zone root. Since this does not limit the size of the local zone's root file system, the zone could fill up the file system in the global zone where /export/zones is located. To prevent a local zone from causing this problem, the second zone's root file system is created on a separate file system. The second zone is for the sales database and requires the following resources:

• A 100 MB file system for the zone root file system, mounted in the global zone on /export/zones/sales. This file system is created on a Solaris Volume Manager soft partition (/dev/md/dsk/d100). A normal slice could also be used, but would be quite wasteful given the limited number of slices available on a disk.

• To show how devices can be used in a zone, the disk slice c1t1d0s3 is exported to the zone by the global zone administrator. A UFS file system is created on this slice inside the zone (see the sketch after this list). This requires that both the block and character devices for the slice be exported to the zone. Note that this is for demonstration purposes only and is not the recommended way to use UFS file systems in a zone.

• A virtual network interface with IP address 192.168.1.15, configured on the hme0 interface of the global zone.
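Once the sales zone is installed and booted, the exported slice can be prepared from inside the zone; a minimal sketch, with the mount point /u01 chosen to match the first zone:

sales # newfs /dev/rdsk/c1t1d0s3
sales # mkdir /u01
sales # mount /dev/dsk/c1t1d0s3 /u01

First, however, the file system for the zone root is created and mounted in the global zone: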

global # newfs /dev/md/rdsk/d100

global # mkdir -p /export/zones/sales

global # mount /dev/md/dsk/d100 /export/zones/sales

global # chmod 700 /export/zones/sales
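To have this file system mounted automatically when the global zone boots, a corresponding entry can be added to /etc/vfstab in the global zone, along these lines:

/dev/md/dsk/d100  /dev/md/rdsk/d100  /export/zones/sales  ufs  2  yes  -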

Configuring and Installing the Second Zone

The steps required to configure and install this zone are the same as for the first zone, with the exception that two devices are added to the zone configuration.

global # zonecfg -z sales

sales: No such zone configured

Use 'create' to begin configuring a new zone.

zonecfg:sales> create

zonecfg:sales> set zonepath=/export/zones/sales

zonecfg:sales> set autoboot=true

zonecfg:sales> add net

zonecfg:sales:net> set physical=hme0

zonecfg:sales:net> set address=192.168.1.15/24

zonecfg:sales:net> end

zonecfg:sales> add device

zonecfg:sales:device> set match=/dev/rdsk/c1t1d0s3

zonecfg:sales:device> end

zonecfg:sales> add device

zonecfg:sales:device> set match=/dev/dsk/c1t1d0s3

zonecfg:sales:device> end

zonecfg:sales> verify

zonecfg:sales> commit

zonecfg:sales> exit

global # zoneadm -z sales install

Preparing to install zone <sales>.

Creating list of files to copy from the global zone.

Copying <2584> files to the zone.

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <916> packages on the zone.

Initialized <916> packages on zone.

Zone <sales> is initialized.

The file </export/zones/sales/root/var/sadm/system/logs/install_log>

contains a log of the zone installation.

Booting the Zone

The first time a zone is booted after installation, the system identification process is performed. It is possible to skip the system identification questions during the first boot of the zone by creating a sysidcfg file in the zone prior to the first boot. From the global zone, the location of this file is /export/zones/sales/root/etc/sysidcfg. A sample sysidcfg file is shown below, and can be customized to fit the situation.

global # cat /export/zones/sales/root/etc/sysidcfg

system_locale=C

timezone=US/Pacific

network_interface=primary {
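(The sample is truncated here; a typical completion, in which the hostname matches the zone name and the remaining entries are illustrative assumptions rather than the original text, would be:)

    hostname=sales
}
name_service=NONE
security_policy=NONE
terminal=vt100
root_password=<encrypted password string>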
