In this project there are two IBM p570 servers, divided into six logical partitions (LPARs). Two of the six partitions are used for the Oracle RAC database and four for WAS. The storage is an IBM DS4800 with 16 physical disks protected by RAID 5, carved into 13 logical disks of 136 GB each; six of these logical disks are assigned to RAC and WAS.
According to the project proposal, GPFS is planned as shown in the following table:
| GPFS purpose | GPFS name | LPARs | GPFS cluster name | Logical disks |
| --- | --- | --- | --- | --- |
| RAC | Dbdata | SR11, SR24 | Rac_Cluster | hdisk2, hdisk3, hdisk4, hdisk5 |
| WAS | Srccode | SR12, SR21, SR22, SR23 | WAS_Cluster | hdisk2, hdisk3 |
Note: the logical disks above are isolated by WWN on the DS4800, so logical partitions with different purposes cannot see each other's logical disks.
Edit the AIX hosts file /etc/hosts and add the following entries:
#GPFS+RAC Interconn
11.11.11.1 sr21_rac
11.11.11.2 sr22_rac
11.11.11.3 sr23_rac
11.11.11.5 sr11_rac
11.11.11.6 sr12_rac
11.11.11.4 sr24_rac sr24
10.10.8.103 sr23_svc sr23
Add the addresses of the cluster nodes to the file ~/.rhosts.
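GPFS 3.2 uses rsh/rcp by default (as the mmlscluster output below confirms), so root's ~/.rhosts on every node must grant access to all cluster nodes. A minimal sketch for the RAC pair, assuming root runs the GPFS commands, might look like:
sr11_rac root
sr24_rac root
The WAS cluster nodes would list sr12_rac, sr21_rac, sr22_rac and sr23_rac in the same way.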
Install the GPFS packages from the software CD and apply the fix packs to bring the level up to 3.2.1.4. After the installation completes, check the list of installed filesets.
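One way to confirm the installed level after applying the fix packs is to query the GPFS filesets with lslpp; this is a sketch of the check, not part of the original runbook:
# lslpp -l | grep -i gpfs
The gpfs.base fileset should then report level 3.2.1.4.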
Create the cluster node descriptor file gpfs_nodes in the directory /etc/gpfs_config with the following content:
sr11_rac:quorum-manager
sr24_rac:quorum-manager
Create the cluster disk descriptor file gpfs_disks_rac in the directory /etc/gpfs_config with the following content:
hdisk2:::dataAndMetadata:1:racnsd01
hdisk3:::dataAndMetadata:1:racnsd02
hdisk4:::dataAndMetadata:1:racnsd03
hdisk5:::dataAndMetadata:1:racnsd04
Create the tiebreaker disk descriptor file gpfs_disks_tb in the directory /etc/gpfs_config with the following content:
hdisk6:::::racnsd_tb1
hdisk7:::::racnsd_tb2
hdisk8:::::racnsd_tb3
Command: # mmcrcluster -N gpfs_nodes -p sr11_rac -s sr24_rac -C rac_cluster -A
Output:
Wed Jul 30 11:16:37 BEIST 2008: mmcrcluster: Processing node sr11_rac
Wed Jul 30 11:16:38 BEIST 2008: mmcrcluster: Processing node sr24_rac
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmlscluster
Output:
GPFS cluster information
========================
GPFS cluster name: rac_cluster.sr11_rac
GPFS cluster id: 795741876481154325
GPFS UID domain: rac_cluster.sr11_rac
Remote shell command: /usr/bin/rsh
Remote file copy command: /usr/bin/rcp
GPFS cluster configuration servers:
-----------------------------------
Primary server: sr11_rac
Secondary server: sr24_rac
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 sr11_rac 11.11.11.5 sr11_rac quorum-manager
2 sr24_rac 11.11.11.4 sr24_rac quorum-manager
Command: # mmcrnsd -F gpfs_disks_rac
Output:
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk4
mmcrnsd: Processing disk hdisk5
mmcrnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmlsnsd
Output:
File system Disk name NSD servers
---------------------------------------------------------------------------
(free disk) racnsd01 (directly attached)
(free disk) racnsd02 (directly attached)
(free disk) racnsd03 (directly attached)
(free disk) racnsd04 (directly attached)
Command: # mmcrnsd -F gpfs_disks_tb
Output:
mmcrnsd: Processing disk hdisk6
mmcrnsd: Processing disk hdisk7
mmcrnsd: Processing disk hdisk8
mmcrnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmchconfig tiebreakerDisks="racnsd_tb1;racnsd_tb2;racnsd_tb3"
Output:
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
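Before starting the daemons it is worth confirming that the tiebreaker setting was recorded; a quick check (a sketch, not captured in the original output) is:
# mmlsconfig | grep tiebreakerDisks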
Command: # mmstartup -a
Output:
Wed Jul 30 11:25:36 BEIST 2008: mmstartup: Starting GPFS ...
Command: # mmcrfs /dbdata dbdata -F gpfs_disks_rac -A yes -B 512k -M2 -m2 -R2 -r2 -n 8 -N 5000000
Output:
The following disks of dbdata will be formatted on node sr24:
racnsd01: size 142606336 KB
racnsd02: size 142606336 KB
racnsd03: size 142606336 KB
racnsd04: size 142606336 KB
Formatting file system ...
Disks up to size 593 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/dbdata.
mmcrfs: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
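At this point the file system can be mounted on all nodes and its capacity verified; a sketch of the follow-on steps (not captured in the original output):
# mmmount dbdata -a
# df -g /dbdata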
Create the cluster node descriptor file gpfs_nodes in the directory /etc/gpfs_config with the following content:
sr12_rac:quorum-manager
sr21_rac:quorum-manager
sr22_rac:quorum
sr23_rac:quorum
Create the cluster disk descriptor file gpfs_disks_was in the directory /etc/gpfs_config with the following content:
hdisk2:::dataAndMetadata:1:wasnsd01
hdisk3:::dataAndMetadata:1:wasnsd02
Create the tiebreaker disk descriptor file gpfs_disks_tb in the directory /etc/gpfs_config with the following content:
hdisk4:::::wasnsd_tb1
hdisk5:::::wasnsd_tb2
hdisk6:::::wasnsd_tb3
Command: # mmcrcluster -N gpfs_nodes -p sr12_rac -s sr21_rac -C wascluster -A
Output:
Wed Jul 30 13:24:10 BEIST 2008: mmcrcluster: Processing node sr12_rac
Wed Jul 30 13:24:11 BEIST 2008: mmcrcluster: Processing node sr21_rac
Wed Jul 30 13:24:12 BEIST 2008: mmcrcluster: Processing node sr22_rac
Wed Jul 30 13:24:13 BEIST 2008: mmcrcluster: Processing node sr23_rac
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmlscluster
Output:
GPFS cluster information
========================
GPFS cluster name: wascluster.sr12_rac
GPFS cluster id: 795741880776129274
GPFS UID domain: wascluster.sr12_rac
Remote shell command: /usr/bin/rsh
Remote file copy command: /usr/bin/rcp
GPFS cluster configuration servers:
-----------------------------------
Primary server: sr12_rac
Secondary server: sr21_rac
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 sr12_rac 11.11.11.6 sr12_rac quorum-manager
2 sr21_rac 11.11.11.1 sr21_rac quorum-manager
3 sr22_rac 11.11.11.2 sr22_rac quorum
4 sr23_rac 11.11.11.3 sr23_rac quorum
Command: # mmcrnsd -F gpfs_disks_was
Output:
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmcrnsd -F gpfs_disks_tb
Output:
mmcrnsd: Processing disk hdisk4
mmcrnsd: Processing disk hdisk5
mmcrnsd: Processing disk hdisk6
mmcrnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmlsnsd
Output:
File system Disk name NSD servers
---------------------------------------------------------------------------
(free disk) wasnsd01 (directly attached)
(free disk) wasnsd02 (directly attached)
(free disk) wasnsd_tb1 (directly attached)
(free disk) wasnsd_tb2 (directly attached)
(free disk) wasnsd_tb3 (directly attached)
Command: # mmchconfig tiebreakerDisks="wasnsd_tb1;wasnsd_tb2;wasnsd_tb3"
Output:
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Command: # mmstartup -a
Output:
Wed Jul 30 13:31:52 BEIST 2008: mmstartup: Starting GPFS ...
Command: # mmcrfs /srccode srccode -F gpfs_disks_was -A yes -B 512k -M2 -m2 -R2 -r2 -n 8 -N 1000000
Output:
The following disks of srccode will be formatted on node sr21:
wasnsd01: size 142606336 KB
wasnsd02: size 142606336 KB
Formatting file system ...
Disks up to size 295 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/srccode.
mmcrfs: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
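If the block size, replication factors, or automatic-mount setting of a file system need to be reviewed later, mmlsfs lists them; a sketch:
# mmlsfs srccode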
The following are commonly used GPFS commands and their syntax. For detailed parameter descriptions, see the GPFS reference: General Parallel File System Administration and Programming Reference V3.2.
Syntax:
mmstartup [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [-E EnvVar=value ...]
Command: mmstartup -a
Syntax:
mmshutdown [-t UnmountTimeout] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Command: mmshutdown -a
Syntax:
mmmount {Device | DefaultMountPoint | all | all_local | all_remote} [-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Command: mmmount srccode -a, or mmmount dbdata -a
Syntax:
mmumount {Device | MountPoint | all | all_local | all_remote} [-f] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Command: mmumount srccode -a, or mmumount dbdata -a
To add a disk to the srccode file system, first create a disk descriptor file for the new disk:
# more gpfs_disks_new
hdisk3:::dataAndMetadata:2:wasnsd02
Command: # mmadddisk srccode -F ./gpfs_disks_new -r
Output:
The following disks of srccode will be formatted on node sr12:
wasnsd02: size 142606336 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
Completed adding disks to file system srccode.
mmadddisk: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Restriping srccode ...
Scanning file system metadata, phase 1 ...
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
Scan completed successfully.
Done
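The added capacity and the per-disk balance after the restripe can be checked with mmdf; a sketch (output omitted):
# mmdf srccode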
Command: # mmdeldisk srccode wasnsd02
Output:
Deleting disks ...
Scanning system storage pool
Scanning file system metadata, phase 1 ...
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
Attention: 9 out of 22 user or system files are not properly replicated:
The desired replication factor exceeds the number of available failure groups.
Checking Allocation Map for storage pool 'system'
Attention: A disk being removed reduces the number of failure groups to 1,
which is below the number required for replication: 2.
New blocks will be allocated from the remaining disks,
but files will be unreplicated and hence at risk.
tsdeldisk64 completed.
mmdeldisk: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
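The attention messages above mean that with only one failure group left, the replication factor of 2 can no longer be honored and new blocks are written unreplicated. If a disk in a second failure group is added back later, replication can be restored by restriping the file system; a sketch:
# mmrestripefs srccode -r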
Command: # mmdelnsd "wasnsd02"
Output:
mmdelnsd: Processing disk wasnsd02
mmdelnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.
Syntax: mmgetstate [-L] [-s] [-v] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Command: mmgetstate -a -L