Tuesday 22 October 2013

How to configure the OS for Oracle RAC:



Hardware requirements:

1. Server
2. Shared disk (DAS / SAN / NAS)
3. Nodes - minimum 2, each with:
                NICs - 2
                Microprocessor - Pentium 4 or higher
                RAM - 2 GB or higher


SOFTWARE REQUIREMENTS

1. OS - RHEL 5.4 ONWARDS
2. CLUSTERWARE SOFTWARE - ORACLE 11G
3. DATABASE SOFTWARE - ORACLE 11G
4. REQUIRED - OCFS MODULES (ONLY FOR CLUSTER FILES)
            - ASM PACKAGES (TO STORE DATAFILES OR CLUSTERWARE FILES)


CONSIDERATIONS DURING OS INSTALLATION
1. NODE NAME
2. IP ADDRESS CONFIGURATION
3. SELECT "NO FIREWALL"
4. DISABLE SELINUX (SECURITY-ENHANCED LINUX)

CUSTOM PACKAGE SELECTION

1.ENABLED
2.GNOME DESKTOP ENVIRONMENT
3.EDITORS
4.GRAPHICAL INTERNET
5.TEXT BASED INTERNET
6.DEVELOPMENT LIBRARIES
7.DEVELOPMENT TOOLS
8.SERVER CONFIGURATION TOOLS
9.ADMINISTRATION TOOLS
10.BASE
11.SYSTEM TOOLS
12.X WINDOW SYSTEM

after successful OS installation, install the individual packages -- they are in the Server folder on the DVD:

#rpm -ivh libaio-devel-0.3.106-3.2.i386.rpm
#rpm -ivh sysstat-7.0.2-3.el5.i386.rpm
#rpm -ivh unixODBC-2.2.11-7.1.i386.rpm
#rpm -ivh unixODBC-devel-2.2.11-7.1.i386.rpm
#rpm -ivh iscsi-initiator-utils-6.2.0.871-0.10.el5.i386.rpm
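The same five installs can be driven by one loop. A dry-run sketch (not from the original post): the filenames are the ones listed above, and echo only prints each command -- drop the echo to actually install as root from the DVD's Server folder.

```shell
# Dry run: print the rpm command for each required package.
pkgs="libaio-devel-0.3.106-3.2.i386.rpm
sysstat-7.0.2-3.el5.i386.rpm
unixODBC-2.2.11-7.1.i386.rpm
unixODBC-devel-2.2.11-7.1.i386.rpm
iscsi-initiator-utils-6.2.0.871-0.10.el5.i386.rpm"
for p in $pkgs; do
  echo rpm -ivh "$p"   # remove echo to run the real install as root
done
```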


NOW INSTALLATION STARTS

#neat    (the GUI network configuration tool)
eth0 inactive, eth1 inactive

select edit - enter the IP address and subnet mask

after providing the IP address and subnet mask, restart the network service:
#service network restart
verify the IP:
#ifconfig
verify the hostname:
#hostname
configure the /etc/hosts file:
#vim /etc/hosts
add the eth0 and eth1 IP addresses for both nodes
verify from node1:
#ping node1
#ping node1priv
verify from node2:
#ping node2
#ping node2priv
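For reference, the /etc/hosts entries for the two-node setup above can look like this (the IP addresses are placeholder examples, not from the original post -- substitute your own public and private subnets):

```
127.0.0.1       localhost.localdomain localhost
# public network (eth0)
192.168.1.101   node1.yis.co.in   node1
192.168.1.102   node2.yis.co.in   node2
# private interconnect (eth1)
10.0.0.101      node1priv
10.0.0.102      node2priv
```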

configuring kernel parameters
#vim /etc/sysctl.conf

kernel.shmmax = 4294967295
kernel.sem = 250 32000 100 128
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
fs.file-max = 6815744
kernel.hostname = node1          (node name)
kernel.domainname = yis.co.in    (example domain name)
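On sizing kernel.shmmax: the 4294967295 above is the 4 GB - 1 ceiling commonly used on 32-bit systems; Oracle's installation guides generally suggest starting from half of physical RAM. A small sketch of that calculation (the 4 GB ram_kb value is an example assumption, not from the original post):

```shell
# Sketch: shmmax = half of physical RAM, in bytes.
ram_kb=4194304      # example: a 4 GB node; on a live system use:
                    # ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
shmmax=$(( ram_kb * 1024 / 2 ))
echo "kernel.shmmax = $shmmax"
```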

verify the kernel parameters:
#sysctl -p
#hostname
#domainname



configuring services

#/etc/rc.d/init.d/iptables status
#chkconfig sendmail off
#chkconfig cups off
#chkconfig xinetd on
#chkconfig telnet on
#chkconfig vsftpd on
#service xinetd restart
#service vsftpd restart

creating the oracle user

*creating a group
#groupadd -g 800 dba

*creating the oracle user
#useradd -u 555 -g 800 -m -d /yis/yashu yashu
#passwd yashu

*creating a directory
#mkdir /yis/cluster
#chown -R yashu:dba /yis/cluster

*configuring shell limits
#vim /etc/security/limits.conf

yashu  soft  nproc   2047
yashu  hard  nproc   16384
yashu  soft  nofile  1024
yashu  hard  nofile  65536
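A companion edit usually needed for those limits to take effect at login (a commonly documented step, not in the original notes): make sure the pam_limits module is enabled for login sessions.

```
# /etc/pam.d/login
session    required     pam_limits.so
```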

configuring date & time

node1's clock should be ahead of node2's by more than 20 seconds

configuring the hangcheck-timer

*to find the module
#find /lib/modules -name "hangcheck-timer.ko"

*updating the modprobe.conf file
#vim /etc/modprobe.conf
options hangcheck-timer hangcheck-tick=30 hangcheck-margin=180

*loading the module on boot
#vim /etc/rc.local
/sbin/modprobe hangcheck-timer

*loading the module now
#modprobe hangcheck-timer

*to verify
#grep Hangcheck /var/log/messages | tail -2
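With hangcheck-tick=30 and hangcheck-margin=180 as above, the module's documented behaviour is to reset a node that stays unresponsive for longer than tick + margin seconds. A quick check of that window:

```shell
# Sketch: the reboot window implied by the hangcheck-timer options above.
tick=30      # hangcheck-tick: how often the timer checks the system (seconds)
margin=180   # hangcheck-margin: extra hang time tolerated before a reset (seconds)
echo "a hung node is reset after at most $(( tick + margin )) seconds"
```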

*configuring the remote shell (rsh)
#rpm -qa rsh*
#chkconfig rsh on
#chkconfig rlogin on
#service xinetd restart

workaround (so the Kerberos versions are not picked up first in the PATH):
#which rsh
/usr/kerberos/bin/rsh

#mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.original

#mv /usr/kerberos/bin/rcp /usr/kerberos/bin/rcp.original

#mv /usr/kerberos/bin/rlogin /usr/kerberos/bin/rlogin.original

#which rsh
/usr/bin/rsh

configuring user equivalence
#vim /etc/hosts.equiv
 + node1      yashu
 + node1priv  yashu
 + node2      yashu
 + node2priv  yashu

#chmod 600 /etc/hosts.equiv

*testing the configuration (login as the oracle user from node1, not as root)
$rsh node2 ls -l /yis/cluster
$rsh node2 touch manju

*configuring the shared storage device with OCFS2 (as the root user, from node1)
*download the OCFS2 packages (matching the OS version)
*install the OCFS2 packages
*ensure SELinux is disabled

to verify:
#/usr/bin/system-config-securitylevel &
#ocfs2console &    (it's a GUI)

cluster -> cluster nodes -> add -> information of node1 & node2
name -
address -
port no - 7777 (default)
apply - quit in file

verify:
#cat /etc/ocfs2/cluster.conf
the last lines should show node_count = 2
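For reference, the generated /etc/ocfs2/cluster.conf typically looks like this (the IP addresses are placeholders; the node names and port 7777 follow the steps above):

```
node:
        ip_port    = 7777
        ip_address = 10.0.0.101
        number     = 0
        name       = node1
        cluster    = ocfs2

node:
        ip_port    = 7777
        ip_address = 10.0.0.102
        number     = 1
        name       = node2
        cluster    = ocfs2

cluster:
        node_count = 2
        name       = ocfs2
```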

configuring the o2cb service

step 1: reconfigure o2cb
#chkconfig --del o2cb
#chkconfig --add o2cb
#chkconfig --list o2cb

step 2: unload the o2cb modules
#/etc/init.d/o2cb  offline  ocfs2
#/etc/init.d/o2cb  unload
#/etc/init.d/o2cb  status

step 3: configure o2cb on boot
#/etc/init.d/o2cb configure
load O2CB driver on boot: y
cluster to start on boot: ocfs2
specify heartbeat dead threshold: 600
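The heartbeat dead threshold is counted in 2-second iterations, so the commonly documented fencing time is (threshold - 1) * 2 seconds. With the 600 entered above:

```shell
# Sketch: time before an unresponsive node is fenced, per the commonly
# documented O2CB formula (threshold - 1) * 2 seconds.
threshold=600
echo "a node is considered dead after $(( (threshold - 1) * 2 )) seconds"
```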

step 4: configuring the OCFS2 file system
#fdisk -l

formatting /dev/sdb1 with the OCFS2 filesystem
*GUI - using ocfs2console
*CLI - using the mkfs command

**do the formatting and mounting from one node only (node1)
#mkfs.ocfs2 -b 4K -C 32K -N 4 -L "oradatafiles" /dev/sdb1     (in CLI mode)

****in GUI mode:

ocfs2console & -> tasks -> format -> ok

mounting the OCFS2 file system:

from one node only:

#mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /yis/cluster

verify:
#df -h

it will show /dev/sdb1 mounted on /yis/cluster

to make the mount permanent:

#vim /etc/fstab
LABEL=oradatafiles /yis/cluster ocfs2 _netdev,datavolume,nointr 0 0

reboot
#init 6

to verify
#df -h

changing ownership:


#chown -R yashu:dba /yis/cluster
#chmod -R 775  /yis/cluster

reboot all the nodes:

considerations:

*the storage server should be on first
*start node1
*start node2

configuring the shared storage device with ASM

*download the ASM packages matching the OS version
*install the ASM packages (w.r.t. RHEL 5.4)
 oracleasm-support-2.7.3-1
 oracleasm-2.6.18-164
 oracleasmlib-2.0.4-1


/etc/init.d/oracleasm    (this file will be created)

CREATING AN ASM DISK: NODE1

*done from only one node
#/etc/init.d/oracleasm createdisk VOL1 /dev/sdc1

CONFIGURING THE ASM DRIVER

*done from only one node (node1)
#/etc/init.d/oracleasm configure
default user to own the driver interface: yashu
default group to own the driver interface: dba

start Oracle ASM library driver on boot: y
fix permissions of Oracle ASM disks on boot: y

SCANNING ASM DISKS
#/etc/init.d/oracleasm scandisks

LISTING ASM DISKS

#/etc/init.d/oracleasm listdisks
will show VOL1 & VOL2

QUERYING ASM DISKS

#/etc/init.d/oracleasm querydisk /dev/sdc1
                        OR
#/etc/init.d/oracleasm querydisk VOL1    (suggested by Oracle)
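Rather than querying each volume by hand, the names printed by listdisks can drive a loop. A dry-run sketch (VOL1/VOL2 are the example volumes from above; echo only prints the commands -- drop it on a live node):

```shell
# Dry run: build the querydisk command for every ASM volume.
# On a real node replace the hard-coded list with:
#   vols=$(/etc/init.d/oracleasm listdisks)
vols="VOL1 VOL2"
for v in $vols; do
  echo /etc/init.d/oracleasm querydisk "$v"
done
```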


CREATING THE ASM DISK ON NODE2


*DON'T PERFORM STEPS 1 & 2 (THEY WERE ALREADY DONE FROM NODE1)
*PERFORM STEPS 3 TO 5
*SCAN DISKS, LIST DISKS, QUERY DISKS - PERFORM THE SUGGESTED QUERY

