Wednesday, 3 June 2015

Citrix Xenserver VDI storage space not released

If you have used Citrix XenServer you will quickly find that the XenCenter interface creates problems you can only avoid by learning the CLI.
When you create a snapshot and later delete it, the VDI can remain on disk, reducing your available space and causing many other problems.

First, list your storage repositories, VMs and VDIs and work out which disks are actually in use and which are not. Label all your used VDIs properly to reduce the chance of a mistake; add meaningful names and descriptions.

xe sr-list
xe vm-list
xe vdi-list

In my case only about 10 VDIs are in use, but xe vdi-list shows about 80. So how do we find what is used and what is not?

First I identify the storage repository (remember, we added descriptions to all our used VDIs, so anything unlabeled is suspect):

xe sr-list
uuid ( RO)                : 07ecca1d-a964-0edb-4091-67b79026003b
          name-label ( RW): VM's
    name-description ( RW): NFS SR [192.168.101.2:/volume1/vm]
                host ( RO): xen0
                type ( RO): nfs
        content-type ( RO):



Now that I know my SR, I use its UUID to list the VDIs on it:

xe vdi-list sr-uuid=07ecca1d-a964-0edb-4091-67b79026003b

      uuid ( RO)                : e37278ff-0602-4f87-b60e-73d3dfeae484
          name-label ( RW): base copy
    name-description ( RW):
             sr-uuid ( RO): 07ecca1d-a964-0edb-4091-67b79026003b
        virtual-size ( RO): 250060210176
            sharable ( RO): false
           read-only ( RO): true

I pick a VDI that has no name or description and check whether it is attached to any VM. Nothing is returned:

  xe vbd-list vdi-uuid=e37278ff-0602-4f87-b60e-73d3dfeae484

For comparison, a VDI that is in use shows a VBD:

     xe vbd-list vdi-uuid=d4cc3000-c8f6-4048-b7f1-a5221b1c5538 | less
uuid ( RO)             : 110bf9bf-14c1-16fa-2f1c-6c4c75aac98c
          vm-uuid ( RO): 5884b6c2-e37f-af4f-0373-b029ec1ccca4
    vm-name-label ( RO): ArcGIS 10.1
         vdi-uuid ( RO): d4cc3000-c8f6-4048-b7f1-a5221b1c5538
            empty ( RO): false
           device ( RO): hda

Now go ahead and remove the unused VDI:

     xe vdi-destroy uuid=e37278ff-0602-4f87-b60e-73d3dfeae484

If you encounter the error 
 This operation cannot be performed because the system does not manage this VDI
vdi: e37278ff-0602-4f87-b60e-73d3dfeae484 (base copy)

then you might be using NFS and you have to go to your mount folder manually and delete the file:

     ls -alh e37278ff-0602-4f87-b60e-73d3dfeae484.*
-rw-r--r-- 1 1024 users 46G Dec 31 18:56 e37278ff-0602-4f87-b60e-73d3dfeae484.vhd

then run the forget command:

   xe vdi-forget uuid=e37278ff-0602-4f87-b60e-73d3dfeae484

You can also double-check that a VDI is not among a VM's disks with:

    xe vm-disk-list vm=testvm 
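To audit the whole SR in one pass, here is a minimal sketch of a shell loop (run on the pool master, using the SR UUID from above). It only prints candidates; verify each one before destroying anything:

for vdi in $(xe vdi-list sr-uuid=07ecca1d-a964-0edb-4091-67b79026003b --minimal | tr ',' ' '); do
    # a VDI with no VBD is not attached to any VM
    if [ -z "$(xe vbd-list vdi-uuid=$vdi --minimal)" ]; then
        xe vdi-list uuid=$vdi params=uuid,name-label
    fi
done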





Thursday, 14 May 2015

Configuring Nagios Client nrpe

Solaris 11.2 or 10 can use the OpenCSW repository to install the NRPE agent that talks to your Nagios server.


pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i nrpe
/usr/sbin/pkgchk -L CSWnrpe # list files

Edit the config and restart the service:
# vi /etc/opt/csw/nrpe.cfg
# svcadm restart svc:/network/cswnrpe
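
Confirm the agent came back online (service FMRI as installed above):

# svcs cswnrpe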


CentOS 7

Use the EPEL repository:

#yum -y install epel-release
#yum -y install nrpe nagios-plugins-all openssl

Edit the /etc/nagios/nrpe.cfg file:

sudo vi /etc/nagios/nrpe.cfg
Add your Nagios server IP address:

[...]
## Find the following line and add the Nagios server IP xx
allowed_hosts=127.0.0.1, xxx.xxx.xxx.xxx

[...]


Start nrpe service on CentOS clients:

CentOS 7:

#systemctl start nrpe ; systemctl enable nrpe


CentOS 6.x:

#service nrpe start
#chkconfig nrpe on


Open the NRPE port in the firewall. CentOS 7:

# firewall-cmd --zone=public --add-port=5666/tcp --permanent

# firewall-cmd --reload



CentOS 6:

#iptables -A INPUT -s 192.168.0.100 -p tcp -m tcp --dport 5666 -m state --state NEW,ESTABLISHED -j ACCEPT
#service iptables save
#service iptables restart
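
With the port open you can verify the client from the Nagios server. The plugin path below is the typical one on 64-bit CentOS (it needs the nagios-plugins-nrpe package on the server); a working connection prints the NRPE version:

# /usr/lib64/nagios/plugins/check_nrpe -H client-ip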

Installing Desktop in Solaris 11.2

For SPARC the default ISO is a text installer and does not include the solaris-desktop package.

#pkg  list -af | grep solaris-desktop
group/system/solaris-desktop                      0.5.11-0.175.2.0.0.42.0    ---

It does include the large-server package, which is fine for a server-only machine.

#pkg  list -af | grep solaris-large
group/system/solaris-large-server                 0.5.11-0.175.2.0.0.42.0    i--

Check our boot environments (BEs):

# beadm list
BE               Active Mountpoint Space  Policy Created
--               ------ ---------- -----  ------ -------
solaris          NR     /          6.15G  static 2015-05-13 16:54

To install the desktop we first create a new BE and add the package to it:

# beadm create desktop
# beadm mount desktop /mnt
# pkg -R /mnt install group/system/solaris-desktop
# bootadm update-archive -R /mnt
# beadm umount desktop
# beadm activate desktop

Check that our new BE will be active at reboot.

# beadm list
BE               Active Mountpoint Space Policy Created
--               ------ ---------- ----- ------ -------
desktop          R      -          7.46G static 2015-05-14 09:15
solaris          N      /          4.43M static 2015-05-13 15:48

# reboot

# beadm list
BE               Active Mountpoint Space  Policy Created
--               ------ ---------- -----  ------ -------
desktop          NR     /          7.68G  static 2015-05-14 09:15
solaris          -      -          10.62M static 2015-05-13 15:48


If you need remote graphical login, edit the /etc/gdm/custom.conf file (it only exists after rebooting into the desktop BE) and enable XDMCP:

[xdmcp]
Enable=true

Enable xvnc-inetd

# inetadm -e xvnc-inetd

restart the graphical login service (gdm) 

# svcadm restart svc:/application/graphical-login/gdm:default





Wednesday, 13 May 2015

Authenticating users in Solaris 11.2 against Active Directory

Install Solaris on SPARC, then add Samba for Active Directory authentication in Solaris.

First add the server/zone name in AD, then apply the following commands on the host system:

# echo "set ngroups_max=1024" >> /etc/system

#pkg install ntp

Create ntp.conf in /etc/inet, pointing at your AD DC:

# vi /etc/inet/ntp.conf
# provide the AD DC IP
server your-ntp-usually-AD-DC

# svcadm enable network/ntp

done with host
----

Log in to the zone or guest domain; this works for both virtualization technologies:
#zlogin guest
or
#telnet localhost 500x
Once logged in to the zone or guest:

#pkg install samba

Add the DC controllers to /etc/hosts (same for Solaris and Linux):

192.xx.xx.xx ad-dc ad-dc.domain
192.xx.xx.xx ad-dc2 ad-dc2.domain

enable dns/client

# svcadm enable svc:/network/dns/client:default

Configure name services in SMF. The DNS client part is needed only in zones; for guest domains it is configured at setup, so skip ahead to "select name-service/switch".

#svccfg
svc:> select dns/client
svc:/network/dns/client> setprop config/search = astring: ("your-domain" "your-sub-domain")
svc:/network/dns/client> setprop config/nameserver = net_address: (xx.xx.xx.xx xx.xx.xx.xx)
svc:/network/dns/client> select dns/client:default
svc:/network/dns/client:default> refresh

--- you only need to set the three properties below, then refresh and exit ---

svc:/network/dns/client:default> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files [SUCCESS=return] dns"
svc:/system/name-service/switch> setprop config/password = "files winbind"
svc:/system/name-service/switch> setprop config/group = "files [SUCCESS=return] winbind"

--- the remaining properties should already show as below ---

svc:/system/name-service/switch> setprop config/network = "files"
svc:/system/name-service/switch> setprop config/protocol = "files"
svc:/system/name-service/switch> setprop config/rpc = "files"
svc:/system/name-service/switch> setprop config/ether = "files"
svc:/system/name-service/switch> setprop config/netmask = "files"
svc:/system/name-service/switch> setprop config/bootparam = "files"
svc:/system/name-service/switch> setprop config/publickey = "files"
svc:/system/name-service/switch> setprop config/netgroup= "files"
svc:/system/name-service/switch> setprop config/automount = "files ldap"
svc:/system/name-service/switch> setprop config/alias = "files"
svc:/system/name-service/switch> setprop config/service = "files"
svc:/system/name-service/switch> setprop config/project = "files"
svc:/system/name-service/switch> setprop config/auth_attr = "files"
svc:/system/name-service/switch> setprop config/prof_attr = "files"
svc:/system/name-service/switch> setprop config/tnrhtp = "files"
svc:/system/name-service/switch> setprop config/tnrhdb = "files"
svc:/system/name-service/switch> setprop config/printer = "user files"

-----do this to save ------

svc:/system/name-service/switch> select system/name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate
svc:/system/name-service/switch:default> exit
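
Before continuing, check that name resolution works through the switch, using the DC names from the /etc/hosts entries above:

# getent hosts ad-dc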

Create a master Samba config; this one authenticates with winbind. (We generate the final smb.conf from it with testparm below.)

#vi /etc/samba/smb.master

[global]
        workgroup = domain
        realm = domain.ca
        security = ads
        utmp = Yes

        idmap config * : range = 16777216-33554431

        winbind separator = +
        template shell = /usr/bin/bash
        template homedir = /data/%U
        winbind use default domain = true
        winbind offline logon = yes

        unix charset = iso8859-15
        winbind nss info = rfc2307
        server string = somass
        username map = /etc/samba/smbusers
        # once this is working change log level to 1
        log level = 5
        log file = /var/samba/log/%m.log
        max log size = 50
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        preferred master = No
        winbind trusted domains only = no
        winbind enum users = yes
        winbind enum groups = yes
        winbind nested groups = Yes
        dns proxy = No

        ###################### disable cups printing ######################
        load printers = no
        #printing =
        printcap name = /dev/null
        disable spoolss = yes



Then generate the smb.conf from the master:

# testparm -s /etc/samba/smb.master > /etc/samba/smb.conf

If you want to allow only a list of users, use PAM security; otherwise any AD user can log in to the system. This also works with AD groups; no winbind separator or domain prefix is needed, just put the AD group name on the list.

#vi /etc/security/pam_winbind.conf
#
[global]
# request a cached login if possible
# (needs "winbind offline logon = yes" in smb.conf)
cached_login = yes

#debug = yes
require_membership_of = joe, rolando
#Automatically create home dir
mkhomedir = yes





Create the Kerberos config file:

#vi /etc/krb5/krb5.conf


[libdefaults]
        ticket_lifetime = 24000
        default_realm = DOMAIN.CA
        #default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
        #default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
        default_tgs_enctypes = AES256-CTS-HMAC-SHA1-96 AES128-CTS-HMAC-SHA1-96 RC4-HMAC
        default_tkt_enctypes = AES256-CTS-HMAC-SHA1-96 AES128-CTS-HMAC-SHA1-96 RC4-HMAC
        permitted_enctypes = AES256-CTS-HMAC-SHA1-96 AES128-CTS-HMAC-SHA1-96 RC4-HMAC
        allow_weak_crypto = true
        #dns_lookup_realm = true
        #dns_lookup_kdc = true

[realms]
        DOMAIN.CA = {
                kdc = xx.xx.xx.xx
                kdc = xx.xx.xx.xx
                admin_server = xx.xx.xx.xx
        }

[domain_realm]
        .domain.ca = DOMAIN.CA
        domain.ca = DOMAIN.CA

[kdc]
        profile = /var/kerberos/krb5kdc/kdc.conf

[appdefaults]
        kinit = {
                renewable = true
                forwardable = true
        }


This is the same for all OSes, but Solaris gives a warning; check with klist that the ticket was created.
# kinit admin-user-in-ad@DOMAIN.CA
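
If kinit succeeded, klist shows the cached ticket:

# klist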




Join the domain; this is the same for all OSes:

# net join -w DOMAIN -U admin-user-in-ad
Enter admin-user-in-ad's password:
Using short domain name -- DOMAIN
Joined 'SOMASS' to dns domain 'DOMAIN'
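
You can sanity-check the join afterwards; it should report that the join is OK:

# net ads testjoin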

configure PAM for SSO

#cp /etc/pam.conf-winbind /etc/pam.conf

Note and modify only the lines with pam_winbind.so.1:

#vi /etc/pam.conf
#
login   auth sufficient         pam_winbind.so.1        try_first_pass
login   auth requisite          pam_authtok_get.so.1
login   auth required           pam_dhkeys.so.1
login   auth required           pam_unix_cred.so.1
#login   auth sufficient          pam_krb5.so.1
login   auth binding            pam_unix_auth.so.1      server_policy
login   auth required           pam_dial_auth.so.1
#
# rlogin service (explicit because of pam_rhost_auth)
#
rlogin  auth sufficient         pam_rhosts_auth.so.1
rlogin  auth requisite          pam_authtok_get.so.1
rlogin  auth required           pam_dhkeys.so.1
rlogin  auth required           pam_unix_cred.so.1
#rlogin auth sufficient         pam_winbind.so.1        try_first_pass
rlogin  auth required           pam_unix_auth.so.1
#
#
krlogin auth required           pam_unix_cred.so.1
krlogin auth required           pam_krb5.so.1
#
# rsh service (explicit because of pam_rhost_auth,
# and pam_unix_auth for meaningful pam_setcred)
#
rsh     auth sufficient         pam_rhosts_auth.so.1
rsh     auth required           pam_unix_cred.so.1
#rsh     auth sufficient         pam_winbind.so.1      try_first_pass
#
# Kerberized rsh service
#
krsh    auth required           pam_unix_cred.so.1
krsh    auth required           pam_krb5.so.1
#
ktelnet auth required           pam_unix_cred.so.1
ktelnet auth required           pam_krb5.so.1
#
# PPP service (explicit because of pam_dial_auth)
#
ppp     auth requisite          pam_authtok_get.so.1
ppp     auth required           pam_dhkeys.so.1
ppp     auth required           pam_unix_cred.so.1
ppp     auth required           pam_unix_auth.so.1
ppp     auth required           pam_dial_auth.so.1
#
#
gdm-autologin auth  required    pam_unix_cred.so.1
gdm-autologin auth  sufficient  pam_allow.so.1
#
# Default definitions for Authentication management
# Used when service name is not explicitly mentioned for authentication
#
other   auth sufficient         pam_winbind.so.1        try_first_pass
other   auth requisite          pam_authtok_get.so.1
other   auth required           pam_dhkeys.so.1
other   auth required           pam_unix_cred.so.1
#other   auth sufficient          pam_krb5.so.1
other   auth required           pam_unix_auth.so.1
#
#
passwd  auth binding            pam_passwd_auth.so.1    server_policy
cron    account required        pam_unix_account.so.1
cups    account required        pam_unix_account.so.1

gdm-autologin account  sufficient  pam_allow.so.1
#
other   account sufficient      pam_winbind.so.1    try_first_pass
other   account requisite       pam_roles.so.1
other   account binding         pam_unix_account.so.1   server_policy
#
# Default definition for Session management
# Used when service name is not explicitly mentioned for session management
#
other   session required        pam_unix_session.so.1
other   session sufficient      pam_winbind.so.1        try_first_pass
#
# Default definition for Password management
# Used when service name is not explicitly mentioned for password management
#
other   password required       pam_dhkeys.so.1
other   password requisite      pam_authtok_get.so.1
other   password requisite      pam_authtok_check.so.1  force_check
#other  password sufficient     pam_winbind.so.1        try_first_pass
other   password required       pam_authtok_store.so.1
other   account sufficient         pam_ldap.so.1



enable services for samba

#svcadm enable winbind samba
# svcs winbind samba swat wins
STATE          STIME    FMRI
disabled       Dec_03   svc:/network/swat:default
disabled       Dec_03   svc:/network/wins:default
online         14:41:00 svc:/network/samba:default
online         14:41:01 svc:/network/winbind:default

Check that you can see all the AD groups and users:
# wbinfo -g
# wbinfo -i "user"
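
wbinfo talks to winbind directly; to confirm the name-service switch is wired in as well, getent should resolve the same AD user through NSS:

# getent passwd "user"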

add AD users to sudoers


#vi /etc/sudoers

User_Alias      CHIS = joe,rolando
CHIS ALL=(ALL) ALL
%AD-GROUP  ALL=(ALL)       ALL


Test by logging in with an AD user and running sudo commands.

Monitor your server with nagios

Installing guest domains ldm

Linux, guest domains or zones

Configure Host for LDOM
Installing guest domains Oracle Virtual server LDOM

If you are starting with a new SPARC System upgrade the firmware

There are many ways to skin a cat, depending on what you want to accomplish. If you want to be able to live migrate your VMs to another host, choose option B below. Option C is preferred with one LUN and many VMs, but does not allow live migration. The easiest setup, not recommended for production, is option A.

A - Create a disk image file on a large LUN. You can have many VMs on this LUN.

1. create a disk image file

# mkfile -n 30g  /vmdsk/disk0.img
 
2. define the vdisk; this has to be done on each server in the pool

# ldm add-vdsdev /vmdsk/disk0.img vol0@vds0

B - For faster I/O and the best redundancy, one LUN per VM.

1. get the logical path from luxadm probe

2. define the vdisk; this has to be done on each server in the pool

# ldm add-vdsdev /dev/rdsk/c0t60A9800044314F6C54244648362D3048d0s2 vol1@vds0

C - Use ZFS with a zpool on the LUN; no live migration.

zpool create ldompool /dev/rdsk/c3t500A098188567155d5

zfs create ldompool/somass

zfs create -V 30g ldompool/somass/OSsomass
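
The zvol can then be exported like any other backend. A sketch using the zvol created above (zvols appear under /dev/zvol/rdsk):

# ldm add-vdsdev /dev/zvol/rdsk/ldompool/somass/OSsomass OSsomass@vds0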


Now that you have decided how to use your storage, we set up the guest domains (VMs).


Using my preferred choice, option B, allows live migration of VMs.

create a guest domain named somass.

# ldm add-domain somass

Add two virtual CPUs and memory to the guest domain:

# ldm add-vcpu 2 somass

# ldm add-memory 2G somass

set properties on domain

#ldm set-var auto-boot\?=true somass

From the previous section, option B (one LUN per VM for HA) must be set up on each server in the pool.
Find the disks with luxadm probe and use the Node WWN with luxadm display "Node WWN":
# luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:500a098088567155  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t500A098188567155d0s2
    Logical Path:/dev/rdsk/c3t500A098198567155d0s2

With NetApp on iSCSI you can only see the serial number, so match the serial numbers to the LUNs if your storage manager only gives you LUN numbers.

~# luxadm display 500a098088567155 | less
DEVICE PROPERTIES for disk: 500a098088567155
  Vendor:               NETAPP
  Product ID:           LUN
  Revision:             820a
  Serial Num:           D1OlK+FYPaUI
  Unformatted capacity: 512078.000 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c3t500A098188567155d0s2

Once you have matched the serial numbers to the /dev/rdsk devices and sizes, define the vdisk devices:

# ldm add-vdsdev /dev/rdsk/c3t500A098188567155d0s2 OSsomass@vds0
# ldm add-vdsdev /dev/rdsk/c3t500A098198567155d3s2 Datasomass@vds0

Note that all servers the VM may migrate to must have the same vdsdev and volume names. You cannot simply copy and paste from one host to another if you want live migration: each host sees the iSCSI devices differently, so the /dev/rdsk/ paths are different.

# ldm add-vdisk OSsomass OSsomass@vds0 somass
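
The data LUN is attached to the guest the same way, assuming the Datasomass volume defined above:

# ldm add-vdisk Datasomass Datasomass@vds0 somass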

Check your new drives with ldm on both host servers; the volumes and vds must be the same:

#ldm ls-constraints
...
VDS
    NAME             VOLUME         OPTIONS          MPGROUP        DEVICE
    vds0          
                     sol11.2                                        /iso/sol-11_2-text-sparc.iso
                     OSsomass                                       /dev/rdsk/c3t500A098188567155d5s2
                     Datasomass                                     /dev/rdsk/c3t500A098198567155d3s2



# ldm set-var boot-device=OSsomass somass

For VLAN tags, configure the tagged VLANs with vid, configured the same on all servers that will host the VM. If you are not using VLANs, leave out the pvid and vid; no VLAN is needed on the guest either.

# ldm add-vsw pvid=1 vid=3,8 net-dev=net2 chis2-vsw primary

# ldm add-vnet pvid=1 vid=3,8 vnet2 chis2-vsw somass

Once again, use ldm ls-constraints to check that your settings match on both servers for live migration.

# ldm ls-constraints

DOMAIN
primary
......
VSW
    NAME         MAC   NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID
    chis2-vsw          net2     1   switch@1            1                1     3,8
DOMAIN
somass
........
NETWORK
    NAME    SERVICE    ID  DEVICE  MAC  MODE  PVID  VID
    vnet2   chis2-vsw  0                      1     3,8

Check that the correct VLANs are configured and that the VSW (virtual switch server) names are the same on both servers for live migration.

# ldm bind-domain somass
# ldm list-domain somass

To install the OS from an ISO, create a temporary device and attach it to the VM. I downloaded and stored the ISO in a folder; I can add this ISO as a virtual disk device and keep it around for future installations.

# ldm add-vdsdev /vmdsk/sol-11_2-text-sparc.iso iso@vds0
# ldm add-vdisk s11-dvd iso@vds0 somass

Start the guest domain

# ldm start somass

Check the console (CONS) port number with:

#ldm ls
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME
somass           active     -n----  5000    2     8G       0.9%  0.9%

#telnet localhost 5000

{0} ok devalias

Find the disk with the label s11-dvd:

{0} ok boot  s11-dvd

Configure as it suits your needs. If you are using VLANs then you need to configure as indicated below.

On guest domain VM somass:

# dladm create-vlan -l net0 -v 3
# dladm show-link
LINK                CLASS     MTU    STATE    OVER
net0                phys      1500   up       --
net3000             vlan      1500   up       net0

# ipadm create-ip net3000
# ipadm delete-ip net0
# ipadm create-addr -T static -a 192.168.1.3/24 net3000


Remove the ISO after installation:

# ldm remove-vdisk s11-dvd somass

Use format to find the data drive inside the VM and create a new zpool on it. This is Datasomass@vds0:

~# format
Searching for disks...done

c1d2: configured with capacity of 2048.00GB


AVAILABLE DISK SELECTIONS:
       0. c1d1 <NETAPP-LUN-820a-30.00GB>
          /virtual-devices@100/channel-devices@200/disk@1
       1. c1d2 <NETAPP-LUN-820a-2.00TB>
          /virtual-devices@100/channel-devices@200/disk@2
Specify disk (enter its number):

then press Ctrl-C to exit format.

#zpool create data c1d2
# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
data   1.98T   118K  1.98T   0%  1.00x  ONLINE  -
rpool  29.8G  10.3G  19.4G  34%  1.00x  ONLINE  -

Test the live migration:

#ldm migrate somass my-secondhost
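
You can also rehearse the migration first: the -n flag performs a dry run that reports configuration mismatches without actually moving the domain:

#ldm migrate -n somass my-secondhost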

If you want to authenticate against AD

If you need to use gnome or the desktop package

Monitor your server with nagios

LDOM configuration, VM Server for SPARC

LDM configuration and installation

Enable the ldmd and vntsd services:

# svcadm enable vntsd
# svcadm enable ldmd


create virtual console concentrator

# ldm add-vcc port-range=5000-5100 vcc0 primary

Create the virtual disk server with the same name on all servers in a pool *

# ldm add-vds vds0 primary

Create the virtual switch server with the same name on any server that will accept migrations *

#ldm add-vsw net-dev=net2 pgc-vsw primary

configure control domain

# ldm set-vcpu 8 primary

Start a delayed reconfig

#  ldm start-reconf primary

set memory for the primary domain
# ldm set-memory 4G primary

Add a logical domain config

# ldm add-config initial
# ldm list-config

# shutdown -y -g0 -i6

Install Solaris as a guest domain


Installation of Solaris 11.2 on SPARC

Installation of Solaris 11.2

ssh to host ILOM

ssh root@ilom-ip-address

->set /HOST/bootmode script="setenv auto-boot? false"

->reset /SYS

Switch from the ILOM to the console with the Launch Console button and get to the ok prompt. I am booting from a remote DVD, set up through the ILOM interface.

{0} ok boot rcdrom


After the installation starts, choose the default US-English for language and keyboard.

Choose local disks

Choose the first disk /SYS/SASBP/HDD0

and make a note of the other disks' labels; you will need them later.

Use the entire disk.
Set the host name, nrnbcvicsopxxx in my case.

Choose manual network setup.
Configure net0 (igb0)

IP
Netmask
router

Configure DNS

Search
 my.domain.com

Alternate name service
 none
Time zone
 Americas > Canada > Pacific time - west BC
Locale
 English
Territory
 US
Set time
Keyboard
 US-English

Set the root password and at least one user.

support and registration...
no proxy

After the installation, go back to the ILOM.


Before rebooting:

->set /HOST/bootmode script="setenv auto-boot? false"

The system should now boot into the new install.

enable sar

# svcadm enable svc:/system/sar:default

# crontab -e sys

Remove the comments from the three sar lines.
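
The commented lines look roughly like this in the stock sys crontab (schedules may differ by release); remove the leading # from each:

# 0 * * * 0-6 /usr/lib/sa/sa1
# 20,40 8-17 * * 1-5 /usr/lib/sa/sa1
# 5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A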

System is now ready

Configure LDOM (VM Server for SPARC)