From popdata

Adding a mirrored drive online

Install the utility, then run the array commands from the hpacucli console:

apt-get install hpacucli
hpacucli
ctrl slot=0 show config
ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4 raid=1
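The create command takes a comma-separated drives= list. A small sketch of building that list from a port:box prefix and a set of bay numbers (the helper function is hypothetical, not part of hpacucli):

```shell
# Hypothetical helper (not part of hpacucli): build the drives= list
# from a port:box prefix and a set of bay numbers.
drive_list() {
    prefix=$1; shift
    out=""
    for bay in "$@"; do
        out="${out:+$out,}$prefix:$bay"
    done
    printf '%s\n' "$out"
}

# Example:
#   drive_list 1I:1 3 4    prints 1I:1:3,1I:1:4
#   hpacucli ctrl slot=0 create type=ld \
#       drives="$(drive_list 1I:1 3 4)" raid=1
```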

ILO Server Management

  • HP ILO User Guide
  • HP ILO Command Reference

All the HP servers have an Integrated Lights-Out (ILO) management interface. Through this interface you have access to:

  • environmental information about the machine (fans, temperature, event logs)
  • virtual power (turn the machine on/off)
  • virtual console (access to the bios, OS serial console)

You can access the ILO in two ways:

Secure Shell

ssh -x -a -l pd-admin <ilo-address>

(replace <ilo-address> with the hostname or IP of the machine's ILO)

(If you get the password wrong twice, you have to wait a couple of minutes before trying again.)

Then you have access to a number of commands:

VSP - Virtual Serial Port. This gives you access to the second serial port.

POWER - Virtual Power.

  • power on the server: power on
  • power off the server: power off
  • warm boot: power warm

Web Browser

Resetting the ILO

  • install software: apt-get install hponcfg
  • read config: hponcfg -w /tmp/out.xml; cat /tmp/out.xml
  • reset password
    • make config file /tmp/cfg.xml
<RIBCL VERSION="2.0">
 <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
  <USER_INFO MODE="write">
   <MOD_USER USER_LOGIN="Administrator">
    <PASSWORD value="NewPassword33"/>
   </MOD_USER>
  </USER_INFO>
 </LOGIN>
</RIBCL>
    • apply it: hponcfg -f /tmp/cfg.xml
  • add a user file (as above; "newuser" and the passwords are placeholders, and the duplicate reset privilege line has been removed):
<RIBCL VERSION="2.0">
 <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
  <USER_INFO MODE="write">
   <ADD_USER USER_NAME="newuser" USER_LOGIN="newuser" PASSWORD="newpassword">
    <ADMIN_PRIV value="Y"/>
    <REMOTE_CONS_PRIV value="Y"/>
    <RESET_SERVER_PRIV value="Y"/>
    <CONFIG_ILO_PRIV value="Y"/>
   </ADD_USER>
  </USER_INFO>
 </LOGIN>
</RIBCL>
  • set the network (as above; network settings go inside RIB_INFO/MOD_NETWORK_SETTINGS):
<RIBCL VERSION="2.0">
 <LOGIN USER_LOGIN="user" PASSWORD="password">
  <RIB_INFO MODE="write">
   <MOD_NETWORK_SETTINGS>
    <PRIM_DNS_SERVER value=""/>
   </MOD_NETWORK_SETTINGS>
  </RIB_INFO>
 </LOGIN>
</RIBCL>

Check hardware RAID drives

# hpacucli
HP Array Configuration Utility CLI
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

=> ctrl slot=0 show config

Smart Array P420i in Slot 0 (Embedded)    (sn: 5001438020CE7A40)

   array A (Solid State SATA, Unused Space: 0  MB)

      logicaldrive 1 (3.6 TB, RAID 1+0, Recovering, 7% complete)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, Solid State SATA, 1 TB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, Solid State SATA, 1 TB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, Solid State SATA, 1 TB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, Solid State SATA, 1 TB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, Solid State SATA, 1 TB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, Solid State SATA, 1 TB, OK)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, Solid State SATA, 1 TB, OK)
      physicaldrive 2I:1:8 (port 2I:box 1:bay 8, Solid State SATA, 1 TB, Rebuilding)

   SEP (Vendor ID PMCSIERA, Model SRCv8x6G) 380 (WWID: 5001438020CE7A4F)

=> ctrl slot=0 show config detail
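While an array is recovering (as in the output above), it can be handy to poll the controller until the rebuild finishes. A sketch, assuming the hpacucli output format shown above (the function names are my own):

```shell
# Extract the "logicaldrive" status lines from `show config` output.
ld_status() {
    grep 'logicaldrive' | sed 's/^[[:space:]]*//'
}

# Poll every 5 minutes until no logical drive reports
# Recovering/Rebuilding (requires the controller and hpacucli).
watch_rebuild() {
    while hpacucli ctrl slot=0 show config | ld_status \
            | grep -Eq 'Recovering|Rebuilding'; do
        sleep 300
    done
    echo "rebuild complete"
}
```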

Encrypted Partitions George/Defuca/Sherry


  • iSCSI devices must already have been discovered and be present on the system for the scripts below to function. If the system has already used iSCSI devices, this should happen without any additional intervention.
  • I have yet to find a way to discover new iSCSI partitions without risking renaming existing devices and partitions (at least with Red Hat). I suggest you unmount and remove the decrypted references to existing partitions and restart the iSCSI subsystem.
  • iSCSI devices can be discovered as different SCSI devices with each reboot or discovery attempt. Encrypted partitions cannot rely on filesystem labels, because the filesystem has not yet been decrypted, and other identifiers associated with the devices may also change. The scripts described below scan all SCSI partitions for luksUUIDs matching those expected, and mount/unmount them predictably.
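A minimal sketch of the scan-and-match idea those scripts implement (the function names and two-column file layout are assumptions, not the actual scripts):

```shell
# Sketch only: record the luksUUID of every SCSI partition found.
scan_luks() {
    for part in /dev/sd[a-z]*[0-9]; do
        [ -e "$part" ] || continue
        uuid=$(cryptsetup luksUUID "$part" 2>/dev/null) || continue
        printf '%s %s\n' "$uuid" "$part"
    done
}

# Print discovered entries whose UUID also appears in the expected
# config file (first column of each file is the luksUUID).
match_luks() {
    discovered=$1 expected=$2
    while read -r uuid dev; do
        grep -q "^$uuid " "$expected" && printf '%s %s\n' "$uuid" "$dev"
    done < "$discovered"
}

# Usage sketch:
#   scan_luks > luks_uuids_discovered.dat
#   match_luks luks_uuids_discovered.dat luks_uuids.dat
```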

Add a new encrypted partition on a system with existing encrypted partitions

If you sudo, make sure you run su - afterwards: you must be the root user to reference the correct gpg keys.

If it is a SAN slice, reboot to allow iSCSI to pick up the new partition. Warning: rescanning iSCSI devices may shift existing device names, so avoid attempting this task without a reboot.

Scan for a new device with no partitions:

fdisk -l

Create a new partition table:

fdisk /dev/sd#

Use fdisk to create a new Linux partition.

Add the encryption layer:

cryptsetup luksFormat /dev/sd##

Note the passphrase

Save a copy of the existing readbuffer.c.gpg

cp readbuffer.c.gpg readbuffer.c.gpg.$(date +%Y%m%d)

Add the passphrase to the master encrypted key file:

gpg -r mounter --decrypt < readbuffer.c.gpg > foo.c

Edit foo.c and add a line in the format: reference-name pass phrase

where "pass phrase" is the passphrase given to cryptsetup when encrypting the device above.

Recreate the encrypted file

gpg -r mounter --encrypt < foo.c > readbuffer.c.gpg

If successful, remove foo.c.

Determine the luksUUID for the partition:

cryptsetup luksUUID /dev/sd##

Add an entry to the configuration file (luks_uuids.dat):

Format: luksUUID pass-phrase-reference-name mount-point

Create a reference to the encrypted device and create the filesystem:

cryptsetup luksOpen /dev/sd## reference-name
mkfs.ext3 /dev/mapper/reference-name
tune2fs -c0 /dev/mapper/reference-name
cryptsetup luksClose reference-name

The mount script should now mount your device.
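The steps above, collected into one hedged sketch (the function name is mine; device and reference names are placeholders; run as root and expect interactive passphrase prompts):

```shell
# Hedged consolidation of the provisioning steps above.
# $1 = partition (e.g. /dev/sdd1), $2 = reference-name.
provision_encrypted() {
    dev=$1 ref=$2
    cryptsetup luksFormat "$dev" || return 1   # choose and note the passphrase
    echo "luksUUID (record it in luks_uuids.dat):"
    cryptsetup luksUUID "$dev"
    cryptsetup luksOpen "$dev" "$ref" || return 1
    mkfs.ext3 "/dev/mapper/$ref"
    tune2fs -c0 "/dev/mapper/$ref"             # disable mount-count fsck
    cryptsetup luksClose "$ref"
}

# Usage sketch: provision_encrypted /dev/sdd1 myref
```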

On defuca, george, and sherry currently:

  • Encrypted partitions are mounted only once even if they are discovered multiple times.
  • The script must be run manually after boot to provide the passphrases for the encrypted partitions


  • is a configuration file; each line lists:
    • the luksUUID associated with a luks partition
    • a name for the decrypted reference point
    • the location the encrypted partition should be mounted at


  • run from su - or sudo -i to have an appropriate path
  • execute it and provide the passphrase that gives access to the individual passphrases for the encrypted partitions
  • scans the partitions of SCSI devices looking for luks partitions
  • creates a file luks_uuids_discovered.dat
  • matches the luksUUIDs found against luks_uuids.dat
  • the individual passphrases are stored encrypted in /root/encyption/readbuffer.c.gpg; one passphrase for readbuffer.c.gpg provides system access to the individual passphrases used to decrypt and mount the established encrypted partitions
  • it then:
    • decrypts the partitions
    • mounts the partitions



  • unmounts the encrypted partitions that are listed in luks_uuids.dat and mounted
  • removes the decrypted reference points

Changing Passphrases

In general, you:

  • unmount the partition
    • umount /mount/point
  • remove the decrypted reference point (review luks_uuids.dat or /dev/mapper/*)
    • cryptsetup remove name
  • determine the SCSI device the partition is on
    • Review /root/encrypted/luks_uuids_discovered.dat and luks_uuids.dat
  • determine the position of the current luks key.
    • cryptsetup luksDump /dev/sd##
      • if multiples exist you need to determine the position of the key to remove
  • add a luks key
    • cryptsetup luksAddKey /dev/sd##
      • (provide the existing pass phrase before providing a new pass phrase)
  • check that the key works by decrypting and mounting using the new key
    • (you can run /root/encrypted/ to remount)
  • unmount and remove the decrypted reference point as done above
  • remove the old key
    • cryptsetup luksDelKey /dev/sd## #
      • where the last # is the key position
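The rotation steps above can be sketched as one sequence (a hedged outline; the function name is mine, the device and slot number are placeholders, and luksDelKey is the older cryptsetup spelling used above, called luksKillSlot in newer versions):

```shell
# Hedged sketch of the key rotation above. $1 = device (e.g.
# /dev/sdd1), $2 = old key slot number to remove. Run as root;
# cryptsetup prompts interactively for passphrases.
rotate_luks_key() {
    dev=$1 old_slot=$2
    cryptsetup luksDump "$dev"               # confirm which key slots are in use
    cryptsetup luksAddKey "$dev"             # existing passphrase, then the new one
    # ...verify the new key by decrypting and mounting before removing the old...
    cryptsetup luksDelKey "$dev" "$old_slot" # luksKillSlot on newer cryptsetup
}
```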

Milo Tape backup after power outage

  • The tape library powers up slowly and is usually not in a ready state when Milo scans for SCSI devices.
  • Easiest fix: reboot Milo once after the tape library has fully powered up.
  • Alternative, if the tape library's SCSI devices were detected (/dev/sg[0123456] should exist): restart the arkeia daemon (service arkeia restart).
  • Alternative, if the tape library's SCSI devices were not detected (/dev/sg[0123456] do not exist): it is easiest to reboot, but sometimes you can rescan the SCSI bus as per [Redhat].
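The rescan mentioned above can be done through sysfs; a sketch of the Red Hat method (the function name is mine; requires root, and results depend on the driver):

```shell
# Sketch: ask every SCSI host adapter to rescan its bus, so a
# late-arriving tape library can be found without a reboot.
# "- - -" is the wildcard for channel/target/lun.
rescan_scsi_bus() {
    for host in /sys/class/scsi_host/host*; do
        [ -d "$host" ] || continue
        echo "- - -" > "$host/scan"
    done
}
```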

Milo Areca RAID utility

  • utility named cli32
  • Notes from last failure:

David requested I type up information relevant to the failure on Milo. Originally, David identified that Milo was producing an audible alert different from the typical redundant power supply alert.

On review, the RAID status was degraded. Pertinent information:

Milo uses the Areca RAID controller. It has a utility named "cli32". There are help screens within the utility, and a PDF describing usage is available on the Areca site. Within cli32:

   event info

displayed a log indicating the RAID was degraded at 13:18 on June 4th and IDE Channel #3 was removed, and

   vsf info

confirmed the volumes were all degraded.

   disk info

Identified that there was a "free" drive in channel 12 and channel 3 was missing.

We noted all the serial numbers for the active drives. Powered down the server and pulled the failed drive confirming the serial number was not in our active list. Wanting to keep all the active drives together we moved the "free" drive into the slot for the failed drive and booted. The "free" drive was not listed in the raid utility.

We powered down the system and moved the "free" drive back to its original slot. It was recognized. We marked the free drive as a hot spare with the command:

rsf createhs drv=12

Using the command

   vsf info

we saw the RAID array change status to Rebuilding right after declaring the free drive as a hot spare. I will periodically check the status and send David a confirmation email when the array has finished rebuilding.

As the "free" drive did not work in slot 3, there was some speculation that the failed drive itself may be OK and the cable may be at fault, but we did not explore further.

Setting up the new SANs

  • Two mirrored 2TB drives (they have to be smaller than 2TB to be able to boot with a normal BIOS).
  • Rest of drives are RAID6 arrays
  • raids can be configured during boot (hitting some key brings up the utility)
  • standard fai install server. Image needs to have the 3-ware firmware in the initrd.img (it does)
  • add packages: apt-get install tgt gparted saidar
  • add the 3-ware tools: