2018-09-14

Finding which HMC and Managed System your LPARs are running on

It helps a lot for system administrators to have a quick way to find which managed system an LPAR is running on.  I use this one-liner whenever someone asks me to locate an LPAR, so it's really quite handy.

hscroot@phmc01:~> for sys in $(lssyscfg -r sys -F "name"); do echo -e "$sys\n---"; lssyscfg -m $sys -r lpar -F "name,state"; echo; done
SYPWR02
---
DSAP24,Running
QSAP35,Running
PSAP96,Running
PVIO13B,Running
PSAP98,Running
PVIO13A,Running
DLNX03,Running

SYPWR04
---
PRD48,Not Activated
DTST04,Not Activated
PSAP32,Running
PSAP30,Running
PSAP26,Running
PSAP57,Running
PSAP55,Running
PSAP53,Running
PSAP07,Running
PSAP94,Not Activated
PSAP05,Running
PSAP92,Not Activated
PSAP90,Running
PSAP88,Running
PMQ02,Running
PSAP47,Running
PCSDK02,Running
PSAP45,Running
PNFS02,Running
PSAP35,Running
PSAP75,Not Activated
PVIO06B,Running
PSAP33,Running
PVIO06A,Running

SYPWR06
---
DSAP09,Running
QSAP05,Running
CSAP06,Running
QSAP04,Running
QSAP03,Running
QSAP22,Running
PRD45,Running
DTST03,Running
DSAP11,Running
DSAP17,Running
DSAP16,Running
DSAP15,Running
QSAP18,Running
PTWSDB01,Running
PSAP14,Not Activated
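
To locate a single LPAR instead of listing everything, the same loop can be turned into a search (a small variant of the command above; DLNX03 is just an example name and the match is exact). Given the listing above, this prints the managed system it sits on:

hscroot@phmc01:~> lpar=DLNX03; for sys in $(lssyscfg -r sys -F "name"); do lssyscfg -m $sys -r lpar -F "name" | grep -qx "$lpar" && echo "$lpar is on $sys"; done
DLNX03 is on SYPWR02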

2018-08-03

The -native flag in the inq command

Our storage guy asked me why he could not see the old VMAXes mentioned in the service request.
# inq -nodots -sym_wwn | head -n 20
-------------------------------------------------------------------------
Symmetrix DeviceSymm Serial #  Device #  WWN             
-------------------------------------------------------------------------
/dev/rhdisk2    000295700781   01810     60000970000295700781533031383130
/dev/rhdisk3    000295700916   01919     60000970000295700916533031393139
/dev/rhdisk4    000295700781   09D54     60000970000295700781533039443534
/dev/rhdisk5    000295700916   08CF4     60000970000295700916533038434634
/dev/rhdisk6    000295700781   016E7     60000970000295700781533031364537
/dev/rhdisk7    000295700916   06DB0     60000970000295700916533036444230
/dev/rhdisk8    000295700781   05794     60000970000295700781533035373934
/dev/rhdisk9    000295700781   0579C     60000970000295700781533035373943
/dev/rhdisk10   000295700916   09A1A     60000970000295700916533039413141
/dev/rhdisk11   000295700916   09A22     60000970000295700916533039413232
/dev/rhdisk12   000295700781   052A8     60000970000295700781533035324138

It turned out I had missed the "-native" flag.
# inq -nodots -native -sym_wwn | head -n 20
-------------------------------------------------------------------------
Symmetrix DeviceSymm Serial #  Device #  WWN             
-------------------------------------------------------------------------
/dev/rhdisk2    000297700113   008EF     60000970000295700781533031383130
/dev/rhdisk3    000297700107   0089D     60000970000295700916533031393139
/dev/rhdisk4    000297700113   008F3     60000970000295700781533039443534
/dev/rhdisk5    000297700107   008A0     60000970000295700916533038434634
/dev/rhdisk6    000297700113   008EE     60000970000295700781533031364537
/dev/rhdisk7    000297700107   0089F     60000970000295700916533036444230
/dev/rhdisk8    000297700113   008F1     60000970000295700781533035373934
/dev/rhdisk9    000297700113   008F2     60000970000295700781533035373943
/dev/rhdisk10   000297700107   008A1     60000970000295700916533039413141
/dev/rhdisk11   000297700107   008A2     60000970000295700916533039413232
/dev/rhdisk12   000297700113   008F0     60000970000295700781533035324138

Thanks to my senior colleague for this one.  It is very useful especially if the storage has gone through a migration: the default output keeps reporting the old Symmetrix serials, while -native shows the serials of the arrays the devices actually sit on now.  This extra parameter does not appear in the help output, though:
# inq -help | grep -i native | wc -l
       0
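
To line up the old (federated) serials against the native ones per disk, something like this works (a sketch using standard tools; the /tmp paths are just scratch files, and NR>3 skips the three header lines shown above):

# inq -nodots -sym_wwn | awk 'NR>3 {print $1, $2}' > /tmp/federated
# inq -nodots -native -sym_wwn | awk 'NR>3 {print $2}' > /tmp/native
# paste /tmp/federated /tmp/native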


2018-06-26

Problem: nothing provides boost-devel needed by qpid-cpp-client-1.35.0-1.x86_64

A note from one of the servers where I needed to install apigee, which requires the boost-devel libraries.

# zypper in apigee-qpidd*
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP3_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Problem: nothing provides boost-devel needed by qpid-cpp-client-1.35.0-1.x86_64
 Solution 1: do not install apigee-qpidd-4.17.05-0.0.826.noarch
 Solution 2: break qpid-cpp-client-1.35.0-1.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c

Added this repo (the openSUSE devel:libraries:c_c++ repository):

# zypper addrepo https://download.opensuse.org/repositories/devel:libraries:c_c++/openSUSE_Leap_42.3/devel:libraries:c_c++.repo
# zypper refresh
# zypper in apigee-qpidd*
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP3_x86_64'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 17 NEW packages are going to be installed:
  apigee-qpidd libboost_headers1_67_0-devel librdmacm1 libstdc++-devel python-qpid python-qpid-common python-qpid-qmf python-saslwrapper qpid-cpp-client qpid-cpp-client-rdma qpid-cpp-server
  qpid-cpp-server-linearstore qpid-proton-c qpid-qmf qpid-tools saslwrapper xqilla
The following 15 packages have no support information from their vendor:
  apigee-qpidd libboost_headers1_67_0-devel python-qpid python-qpid-common python-qpid-qmf python-saslwrapper qpid-cpp-client qpid-cpp-client-rdma qpid-cpp-server qpid-cpp-server-linearstore qpid-proton-c
  qpid-qmf qpid-tools saslwrapper xqilla
17 new packages to install.
Overall download size: 54.7 MiB. Already cached: 0 B. After the operation, additional 348.9 MiB will be used.
Continue? [y/n/...? shows all options] (y):
Retrieving package libstdc++-devel-4.8-6.189.x86_64                                                                                                                      (1/17),   4.8 KiB (   72   B unpacked)
Retrieving: libstdc++-devel-4.8-6.189.x86_64.rpm ........................................................................................................................................................[done]
Retrieving package librdmacm1-14-8.11.1.x86_64                                                                                                                           (2/17),  44.1 KiB ( 87.6 KiB unpacked)
Retrieving: librdmacm1-14-8.11.1.x86_64.rpm .............................................................................................................................................................[done]
Retrieving package python-qpid-common-1.35.0-1.noarch                                                                                                                    (3/17), 235.7 KiB (  1.2 MiB unpacked)
Retrieving package qpid-proton-c-0.14.0-1.x86_64                                                                                                                         (4/17), 424.2 KiB (  1.6 MiB unpacked)
Retrieving package saslwrapper-0.22-1.x86_64                                                                                                                             (5/17),  55.5 KiB (205.4 KiB unpacked)
Retrieving package xqilla-2.3.3-1.x86_64                                                                                                                                 (6/17),   8.4 MiB ( 43.5 MiB unpacked)
Retrieving package python-saslwrapper-0.22-1.x86_64                                                                                                                      (7/17),  75.8 KiB (277.9 KiB unpacked)
Retrieving package python-qpid-1.35.0-1.noarch                                                                                                                           (8/17),  79.0 KiB (306.9 KiB unpacked)
Retrieving package libboost_headers1_67_0-devel-1.67.0-235.2.x86_64                                                                                                      (9/17),   9.2 MiB (113.6 MiB unpacked)
Retrieving: libboost_headers1_67_0-devel-1.67.0-235.2.x86_64.rpm ............................................................................................................................[done (9.6 MiB/s)]
Retrieving package qpid-cpp-client-1.35.0-1.x86_64                                                                                                                      (10/17),  11.6 MiB ( 61.6 MiB unpacked)
Retrieving package qpid-cpp-client-rdma-1.35.0-1.x86_64                                                                                                                 (11/17), 489.2 KiB (  2.2 MiB unpacked)
Retrieving package python-qpid-qmf-1.35.0-1.x86_64                                                                                                                      (12/17), 425.2 KiB (  2.2 MiB unpacked)
Retrieving package qpid-cpp-server-linearstore-1.35.0-1.x86_64                                                                                                          (13/17),   1.8 MiB (  9.1 MiB unpacked)
Retrieving package qpid-qmf-1.35.0-1.x86_64                                                                                                                             (14/17),   1.2 MiB (  6.0 MiB unpacked)
Retrieving package qpid-cpp-server-1.35.0-1.x86_64                                                                                                                      (15/17),  20.5 MiB (106.4 MiB unpacked)
Retrieving package qpid-tools-1.35.0-1.noarch                                                                                                                           (16/17), 111.2 KiB (548.6 KiB unpacked)
Retrieving package apigee-qpidd-4.17.05-0.0.826.noarch                                                                                                                  (17/17),  17.1 KiB ( 35.3 KiB unpacked)
Checking for file conflicts: ............................................................................................................................................................................[done]
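
For next time, a quicker way to check up front whether any configured repo provides a capability (not part of the original session; depending on the zypper version this is spelled what-provides or search --provides):

# zypper what-provides boost-devel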

2018-06-12

Extend /tmp on Solaris

I needed to add 2G to /tmp in order to do a NetBackup upgrade (v7.7.3).  On Solaris, /tmp is a swap-backed tmpfs, so growing swap grows /tmp.  In this case the push upgrade kept failing, so I ended up installing the client using "install_client_files sftp $CLIENT mmond".

mmond@sehp01:~$ df -h /tmp
Filesystem             Size   Used  Available Capacity  Mounted on
swap                   374M   280M        94M    75%    /tmp
mmond@sehp01:~$ sudo zfs list rpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  45.0G  3.94G  76.5K  /rpool
mmond@sehp01:~$ sudo zfs create -p -V 2G rpool/swapextra
mmond@sehp01:~$ sudo swap -a /dev/zvol/dsk/rpool/swapextra
mmond@sehp01:~$ sudo swap -lh
swapfile                      dev            swaplo      blocks        free
/dev/zvol/dsk/rpool/swap      301,1              8K         10G        3.2G
/dev/zvol/dsk/rpool/swapextra 301,10             8K        2.0G        2.0G 
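
A quick sanity check that /tmp actually grew (Available should go up by roughly the 2G of swap just added):

mmond@sehp01:~$ df -h /tmp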

After the upgrade I just removed the zvol I used to extend /tmp.

mmond@sehp01:~$ sudo swap -d /dev/zvol/dsk/rpool/swapextra
mmond@sehp01:~$ sudo zfs destroy rpool/swapextra


2018-05-28

The agent installer detected an existing device record 'Server ID 188440001 (skttp1ftp01.int.skat.dk)' in the core that has the same chassis ID

For the past few days, I struggled with a problem deploying 5 servers via the HP Server Automation (HPSA) tool.

Checked the following together with our data center guy:

  1. VLAN has properly been identified and configured.
  2. Can ping both directions (servers and jumphosts) and can access/download updates from Red Hat, so I know the VLAN is indeed working as expected.

Then I found some errors by looking at the logs (/var/log/opsware/agent/*):

[10/Jan/2010 00:49:29 +0200] INFO "SSLError: ('error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed', 336134278, 'Hint: 1. User is not root, permission denied to open port.', 'Hint: 2. System time far away in the future or past might be the root cause of this error.')

I noticed that the timestamp in the log was way off (2010), and when I checked, the server I was deploying did not have the correct system date/time.  So I just did a

# systemctl stop ntpd
# ntpdate time.csn.local
# systemctl start ntpd
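
To confirm the daemon is back in sync afterwards (a quick check, not captured in the original notes):

# ntpq -p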

And another error appeared (on another server, even after correcting the system date/time).

# tailf agent.err
  method:  updateDevice
  module:  spinmethods.py
  params:  {'msg': "The agent installer detected an existing device record 'Server ID 188440001 (sktp1fftp0ma02.ccta.dk)' in the core that has the same chassis ID; the installer attempted to reclaim this device record but failed to because the device record is not configured to accept a request to issue a certificate.\n\nTo solve this problem, please refer to knowledge base article: http://support.openview.hp.com/selfsolve/document/KM546387\n"}
  request:  UNKNOWN
  tb_chain:  [[{'function': '_call', 'line': 178, 'file': './shadrpc.py'}, {'function': 'call', 'line': 74, 'file': './spinrpc.py'}, {'function': '__call__', 'line': 384, 'file': './spinmethods.py'}, {'function': 'handle', 'line': 720, 'file': './spinmethods.py'}, {'function': 'inner', 'line': 7210, 'file': './spinmethods.py'}, {'function': 'updateDevice', 'line': 9753, 'file': './spinmethods.py'}, {'function': 'updateDevice', 'line': 7522, 'file': './spinmethods.py'}]]
  timestamp:  26/May/2018 200334
  timeticks:  None

So I needed to:
  1. Deactivate and unregister the SA Agent in HPSA
  2. Uninstall the agent on the server
  3. And re-register it using "--force_new_device"
# /opt/opsware/agent/bin/agent_uninstall.sh
# ./opsware-agent-60.0.62732.1-linux-7SERVER-X86_64 --opsw_gw_addr 152.103.214.100:3001 --force_new_device

About sysdumpdev on AIX

rootvg had stale partitions, but active dump devices can block varyonvg from resynchronizing rootvg.  The trick: point both dump devices at /dev/sysdumpnull first, run varyonvg rootvg to kick off the resync, then set the dump devices back.

[qsap22:root]/ # lsvg -l rootvg |grep dump
livedump            jfs2       2       4       2    open/syncd    /var/adm/ras/livedump
lg_dump_p_lv        dump       16      16      1    open/syncd    N/A
lg_dump_s_lv        dump       16      16      1    open/syncd    N/A
[qsap22:root]/ # lsvg rootvg |grep STALE
STALE PVs:          1                        STALE PPs:      40
[qsap22:root]/ # sysdumpdev -P -s /dev/sysdumpnull
primary              /dev/lg_dump_p_lv
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON
type of dump         fw-assisted
full memory dump     disallow
[qsap22:root]/ # sysdumpdev -P -p /dev/sysdumpnull
primary              /dev/sysdumpnull
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON
type of dump         fw-assisted (suspend)
full memory dump     disallow
[qsap22:root]/ # varyonvg rootvg
[qsap22:root]/ # sysdumpdev -P -s /dev/lg_dump_s_lv
primary              /dev/sysdumpnull
secondary            /dev/lg_dump_s_lv
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON
type of dump         fw-assisted (suspend)
full memory dump     disallow
[qsap22:root]/ # sysdumpdev -P -p /dev/lg_dump_p_lv
primary              /dev/lg_dump_p_lv
secondary            /dev/lg_dump_s_lv
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON
type of dump         fw-assisted
full memory dump     disallow
[qsap22:root]/ # while true;do lsvg rootvg |grep STALE; sleep 5;done
STALE PVs:          1                        STALE PPs:      34
STALE PVs:          1                        STALE PPs:      33
STALE PVs:          1                        STALE PPs:      31
STALE PVs:          1                        STALE PPs:      29
STALE PVs:          1                        STALE PPs:      27
STALE PVs:          1                        STALE PPs:      24
STALE PVs:          1                        STALE PPs:      22
STALE PVs:          1                        STALE PPs:      20
STALE PVs:          1                        STALE PPs:      18
STALE PVs:          1                        STALE PPs:      16
STALE PVs:          1                        STALE PPs:      13
STALE PVs:          1                        STALE PPs:      11
STALE PVs:          1                        STALE PPs:      9
STALE PVs:          1                        STALE PPs:      7
STALE PVs:          1                        STALE PPs:      7
STALE PVs:          1                        STALE PPs:      7
STALE PVs:          1                        STALE PPs:      5
STALE PVs:          1                        STALE PPs:      4
STALE PVs:          1                        STALE PPs:      3
STALE PVs:          1                        STALE PPs:      3
STALE PVs:          1                        STALE PPs:      2
STALE PVs:          0                        STALE PPs:      0
STALE PVs:          0                        STALE PPs:      0
STALE PVs:          0                        STALE PPs:      0
STALE PVs:          0                        STALE PPs:      0
STALE PVs:          0                        STALE PPs:      0
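
To identify which physical volume holds the stale copies in the first place, lsvg can list the PV states (not captured in the session above; the problem disk stands out in the PV STATE column):

[qsap22:root]/ # lsvg -p rootvg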

# Show VIO vscsi mapping

[qsap25:root]/ # echo "cvai" | kdb -script |grep vscsi
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C014CE10
vscsi0     0x000007 0x0000000000 0x0                PVIO09A->vhost12
vscsi1     0x000007 0x0000000000 0x0                PVIO09B->vhost12

smitty mpio -> Path Management -> enable all paths often helps in similar situations (mostly data disks; this time we had rootvg).

# Data disks

[qsap25:root]/ # for adapter in `lsdev -Ccadapter |grep ^fcs |awk '{print $1}'`; do echo $adapter: `echo "vfcs $adapter" |kdb |grep host_name |awk '{print $2,$4}'`; done
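
On the VIOS side, the vhost names from the cvai output above can be resolved to their backing devices (run as padmin on the VIOS, e.g. PVIO09A; vhost12 is taken from the mapping above):

$ lsmap -vadapter vhost12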

2018-03-01

Using Jumpcloud's LDAP-as-a-Service + Multi-Factor Authentication for SSH Login in Linux

I am doing a proof of concept for a personal project using Jumpcloud's LDAP-as-a-Service, which is free for up to 10 users.  In the long run, I plan to set up my home lab and create my virtual office there.

  1. Spin up a Linux virtual machine in Linode.
  2. Create an account in Jumpcloud.
  3. Enable Multi-Factor Authentication (MFA) on the europium Linux system via Jumpcloud's portal.  In this case, I used the Google Authenticator app (that's why the login below asks for a verification code).

Here's sample output from the setup I have done.  I will try to make it more detailed soon.


[mmond@nx ~]$ ssh michaelm@112.10.223.211
Verification code:
Password:
Last login: Thu Mar  1 08:30:24 2018

[michaelm@europium ~]$
[michaelm@europium ~]$ sudo su -
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
[sudo] password for michaelm:
Last login: Thu Mar  1 08:17:22 UTC 2018 on pts/1

[root@europium ~]# ldapwhoami -H "ldaps://ldap.jumpcloud.com" -D "uid=michaelm,ou=Users,o=3367e67801a2368b19d42664,dc=jumpcloud,dc=com" -x -W
Enter LDAP Password:
dn:uid=michaelm,ou=Users,o=3367e67801a2368b19d42664,dc=jumpcloud,dc=com
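
Beyond the whoami check, a plain ldapsearch against the same bind DN shows what the directory actually returns (same base DN as above; the attribute list is just an example):

[root@europium ~]# ldapsearch -H "ldaps://ldap.jumpcloud.com" -D "uid=michaelm,ou=Users,o=3367e67801a2368b19d42664,dc=jumpcloud,dc=com" -x -W -b "ou=Users,o=3367e67801a2368b19d42664,dc=jumpcloud,dc=com" "(uid=michaelm)" dn uid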