Whether you are new to AIX or have been working with it for decades, the Logical Volume Manager (LVM), introduced in version 3.0 (1989), remains one of the most powerful features of the operating system. Most storage setup and manipulation can be handled with the high-level commands with which we are all familiar (mkvg, lsvg, mklv, lspv, chdev). Occasionally, however, you will run into LVM oddities that require more advanced AIX administrative skills. What follows describes one such situation I experienced recently, involving a mismatch between a PVID and the VGDA. In 22 years of working with the product, it was the first time I had to resort to such drastic manipulation of AIX storage to fix a problem.
What is the PVID?
PVID is an acronym that expands to Physical Volume IDentifier, a unique number generated by the operating system, typically at the time a volume is added to a volume group. The number itself is a 16-character hexadecimal value consisting of two groups of eight characters. The first eight match the first eight characters of the machine ID (uname -m) of the system on which the PVID was first generated. The second eight are derived from a time-based value, and the generation method is such that IBM can all but guarantee you will never see a duplicate. We nearly always use the AIX symbolic name (hdisk#) when administering AIX systems. However, it is useful to know that for most, if not all, of the commands used to manipulate the LVM, the PVID may be substituted for the symbolic hdisk designation.
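As a quick illustration of that interchangeability, the sketch below shows lspv accepting a PVID in place of the hdisk name. The hdisk numbers, PVID value, and output layout are invented for illustration; real output will differ:

```shell
# List physical volumes; the second field is the PVID (or "none")
lspv
# hdisk0  00c39b8d1a2b3c4d  rootvg  active
# hdisk1  none              None

# These two queries are equivalent -- the PVID substitutes
# for the symbolic hdisk name:
lspv hdisk0
lspv 00c39b8d1a2b3c4d
```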
Where is PVID stored?
I’m sure this is not a comprehensive list, but for the purposes of this discussion, it is complete enough. The PVID is not only recorded in our beloved ODM (Object Data Manager), but is also part of the VGDA (Volume Group Descriptor Area), which is written to each physical volume. The most common way to display the PVID for a volume is via the lspv command. If a PVID has been assigned, it is displayed in the second field of the output; if one has not been assigned, the word none appears there instead.
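You can inspect both locations side by side. A hedged sketch, with an illustrative hdisk name (the CuAt object class holds customized device attributes, including the pvid attribute):

```shell
# PVID as recorded in the ODM (CuAt object class)
odmget -q "name=hdisk3 and attribute=pvid" CuAt

# PVID as reported by the LVM commands
lspv | grep hdisk3
```

In a healthy system these two views and the on-disk VGDA all agree; the story below is about what happens when they do not.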
How is a PVID generated?
The most common way to generate a PVID is to simply add the volume to a volume group. Alternatively, the chdev command with the pv attribute set to yes (chdev -l hdisk# -a pv=yes) may be used to assign a PVID to an hdisk without adding it to a VG. Conversely, the chdev command may be used to clear an offending PVID if necessary (chdev -l hdisk# -a pv=clear). This is useful when you are cloning volume groups from one system to another, and it is required when cloning volumes and using them on the source system via the recreatevg process.
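A minimal sketch of those operations (hdisk numbers and the VG name are invented):

```shell
# Assign a PVID without adding the disk to a volume group
chdev -l hdisk5 -a pv=yes

# Clear an offending PVID
chdev -l hdisk5 -a pv=clear

# Cloned-VG workflow on the source system: clear the copied
# PVID, then rebuild the VG; recreatevg generates new PVIDs
# and renames the logical volumes to avoid collisions
chdev -l hdisk6 -a pv=clear
recreatevg -y clonevg hdisk6
```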
As I mentioned above, most of the time, manipulating physical volumes (PVs) with the ch*, mk*, and rm* commands is enough to get you out of trouble. Occasionally, it will be necessary to delve into the ODM (odmget, odmadd, odmdelete) to straighten out LVM issues. However, most of the “normal” techniques fly out the window when the problem is with rootvg! Such was the case with a system I was asked to fix recently.
The client was experiencing problems booting their systems from SAN-served volumes that were managed via a third-party storage vendor’s multipath driver. Because they were not using PowerVM to virtualize their LPAR, we resorted to installing physical SAS drives, moving rootvg off the SAN volumes, and mirroring it across the two new drives. This, of course, was an easy task to accomplish because of the power of LVM. I cannot explain what happened, but when we imported the SAN volumes that contained the non-rootvg volume groups, somehow the rootvg PVIDs were clobbered, and they conflicted with two volumes from an application VG. In short, the PVID written to the drive and the ODM did not match the entries in the VGDA. This problem manifested itself in the form of total loss of the bootlist references, and rootvg disappeared from all the normal LVM command output!
To remove the conflict, the application VG was exported, the PVIDs were cleared from those volumes, and the application VG was imported via the recreatevg command, which generated new PVIDs for the PVs in that volume group. Unfortunately, that did not resolve the problem, again because of the VGDA. The two hdisks that physically contained rootvg (hdisk3 and hdisk4) could be referenced via the bogus PVIDs listed in the ODM, so I used those to unmirror rootvg. The goal was to free up one drive (hdisk4), clear its PVID, add it back to rootvg, and migrate off the bogus PV. The extendvg command did not work, even with the force flag, so I removed the hdisk from the device list. Because of the mismatches, reconfiguration of the devices via cfgmgr would not complete. To make matters worse, the block and character device files within /dev no longer existed for either hdisk. Fortunately, this client had Tivoli Storage Manager and had been backing up the system regularly. As such, I simply restored the block and character device files to their proper place in /dev. Had we not had that option, we could have restored those files from the mksysb backup we had generated before we began our work.
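The remediation steps described above look roughly like the following. This is a sketch only; the VG name, hdisk numbers, and filesystem are stand-ins for the client's actual configuration:

```shell
# 1. Remove the conflicting application VG and regenerate its PVIDs
umount /appfs                       # unmount its filesystems first
varyoffvg appvg
exportvg appvg
chdev -l hdisk10 -a pv=clear
chdev -l hdisk11 -a pv=clear
recreatevg -y appvg hdisk10 hdisk11

# 2. Break the rootvg mirror to free up hdisk4
unmirrorvg rootvg hdisk4
reducevg rootvg hdisk4

# 3. Clear its PVID and try to add it back (this is the step
#    that failed for us, even with the force flag)
chdev -l hdisk4 -a pv=clear
extendvg -f rootvg hdisk4

# 4. As a last resort, remove the device definition entirely
rmdev -dl hdisk4
```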
I was still struggling with the PVID/VGDA mismatch, and it was clear there were only two ways around the problem: (1) find a way to force-write a custom PVID to the physical drive, or (2) fall back to our mksysb and hope for the best. There is no command that allows one to write a custom PVID, but I found a document on the IBM developer website that described a method using dd to force it. Since the only other remediation we had was a rootvg restore, we decided to try this very risky procedure. Again, you should exhaust all other options before resorting to this method.
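The IBM document describes using dd to write the PVID bytes directly into the reserved area at the start of the disk. I will not reproduce the live procedure against /dev/rhdisk# here; instead, the sketch below rehearses the same dd mechanics against a scratch file. The PVID value and the byte offset are invented for illustration; confirm the real offset from the IBM documentation before ever touching an actual device:

```shell
# DANGER: on a real system this technique writes raw bytes to the
# disk and can destroy the volume. Rehearse on a scratch file first.
DISK=/tmp/fake_hdisk4      # stand-in for /dev/rhdisk4
PVID=00c39b8d1a2b3c4d      # 16 hex digits = 8 raw bytes (invented)
OFFSET=128                 # hypothetical byte offset of the PVID field

# Build an empty stand-in for the first disk block
dd if=/dev/zero of=$DISK bs=4096 count=1 2>/dev/null

# Convert the hex string to raw bytes and write it in place;
# conv=notrunc leaves the rest of the "disk" untouched
printf "$(echo $PVID | sed 's/../\\x&/g')" | \
    dd of=$DISK bs=1 seek=$OFFSET conv=notrunc 2>/dev/null

# Read the bytes back to verify what was written
dd if=$DISK bs=1 skip=$OFFSET count=8 2>/dev/null | od -An -tx1
```

The read-back step is the important habit: verify every raw write before moving on, because there is no undo at this layer.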
IBM web page reference:
Happily, this procedure worked. Amazingly, this system continued to run during the entire episode. We did take down the Universe application while we worked, but in retrospect, we did not have to. Once we had the PVID in sync with the VGDA and the block and character device files back in place, all commands, including cfgmgr, ran without error. We mirrored rootvg, set the bootlist, issued a savebase command, and rebooted the system successfully.
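Once the PVID matched the VGDA again, the closing steps were standard rootvg housekeeping. A sketch, with invented hdisk names (bosboot is included here as standard practice whenever rootvg mirrors or the bootlist change):

```shell
# Re-mirror rootvg across both SAS drives; -S syncs in the background
extendvg rootvg hdisk4
mirrorvg -S rootvg hdisk4

# Rebuild the boot image on both drives and set the boot order
bosboot -ad /dev/hdisk3
bosboot -ad /dev/hdisk4
bootlist -m normal hdisk3 hdisk4

# Save customized device information to the boot device
savebase -v
```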
1. First and foremost, as weird and stressful as this was, we knew we could always fall back to our mksysb, so we never reached a panic stage. Maintaining a good, up-to-date NIM server/LPAR and automating your mksysb backups is crucial for any AIX environment.
2. A good file level backup solution, such as TSM (now Spectrum Protect), will save your bacon every time.
3. AIX is even more resilient than I thought. What I didn’t mention above is that there was a lag between the initial rootvg migration and the discovery of the problem. I was out of town when we discovered the problem, and it was a week before we could address it. Neither AIX nor the application experienced any outages during the lag or the troubleshooting/fix phase! Truly amazing.
4. Any time you mess with the ODM, save copies of the original state. I deleted some ODM entries during my troubleshooting efforts and several times I needed to restore the original state of the ODM to “try something else.”
5. Lastly, this was, obviously, not a primary production system. Had we faced this problem on the client’s primary production system, I would not have wasted time messing with the PVIDs as I did. Often, technologists want to figure out the problem and solve it, no matter how long it takes. I call this “a desire to slay the dragon,” and most of us propeller heads refuse to let a dragon beat us. When it comes to primary production systems, it is often easier and faster to walk around the dragon than face him directly. Restoring the mksysb would have been the faster resolution to our dilemma. Since we had the flexibility and time, the client wanted me to try and fix it without resorting to the restore. Always keep an eye out for the path around the dragon!
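On the fourth point, saving ODM state before experimenting is cheap insurance. A hedged sketch of the save/restore cycle; the object class and query are illustrative, and savebase keeps the boot image's copy of the ODM in step:

```shell
# Save the stanzas you are about to touch
odmget -q "name=hdisk4" CuAt > /tmp/CuAt.hdisk4.before

# ... experiment: odmdelete, chdev, cfgmgr, etc. ...

# If the experiment goes sideways, put the originals back
odmdelete -o CuAt -q "name=hdisk4"
odmadd /tmp/CuAt.hdisk4.before
savebase
```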
If you have questions or comments regarding the above situation, please contact me directly at email@example.com.