Enterprise Data Services Community Blog
Having fun with /Storage migration to new array
Abesama
16 years ago · Level 6
Recently, all 3 of my PureDisk environments were giving me storage problems, for internal and external reasons.
The first one was relatively simple: a 6.2.2 all-in-one node that had run out of disk space for /Storage - 95% full.
The storage array was not on a SAN but a dedicated one.
The challenging part was working out how to order the physical disks and finding someone to put the disks in the array.
It was a 12-slot array, with 6 slots occupied by 300GB disks.
Once the 6 new disks were in, the engineer rebooted the PureDisk server from an HP CD to connect to the array and create a single 1.5TB RAID5 virtual LUN, then rebooted the server again so PDOS could start.
When I started the YaST2 interface, it automatically picked up the new LUN, and I simply added it to the LVM volume group - this instantly grew the xfs filesystem as well, and suddenly /Storage was only 49% full. Happy.
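For anyone who prefers the command line over YaST2, the same expansion can be sketched roughly like this - the device and volume names below are hypothetical placeholders, not from my actual environment:

```shell
# Hypothetical names: /dev/sdb is the new LUN, vg_storage/lv_storage the existing LVM setup.
pvcreate /dev/sdb                                   # initialize the new LUN as an LVM physical volume
vgextend vg_storage /dev/sdb                        # add it to the existing volume group
lvextend -l +100%FREE /dev/vg_storage/lv_storage    # grow the logical volume into the new free space
xfs_growfs /Storage                                 # grow the mounted xfs filesystem online
```

Note that xfs can be grown while mounted, which is why no downtime was needed for this step.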
The second one was not that easy: a 3-node 6.2.1 SPA-CTR-CR, and I had to migrate the storage to a new array.
Unfortunately I could not handle this task all by myself, so I got my UNIX support team to do it - they did an excellent job using pvmove to relocate some of the LUNs to the new array, while other LUNs were left untouched because they were already on the new array.
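pvmove works by migrating LVM extents between physical volumes while the filesystem stays online, which is what made this migration possible without an outage. A rough sketch of the kind of commands involved (device names here are hypothetical, not the ones my team actually used):

```shell
# Hypothetical devices: /dev/sdb is a LUN on the old array, /dev/sdc its replacement on the new one.
pvcreate /dev/sdc                 # initialize the new array's LUN as a physical volume
vgextend vg_storage /dev/sdc      # bring it into the volume group
pvmove /dev/sdb /dev/sdc          # move all extents off the old LUN onto the new one, online
vgreduce vg_storage /dev/sdb      # remove the emptied old LUN from the volume group
```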
The last one is an SPA-CR-CR on 6.5.1; both CRs ran out of space, with /Storage/data 90% full.
VxVM was used for this one, so I placed a call to Symantec Tech Support.
When the customer support rep asked which product I was logging a call for, I told him it was about VxVM :-)
Then he asked what version - uh ... no idea, probably not the latest. (It did not occur to me to simply run rpm and grep for VM; I was stuck on the idea of cat'ing a file somewhere under /etc/vx.)
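For the record, the rpm query I was fumbling for is a one-liner. VRTSvxvm is the usual VxVM package name, though naming can vary by release:

```shell
rpm -qa | grep -i vrts    # list all installed VERITAS packages with their versions
rpm -q VRTSvxvm           # or query the VxVM package directly, if you know its name
```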
When my Emergency-Down call was picked up by Symantec Tech Support, I let the engineer WebEx in, and he had me run these commands to add the LUNs to the disk group and resize the volume:
1. yast2 (to probe/scan the newly attached LUNs)
2. vxdisk -o alldgs list | grep error (shows only the disks not yet initialized by VxVM)
3. vxdisksetup -i sda format=sliced (initialize them)
4. vxdg -g DGNAME free (check how much free space the DG has)
5. vxdg -g DGNAME adddisk sda=sda (add the initialized disks to the DG)
6. vxresize -F vxfs -g DGNAME volNAME +2000g (grow the volume, and its vxfs filesystem, by 2000GB)
Yes, I learned all of this long ago in VERITAS Foundation Suite training, but VxVM/VxFS have not been my area since - everything escaped from my brain just weeks after the training session.
Now the first and third PD environments are running with no problems.
For the second one I still have to expand the filesystem, and I'll need to struggle a bit with it, but it shouldn't be too tough.
The next big challenge will be the upgrade to 6.5... and it's coming closer every day ...!!!
:-)
Abe