Open LVM and recover deleted LVM (Linux)
#1
Greetings, friends. I'd like to know what you use to open LVM volumes, and also, in cases where the LVM has been deleted, what you use to recover it, since here in Brazil 90% of the servers run Linux.
Thank you very much.
#2
R-Studio is able to work with Linux systems.
Have you tried it?
#3
(02-13-2015, 12:38 AM)pclab Wrote: R-Studio is able to work with Linux systems.
Have you tried it?

Yes, friend, I've already tried R-Studio, but it still fails. I'm currently using UFS Explorer and Linux Reader; I'll see if I can succeed with these two programs.

What I'm trying to recover is a server in RAID 1.
#4
CnW Recovery may work.
#5
(02-13-2015, 02:40 AM)LarrySabo Wrote: CnW Recovery may work.

Thank you, friend. I'll try the tool you're recommending.
#6
(02-13-2015, 02:16 AM)Everton Parente Wrote: [...] What I'm trying to recover is a server in RAID 1.
Hi,

Did both hard drives (the mirror pair) have problems?
#7
(02-13-2015, 03:59 AM)perenne Wrote: [...] Did both hard drives (the mirror pair) have problems?


Yes, both drives have a logical defect. The defect is in the XenServer (Linux) structure, and I'm not able to access the LVM.

What do you recommend, friend?
#8
(02-13-2015, 08:59 AM)Everton Parente Wrote: [...] What do you recommend, friend?

Hi Everton,

ReclaiMe File Recovery.
#9
Friends, unfortunately we could not succeed with the recovery. I have good friends here in Brazil who work with data recovery, but none of them has succeeded with XenServer either. One more observation: when I make the image in DFL, the drive appears in WinHex perfectly written, apparently containing all the information intact. Please suggest how we can proceed with the recovery of the LVM (Linux).
#11
Here is the process I use in these cases; it has always been the right approach.

Logical Volume Management (LVM) provides a high-level, flexible view of a server's disk storage. Though robust, problems can occur. The purpose of this document is to review the recovery process when a disk is missing or damaged, and then apply that process to plausible examples. When a disk is accidentally removed or damaged in some way that adversely affects the logical volume, the general recovery process is:
1. Replace the failed or missing disk
2. Restore the missing disk's UUID
3. Restore the LVM metadata
4. Repair the file system on the LVM device

The recovery process will be demonstrated in three specific cases:
1. A disk belonging to a logical volume group is removed from the server
2. The LVM metadata is damaged or corrupted
3. One disk in a multi-disk volume group has been permanently removed

This article discusses how to restore the LVM metadata. This is a risky proposition: if you restore invalid information, you can lose all the data on the LVM device. An important part of LVM recovery is having backups of the metadata to begin with, and knowing how it is supposed to look when everything is running smoothly. LVM keeps backup and archive copies of its metadata in /etc/lvm/backup and /etc/lvm/archive. Back up these directories regularly, and be familiar with their contents. You should also manually back up the LVM metadata with vgcfgbackup before starting any maintenance projects on your LVM volumes, as sketched below.
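As a minimal sketch (the confirmation message may vary slightly by LVM version), a manual metadata backup before maintenance looks like this:

ls-lvm:~ # vgcfgbackup sales
Volume group "sales" successfully backed up.

ls-lvm:~ # ls /etc/lvm/backup /etc/lvm/archive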

If you are planning to remove a disk that belongs to a volume group from the server, refer to the LVM HOWTO before doing so.

Server Configuration

In all three examples, a server running SUSE Linux Enterprise Server 10 with Service Pack 1 (SLES10 SP1) and LVM version 2 will be used. The examples use a volume group called "sales" with a linear logical volume called "reports". The logical volume and its mount point are shown below. Substitute your own mount points and volume names as needed to match your specific environment.
ls-lvm:~ # cat /proc/partitions
major minor #blocks name

8 0 4194304 sda
8 1 514048 sda1
8 2 1052257 sda2
8 3 1 sda3
8 5 248976 sda5
8 16 524288 sdb
8 32 524288 sdc
8 48 524288 sdd

ls-lvm:~ # pvcreate /dev/sda5 /dev/sd[b-d]
Physical volume "/dev/sda5" successfully created
Physical volume "/dev/sdb" successfully created
Physical volume "/dev/sdc" successfully created
Physical volume "/dev/sdd" successfully created

ls-lvm:~ # vgcreate sales /dev/sda5 /dev/sd[b-d]
Volume group "sales" successfully created

ls-lvm:~ # lvcreate -n reports -L +1G sales
Logical volume "reports" created

ls-lvm:~ # pvscan
PV /dev/sda5 VG sales lvm2 [240.00 MB / 240.00 MB free]
PV /dev/sdb VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdc VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdd VG sales lvm2 [508.00 MB / 500.00 MB free]
Total: 4 [1.72 GB] / in use: 4 [1.72 GB] / in no VG: 0 [0 ]

ls-lvm:~ # vgs
VG #PV #LV #SN Attr VSize VFree
sales 4 1 0 wz--n- 1.72G 740.00M

ls-lvm:~ # lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
reports sales -wi-ao 1.00G

ls-lvm:~ # mount | grep sales
/dev/mapper/sales-reports on /sales/reports type ext3 (rw)

ls-lvm:~ # df -h /sales/reports
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sales-reports
1008M 33M 925M 4% /sales/reports


Disk Belonging to a Volume Group Removed

Removing a disk that belongs to a logical volume group from the server may sound a bit strange, but with Storage Area Networks (SANs) or fast-paced schedules, it happens.

Symptom:

The first thing you may notice when the server boots is messages like these:
"Couldn't find all physical volumes for volume group sales."
"Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'."
'Volume group "sales" not found'






1.Type root's password.
2.Edit the /etc/fstab file.
3.Comment out the line with /dev/sales/report
4.Reboot
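For illustration, the commented-out /etc/fstab entry might look like this (the mount options shown are assumptions, not taken from the example server):

# disabled until the sales volume group is recovered:
#/dev/sales/reports   /sales/reports   ext3   defaults   1 2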

The LVM symptom is a missing sales volume group. Typing cat /proc/partitions confirms that the server is missing one of its disks.
ls-lvm:~ # cat /proc/partitions
major minor #blocks name

8 0 4194304 sda
8 1 514048 sda1
8 2 1052257 sda2
8 3 1 sda3
8 5 248976 sda5
8 16 524288 sdb
8 32 524288 sdc

ls-lvm:~ # pvscan
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
PV /dev/sda5 VG sales lvm2 [240.00 MB / 240.00 MB free]
PV /dev/sdb VG sales lvm2 [508.00 MB / 0 free]
PV unknown device VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdc VG sales lvm2 [508.00 MB / 500.00 MB free]
Total: 4 [1.72 GB] / in use: 4 [1.72 GB] / in no VG: 0 [0 ]


Solution:
1. Fortunately, the metadata and file system on the disk that was /dev/sdc are intact, so the recovery is simply to put the disk back.
2. Reboot the server. The /etc/init.d/boot.lvm start script will scan and activate the volume group at boot time.
3. Don't forget to uncomment the /dev/sales/reports device in the /etc/fstab file.
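If the disk can be reattached without a reboot (hot-pluggable storage, or a SAN LUN mapped back in), a manual rescan and activation is a reasonable sketch of the same recovery:

ls-lvm:~ # pvscan
ls-lvm:~ # vgchange -ay sales
1 logical volume(s) in volume group "sales" now active

ls-lvm:~ # mount /dev/sales/reports /sales/reports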



If this procedure does not work, you may have corrupted LVM metadata.

Corrupted LVM Meta Data

The LVM metadata does not get corrupted very often, but when it does, the file system on the LVM logical volume should also be considered unstable. The goal is to recover the LVM volume, and then check file system integrity.
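Part of knowing how the metadata is supposed to look is being familiar with the text backup itself. An abbreviated sketch of the top of the sales backup file (the ID and values here are illustrative, not from the example server):

ls-lvm:~ # head /etc/lvm/backup/sales
# Generated by LVM2: ...
contents = "Text Format Volume Group"
version = 1

sales {
id = "..."
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 8192
physical_volumes {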

Symptom 1:

Attempting to activate the volume group gives the following:
ls-lvm:~ # vgchange -ay sales
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
Couldn't read volume group metadata.
Volume group sales metadata is inconsistent
Volume group for uuid not found: m4Cg2vkBVSGe1qSMNDf63v3fDHqN4uEkmWoTq5TpHpRQwmnAGD18r44OshLdHj05
0 logical volume(s) in volume group "sales" now active


This symptom is the result of a minor change in the metadata; in fact, only three bytes were overwritten. Since only a portion of the metadata was damaged, LVM can compare its internal checksum against the metadata on the device and know it is wrong. There is enough metadata for LVM to know that the "sales" volume group and its devices exist, but the metadata is unreadable.
ls-lvm:~ # pvscan
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
/dev/sdc: Checksum error
PV /dev/sda5 VG sales lvm2 [240.00 MB / 240.00 MB free]
PV /dev/sdb VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdc VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdd VG sales lvm2 [508.00 MB / 500.00 MB free]
Total: 4 [1.72 GB] / in use: 4 [1.72 GB] / in no VG: 0 [0 ]


Notice that pvscan shows all devices present and associated with the sales volume group. It is not a device UUID that cannot be found, but the volume group UUID.
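Because LVM2 stores its metadata as plain text near the start of each physical volume, you can also peek at it directly to gauge the damage. A sketch, assuming the common default of a metadata area within the first 192 KB of the device:

ls-lvm:~ # dd if=/dev/sdc bs=1k count=192 2>/dev/null | strings | less

In healthy metadata you should see readable text beginning with the volume group name, such as sales { id = "..." seqno = ...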

Solution 1:
1. Since the disk was never removed, leave it as is.
2. There were no device UUID errors, so don't attempt to restore the UUIDs.
3. This is a good candidate for simply restoring the LVM metadata, as sketched below.
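Before restoring, it can be reassuring to confirm which saved copy of the metadata vgcfgrestore will use. A sketch (the file name, description, and timestamp are illustrative):

ls-lvm:~ # vgcfgrestore --list sales

File:	/etc/lvm/archive/sales_00002.vg
VG name:	sales
Description:	Created *before* executing 'lvcreate -n reports -L +1G sales'
Backup Time:	Fri Feb 13 2015

With a suitable backup confirmed, restore the metadata: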

ls-lvm:~ # vgcfgrestore sales
/dev/sdc: Checksum error
/dev/sdc: Checksum error
Restored volume group sales

ls-lvm:~ # vgchange -ay sales
1 logical volume(s) in volume group "sales" now active

ls-lvm:~ # pvscan
PV /dev/sda5 VG sales lvm2 [240.00 MB / 240.00 MB free]
PV /dev/sdb VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdc VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdd VG sales lvm2 [508.00 MB / 500.00 MB free]
Total: 4 [1.72 GB] / in use: 4 [1.72 GB] / in no VG: 0 [0 ]


4. Run a file system check on /dev/sales/reports:
ls-lvm:~ # e2fsck /dev/sales/reports
e2fsck 1.38 (30-Jun-2005)
/dev/sales/reports: clean, 961/131072 files, 257431/262144 blocks

ls-lvm:~ # mount /dev/sales/reports /sales/reports/

ls-lvm:~ # df -h /sales/reports/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sales-reports
1008M 990M 0 100% /sales/reports



Symptom 2:

Minor damage to the LVM metadata is easily fixed with vgcfgrestore. If the metadata is gone, or severely damaged, then LVM will consider that disk an "unknown device." If the volume group contains only one disk, then the volume group and its logical volumes will simply be gone. In this case the symptom is the same as if the disk had been accidentally removed, with the exception of the device names: since /dev/sdc was not actually removed from the server, the devices are still labeled a through d.
ls-lvm:~ # pvscan
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
PV /dev/sda5 VG sales lvm2 [240.00 MB / 240.00 MB free]
PV /dev/sdb VG sales lvm2 [508.00 MB / 0 free]
PV unknown device VG sales lvm2 [508.00 MB / 0 free]
PV /dev/sdd VG sales lvm2 [508.00 MB / 500.00 MB free]
Total: 4 [1.72 GB] / in use: 4 [1.72 GB] / in no VG: 0 [0 ]


Solution 2:
1. First, replace the disk. Most likely the disk is already there, just damaged.
2. Since the UUID on /dev/sdc is not there, a vgcfgrestore will not work:
ls-lvm:~ # vgcfgrestore sales
Couldn't find device with uuid '56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu'.
Couldn't find all physical volumes for volume group sales.
Restore failed.


3. Comparing the output of cat /proc/partitions and pvscan shows the missing device is /dev/sdc, and pvscan shows which UUID it needs for that device. So, copy and paste the UUID that pvscan shows for /dev/sdc (on newer LVM2 releases, see the note after this list):
ls-lvm:~ # pvcreate --uuid 56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu /dev/sdc
Physical volume "/dev/sdc" successfully created


4. Restore the LVM metadata:
ls-lvm:~ # vgcfgrestore sales
Restored volume group sales

ls-lvm:~ # vgscan
Reading all physical volumes. This may take a while...
Found volume group "sales" using metadata type lvm2

ls-lvm:~ # vgchange -ay sales
1 logical volume(s) in volume group "sales" now active


5. Run a file system check on /dev/sales/reports:
ls-lvm:~ # e2fsck /dev/sales/reports
e2fsck 1.38 (30-Jun-2005)
/dev/sales/reports: clean, 961/131072 files, 257431/262144 blocks

ls-lvm:~ # mount /dev/sales/reports /sales/reports/

ls-lvm:~ # df -h /sales/reports
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sales-reports
1008M 990M 0 100% /sales/reports
If you are automatically mounting /dev/sales/reports, then the server will fail to boot and prompt you to log in as root to fix the problem.
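A note on step 3 for newer LVM2 releases: pvcreate refuses --uuid unless it is also told which metadata backup the UUID comes from. A hedged sketch of the equivalent command:

ls-lvm:~ # pvcreate --uuid 56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu --restorefile /etc/lvm/backup/sales /dev/sdc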

Disk Permanently Removed

This is the most severe case. Obviously if the disk is gone and unrecoverable, the data on that disk is likewise unrecoverable. This is a great time to feel good knowing you have a solid backup to rely on. However, if the good feelings are gone, and there is no backup, how do you recover as much data as possible from the remaining disks in the volume group? No attempt will be made to address the data on the unrecoverable disk; this topic will be left to the data recovery experts.

Symptom:

The symptom will be the same as Symptom 2 in the Corrupted LVM Metadata section above: errors about an "unknown device" and a missing device UUID.

Solution:
1. Add a replacement disk to the server. Make sure the disk is empty.
2. Create the LVM metadata on the new disk using the old disk's UUID that pvscan displays:
ls-lvm:~ # pvcreate --uuid 56ogEk-OzLS-cKBc-z9vJ-kP65-DUBI-hwZPSu /dev/sdc
Physical volume "/dev/sdc" successfully created


3. Restore the backup copy of the LVM metadata for the sales volume group:
ls-lvm:~ # vgcfgrestore sales
Restored volume group sales

ls-lvm:~ # vgscan
Reading all physical volumes. This may take a while...
Found volume group "sales" using metadata type lvm2

ls-lvm:~ # vgchange -ay sales
1 logical volume(s) in volume group "sales" now active


4. Run a file system check to rebuild the file system:
ls-lvm:~ # e2fsck -y /dev/sales/reports
e2fsck 1.38 (30-Jun-2005)
--snip--
Free inodes count wrong for group #5 (16258, counted=16384).
Fix? yes

Free inodes count wrong (130111, counted=130237).
Fix? yes

/dev/sales/reports: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sales/reports: 835/131072 files (5.7% non-contiguous), 137213/262144 blocks


5. Mount the file system and recover as much data as possible; a cautious sketch follows this list.
6. NOTE: If the missing disk contained the beginning of the file system, then the file system's superblock will be missing, and you will need to rebuild it or use an alternate superblock. Restoring a file system superblock is outside the scope of this article; please refer to your file system's documentation.
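Two hedged sketches for steps 5 and 6. First, mounting read-only limits further damage while you copy data off (the rsync destination is an assumption):

ls-lvm:~ # mount -o ro /dev/sales/reports /sales/reports
ls-lvm:~ # rsync -a /sales/reports/ /backup/reports.rescue/

Second, for ext2/ext3, e2fsck can be pointed at a backup superblock. mke2fs -n only prints what it would do, including where the backup superblocks would live, without writing anything; the 32768 location assumes a 4k-block file system:

ls-lvm:~ # mke2fs -n /dev/sales/reports
ls-lvm:~ # e2fsck -b 32768 /dev/sales/reports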

Conclusion

LVM by default keeps backup copies of its metadata for all LVM devices in /etc/lvm/backup and /etc/lvm/archive. If a disk is removed or the metadata gets damaged in some way, it can easily be restored, provided you have backups of the metadata. This is why it is highly recommended never to turn off LVM's auto-backup feature. Even if a disk is permanently removed from the volume group, the volume group can be reconstructed, and often the remaining data on the file system recovered.
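The auto-backup behavior is controlled in /etc/lvm/lvm.conf; a sketch of the relevant section (these option names are standard LVM2 settings, and the values shown are the usual defaults):

backup {
    backup = 1                      # write current metadata to backup_dir on every change
    backup_dir = "/etc/lvm/backup"
    archive = 1                     # keep prior versions in archive_dir
    archive_dir = "/etc/lvm/archive"
    retain_min = 10                 # keep at least this many archive files
    retain_days = 30                # and keep them at least this many days
}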
PCDIAG DATA RECOVERY BRASIL
OFFICIAL PARTNER DOLPHIN DATA LAB
CITY: FORTALEZA
SKYPE: pcdiag
www.pcdiag.org