
Help! Newly partitioned hard drive gone :(



I partitioned / formatted my new 1TB Caviar Black SATA hard drive in OSX, which I intended to use mostly for storing data. I did some file transfers, rebooted into OSX, and the system says it does not recognize this disk and asks me to format it?!

 

In Disk Utility the drive shows as a 2TB drive, and all options are greyed out. No sign of any partition.

 

Did I just lose 700+ GB of stuff? OMG, tell me there's something I can do.


RMA?


As cyberderf suggests, it's possible that your drive is defective; however, it's also possible that something else is going on and that recovery will be possible. More information is needed to come to a conclusion. Could you post the output of "sudo diskutil list" in a Terminal? That will give more precise information than the GUI Disk Utility provides. You might also try my GPT fdisk; post the output of "sudo gdisk -l /dev/disk1" (changing /dev/disk1 to whatever your disk device is, if necessary).
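In other words, in a Terminal, something like this (substituting your disk's actual device name for /dev/disk1 if it differs):

sudo diskutil list
sudo gdisk -l /dev/disk1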


/dev/disk0
   #:                   TYPE NAME            SIZE        IDENTIFIER
   0: FDisk_partition_scheme                 *233.8 Gi   disk0
   1:              Apple_HFS MAC_OS           24.4 Gi    disk0s1
   2:             DOS_FAT_32 STUFF            191.9 Gi   disk0s5

/dev/disk1
   #:                   TYPE NAME            SIZE        IDENTIFIER
   0: FDisk_partition_scheme                 *465.8 Gi   disk1
   1:           Windows_NTFS                  24.4 Gi    disk1s1
   2:           Windows_NTFS BIGFOOT          441.3 Gi   disk1s5

/dev/disk2
   #:                   TYPE NAME            SIZE        IDENTIFIER
   0:                                         *0.0 B     disk2

/dev/disk4
   #:                   TYPE NAME            SIZE        IDENTIFIER
   0: FDisk_partition_scheme                 *152.7 Gi   disk4
   1:              Apple_HFS Backup 160 GO    152.7 Gi   disk4s1


From the highlighting and weirdness in the output, I'll assume that the problem disk is /dev/disk2. Unfortunately, this looks bad. It seems to be claiming that the disk is 0 bytes in size, but I don't know how diskutil determines a disk's size, so I don't know if that means that the drive's hardware is bad or if the program might report a 0-size disk because of a corrupt partition table or for some other reason.

 

Since I wrote GPT fdisk, I'm more familiar with what it reports. It's also got a verify option ('v' on any menu) that may provide some useful diagnostics and several advanced recovery features. It's conceivable it will be able to recover the disk, but I don't want to make any promises about that. I suggest you download it, type "sudo gdisk /dev/disk2", type "p" at its main menu, type "v" at its main menu, type "q" to quit, and post the output. That'll give me a much better idea of what might be causing the problem.
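In a Terminal, the session would look roughly like this (p, v, and q are typed at gdisk's own prompt):

sudo gdisk /dev/disk2
Command (? for help): p
Command (? for help): v
Command (? for help): q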

 

Another possibly useful diagnostic step would be to see what Windows' disk partitioning tools make of the disk. If they report it as being 0-length, then that favors the hypothesis that the hardware is bad.

 

Also, is this an internal or an external drive? Especially if it's an external drive, try removing every unnecessary device of the same type -- for instance, if it's a USB device, unplug every USB device except your keyboard and mouse (if they're USB devices). If you're using a hub, try connecting the disk directly to the computer. If it's an internal drive, or an external drive with a cable that's not permanently attached, try a different cable.


Here's the result from GPT fdisk:

 

Disk /dev/disk2: 0 sectors, 0 bytes
Logical sector size: 512 bytes
Disk identifier (GUID): 401DC960-EA9B-4672-869F-DF927B569951
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 18446744073709551582
Partitions will be aligned on 2048-sector boundaries
Total free space is 18446744073709551549 sectors (16384.0 PiB)

 

When I ran the disk verification, it found 127 partitions too big for the disk. With my near-zero knowledge on the matter, this smells a lot like an RMA.


Yeah, the disk is definitely fubared. GPT fdisk is reporting the size as 0 bytes, and GPT fdisk determines the disk's size by a low-level system call, so that can't possibly be good. A few last things to try before you return it:

 

  • Try a new data cable, if possible. (Buy a new one, if necessary.)
  • Shut down and reboot the computer, if you haven't done so already.
  • As I suggested last time, unplug all unnecessary devices of the same type.
  • If the disk is an external model, it's possible that the USB or FireWire interface is the problem. If so, you might be able to get your data back by opening the enclosure, removing the disk, and attaching it via an SATA cable. (Most external USB and FireWire drives actually use SATA interfaces between the disk and the USB/FireWire circuitry.) This will almost certainly void your warranty, though, and I hear that some recent drives don't use SATA interfaces. You'll have to decide whether your files are worth the risk.


It's occurred to me that there may be a way to recover your data, even if you're forced to return the drive; however, you'll need enough disk space to hold the partition(s) you're recovering:

 

  1. Install and partition a new drive, if necessary.
  2. Type "sudo gdisk -l /dev/disk2" to obtain a listing of the partitions. Note the partition number(s) and size(s) of the partition(s) you want to recover.
  3. Type "sudo dd if=/dev/disk2p3 of=/path/to/free/space/image.img", where /dev/disk2p3 is the device file for the partition you want to recover and /path/to/free/space is the path to a directory with enough free space to hold an image of the entire partition.

 

You should now be able to mount /path/to/free/space/image.img as a disk image to access its files. Alternatively, if you create a partition that's precisely the right size, you can use its device identifier instead of /path/to/free/space/image.img in the last step to copy the data directly from the original disk to the new partition.
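If the filesystem in the image is one OS X understands, mounting it should be as simple as:

hdiutil attach /path/to/free/space/image.img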

 

Another option, which would require a replacement disk that's at least as large as the original, is to do a raw copy of the whole disk:

 

sudo dd if=/dev/disk2 of=/dev/disk3

 

This will copy everything from /dev/disk2 to /dev/disk3, on a byte-for-byte basis. It will take several hours to complete. Everything on /dev/disk3 will be lost, so do this only on a new /dev/disk3 or a /dev/disk3 that holds no data you care about.
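If dd complains that a disk is busy, or the copy crawls at the default block size, unmounting both disks first and using a larger block size should help (again assuming disk2 is the damaged source and disk3 the replacement):

sudo diskutil unmountDisk /dev/disk2
sudo diskutil unmountDisk /dev/disk3
sudo dd if=/dev/disk2 of=/dev/disk3 bs=1m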

 

I can't guarantee that either of these procedures will work; it's conceivable that access to the original disk is messed up enough that access beyond the first few sectors will be impossible. I think one or both might work, though, because you seem to be getting at least some data off the drive, despite the fact that it's reporting a clearly bogus size to the OS.


Thanks all for your help; I have returned the drive to WD for an exchange.

 

Now... Caviar Blacks are performance hard drives, which often means they run hot. This was my first hard drive failure in 15 years. I did about 10 hours of file transfers in OSX before the drive failed. Is it possible that the problem is related to my usage being intensive before the failure?

 

Should I still be confident using this product in OSX86, since WD will ship me the same drive?


Most products fail either very early or very late. Early failures are due to manufacturing defects that escaped factory quality control, and late failures are due to age (entropy catching up with things). Chances are you just got one of those early-life failures because of a random manufacturing defect. They do happen. I suppose there's an off chance that there's a design defect in this model that causes it to fail more frequently than other drives, or that causes it to fail under certain patterns of use (such as your heavy initial use or even patterns unique to HFS+).


HFS+ and heavy load as a cause of failure.

Has that kind of thing been reported by users, or is it documented anywhere?

 

Just speculation on my part, although heavy use is certainly a contributing factor to premature failure of many tools. I wouldn't expect it in a hard disk, though; and if it did happen, it would be a manufacturing or design defect -- as I stated at the start of the sentence in which I mentioned the possibility.

