NASLite Network Attached Storage

www.serverelements.com
Task-specific simplicity with low hardware requirements.
It is currently Mon Apr 29, 2024 12:41 pm

All times are UTC - 5 hours [ DST ]




PostPosted: Mon Jan 19, 2009 4:18 am 
Joined: Thu Aug 03, 2006 5:53 am
Posts: 8
The system has been running fine for years: 4 x 200GB disks in a RAID 5 array. Recently it got near capacity (90%), so I started culling lots of old files I didn't want. Then I got a warning from iTunes that my library file (located on the root share of the array) was read-only. Logging into the web control panel on the server, everything looked OK, so I telnetted in.

This is where it started going wrong... the "check" option immediately came up with loads of errors (blocks, inodes, etc.). The "check & repair" option wasn't available, though.

Retrieved the server from under the stairs where it lives, plugged a monitor in for local access, and noticed an error coming up on boot saying a logical drive had failed. Megaconf showed both disks on Channel 0 as "fail" and came up with "Error" when I tried to rebuild.

Doing a search on here, it appeared the best option was to force-mount the disks and run check & repair, which I did overnight. I saw thousands of errors, fixes and "clears" being done.

Came to it this morning and the system is up, disk mounted. Most of the folder structure looks good, but when I start trying to access data (mp3s, films, pictures) I'm getting many, many errors... there seems to be HUGE corruption and I'm now very worried. Am I screwed now because I ran the repair? I haven't written anything to the disk, but I assume the repair has destructively wiped out huge chunks of data from wherever it didn't think they should be: a corrupt file allocation table or something caused data integrity issues, and the repair has now killed large areas of my archives? What else could/should I have done?
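(For reference, a read-only filesystem check would have reported the same errors without committing any fixes to disk. A sketch only, not something to run against a mounted array; the device name and filesystem type are assumptions, since NASLite doesn't expose them directly:)

```shell
# ASSUMPTION: the array appears as /dev/md0 with an ext2/ext3 filesystem.
DEV=/dev/md0

# Read-only pass: list the errors but make no changes to the disk.
fsck -n "$DEV"

# e2fsck offers the same dry-run flag for ext2/ext3:
e2fsck -n "$DEV"

# Only once the disks are imaged and the damage understood would you run
# the destructive repair (roughly what "check & repair" does):
#   e2fsck -y "$DEV"
```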

Is there any way for me to find out how widespread the damage is, or to recover from this in any way (by removing the disks, mounting them as slaves and running R-Linux)? Or am I doomed because the destructive stuff has already been done by check & repair? Once my disks failed to mount, was the damage pretty much already done?

As a side note, before I did check & repair, I mounted the failed disks in Windows and ran R-Linux, which took an hour or so to check each disk and came up with a list of "file types" but with numbers instead of filenames...
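(If you do pull the disks and attach them to another machine, it is generally safer to image them first and point R-Linux or any other recovery tool at the copies, so the originals see no further writes or stress. A sketch using GNU ddrescue; the device name and destination paths are assumptions:)

```shell
# ASSUMPTION: the suspect disk shows up as /dev/sdb and /mnt/backup has
# at least 200GB free. The log file lets ddrescue resume and retry.

# First pass: grab everything that reads cleanly, skip the slow/bad areas.
ddrescue -f -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log

# Second pass: go back and retry the bad areas recorded in the log.
ddrescue -f -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.log
```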


PostPosted: Mon Jan 19, 2009 10:07 am 
Joined: Tue Aug 10, 2004 1:50 pm
Posts: 604
Location: Texas, USA
When that happens, the first thing you need to do is look at the syslog for anything that shouldn't be there. Since you are running RAID 5, you don't need to run the bad-blocks check from check & repair, because of the redundancy. I would have checked the RAID using the RAID BIOS first. If that checks out, run memtest to make sure the PC is fine, and then run check & repair. It is also possible that you have connector, heat or power issues.
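(The order above — syslog first, then the RAID BIOS, then memtest, and only then check & repair — might look roughly like this from a shell. A sketch; log locations and device names are assumptions, and smartctl may not be present on a NASLite box:)

```shell
# 1. Kernel and syslog messages around the failure: look for I/O errors,
#    resets and timeouts on the channel that dropped out.
dmesg | grep -iE 'error|fail|timeout'
tail -n 200 /var/log/messages

# 2. Drive health, if smartmontools is available (reallocated or pending
#    sectors on either Channel 0 disk would explain the double failure):
smartctl -a /dev/sda | grep -iE 'reallocated|pending|offline'

# 3. Memory: boot a memtest86+ CD/USB and let it run several full passes
#    before trusting the results of any filesystem repair.
```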

The most important thing to remember here is that the RAID did not become read-only for no reason. Before fixing anything, you need to find what broke. Numbers for filenames is a bad sign. Don't mean to scare you, but brace yourself.

