It was decided to replace our aging primary NAS, consisting of three 48-drive SAS expanders of 4TB drives, with a similar system of 12TB drives while reusing some of the newer hardware: one expander and SAS card that had been added about a year ago. The decision was made to keep things as simple and as cheap as possible while not taking up any additional rack space in the end.

The new hardware arrived, the server and two expanders, and was set up with Debian Buster and the ZFS available in the buster-backports repository. The ZFS pool was created with a mirror of two U.2 SSD drives for the log, two more U.2 SSD drives for the cache, 4 HDD spares (2 per expander), and 12 RAID-Z2 vdevs of 7 drives each (6 vdevs per expander).

Everything was looking good, and I started copying the data from the old NAS to this one using a script that made use of incremental snapshots, zfs send, and zfs receive. The first run of the script took many days but eventually finished. Then, after the third run, many problems were noted with the ZFS pool. In 4 of the RAID-Z2 vdevs a large number of disks had changed status to UNAVAIL or FAULTED, and all 4 spares had been put into use automatically. The pool status looked like this:

```
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.

        scsi-35000c500cacdbdd3  DEGRADED     0     0    12  too many errors  (resilvering)
        scsi-35000c500cacd2c77  DEGRADED     0     0     4  too many errors
        scsi-35000c500cacdc907  DEGRADED     0     0     0  too many errors  (resilvering)
        scsi-35000c500cacdb54f  FAULTED      0    18     0  too many errors  (resilvering)
        scsi-35000c500cacd291b  FAULTED      0    11    31  too many errors  (resilvering)
        scsi-35000c500cacdf757  FAULTED      0    10    48  too many errors  (resilvering)
        scsi-35000c500cacd1c9b  DEGRADED     0     0     0  too many errors
        scsi-35000c500cab51563  DEGRADED     0     0     1  too many errors  (resilvering)
```
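For illustration, a pool with the topology described above could be created along these lines. This is a sketch only: the pool name and all device paths below are placeholders, not the actual `/dev/disk/by-id` names from this system.

```shell
# Sketch of the described layout: 12 raidz2 vdevs of 7 drives, a mirrored
# SLOG, two cache devices, and four hot spares. Device names are placeholders.

# Create the pool with the first raidz2 vdev of 7 drives...
zpool create tank \
  raidz2 scsi-HDD01 scsi-HDD02 scsi-HDD03 scsi-HDD04 scsi-HDD05 scsi-HDD06 scsi-HDD07

# ...then add the remaining 11 raidz2 vdevs the same way, e.g.:
zpool add tank \
  raidz2 scsi-HDD08 scsi-HDD09 scsi-HDD10 scsi-HDD11 scsi-HDD12 scsi-HDD13 scsi-HDD14

# Mirrored log (SLOG) on two U.2 SSDs:
zpool add tank log mirror nvme-LOG1 nvme-LOG2

# Two U.2 SSDs as cache (L2ARC):
zpool add tank cache nvme-CACHE1 nvme-CACHE2

# Four HDD hot spares, two per expander:
zpool add tank spare scsi-SPARE1 scsi-SPARE2 scsi-SPARE3 scsi-SPARE4
```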
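The copy script itself is not shown in the post, but the incremental snapshot / zfs send / zfs receive pattern it describes might look roughly like the sketch below. The dataset names and the ssh target are assumptions for illustration, not details from the original setup.

```shell
#!/bin/sh
# Sketch of incremental replication with snapshots + zfs send/receive.
# "tank/data" and "newnas" are hypothetical names, not from the post.
set -eu

SRC=tank/data                   # dataset on the old NAS
DST=tank/data                   # dataset on the new NAS
NEWNAS=newnas                   # ssh target for the new server
NOW=$(date +%Y%m%d%H%M%S)

# Most recent snapshot from a previous run, if any (taken before snapshotting).
PREV=$(zfs list -H -t snapshot -o name -s creation "$SRC" | tail -n 1)

# Take a new recursive snapshot to send.
zfs snapshot -r "$SRC@$NOW"

if [ -n "$PREV" ]; then
    # Later runs: incremental send from the previous snapshot.
    zfs send -R -i "$PREV" "$SRC@$NOW" | ssh "$NEWNAS" zfs receive -F "$DST"
else
    # First run: full send of the whole dataset.
    zfs send -R "$SRC@$NOW" | ssh "$NEWNAS" zfs receive -F "$DST"
fi
```

Repeated runs of a script like this keep the destination in sync while only transferring the blocks changed since the last snapshot, which is why the first run takes days and later runs are much shorter.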