#jbod — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #jbod, aggregated by home.social.
-
SSD death ☠️💀
I am busy rebuilding the data from an SSD which died suddenly, from one moment to the next, without giving any warning whatsoever.
I always monitor the SMART output of SSDs & mechanical spinners (HDDs).
I've not seen any SMART output indicating imminent death.
This drive acted like your girlfriend when she's just not in a good mood and, without explanation, says nothing to you in the morning, for hours.
This hard crash means the S.M.A.R.T. monitoring hardware didn't have proper communication with the integrated circuits on the SSD.
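For context, this is roughly the kind of periodic check I mean by monitoring SMART output: a minimal sketch, assuming smartmontools is installed (the /dev/sda path is only a placeholder). It can only relay the firmware's own overall-health verdict, which is exactly why it stays silent when the controller itself stops answering.
```python
# Hedged sketch: poll smartctl's overall health verdict for one drive.
# Assumes smartmontools is installed; /dev/sda is only a placeholder path.
import subprocess

def smart_health(device: str) -> str:
    """Return smartctl's overall-health line for `device`, or a short error note."""
    try:
        out = subprocess.run(
            ["smartctl", "-H", device],  # -H: the drive's overall health self-assessment
            capture_output=True, text=True, timeout=30,
        )
    except FileNotFoundError:
        return "smartctl not installed"
    except subprocess.TimeoutExpired:
        return "smartctl timed out (controller not answering?)"
    for line in out.stdout.splitlines():
        # ATA drives report "overall-health", SAS/SCSI drives "SMART Health Status"
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return f"no health line reported (smartctl exit code {out.returncode})"

if __name__ == "__main__":
    print(smart_health("/dev/sda"))
```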
The drive was powered on many times a month and never left without power for more than a week or so, so that has not been a contributing factor to the SSD's catastrophic failure.
The drive itself is fairly small.
The data on this dead SSD is backed up on remote drives connected as JBOD to machines I run remotely.
No ZFS on my backup machines:
- I want to run ZFS natively
- that means running a BSD OS on those machines
- that also means I will need to back up / restore all data on those drives, meaning
- I will need 200% of the used HDD / SSD space on those machines
- I need patience for that backup
- I need expensive extra HDDs for that project
- I won't pay USD 300 for a USD 120 HDD
NO ZFS under these global SSD / HDD market prices.
Sources:
- Moi
- man ls(1)
- man lsd(1)
- man cp(1)
- man smartctl(8)
- man zfs(8)
#HDD #SSD #crash #no #warning #on #TV #filesystems #remote #backup #network #JBOD #SMART #programming #bacula
-
A year in production with Ceph: how we arrived at a new reference architecture
Hi, Habr! My name is Igor Shishkin; I head the cloud-platform development department and am the SDS architect at Runity. I've previously written about how we chose an SDS (Software Defined Storage), why we settled on Ceph, and about our R&D processes. In this article I'll share what we ran into over a year in production, which cluster design decisions turned out to be mistakes, how that changed our reference architecture, and where we ended up.
https://habr.com/ru/companies/runity/articles/1021222/
#регоблако #ceph #s3 #hsdc #конфигурация #exhausted #jbod #hba #архитектура #кластер
-
Someone here with more troubleshooting steps/ideas?
I have a refurbished LSI 9361-8i card and it doesn't detect any drives at all.
I tried an SFF-8643-to-SFF-8643 cable to a backplane: nothing.
I tried an SFF-8643-to-4x-SFF-8482 cable attached directly to the drives, which spin up on startup but aren't recognized by the controller at all, regardless of whether they are 2013 or 2020 SATA drives.
The firmware is the latest, and neither the #RAID nor the #JBOD personality makes a difference.
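Not a fix, but one more data point worth collecting before blaming the card: a minimal sketch, assuming Broadcom's StorCLI utility (storcli64) is installed and the card shows up as controller 0 (both are assumptions), that simply dumps what the controller itself believes is attached on each enclosure/slot.
```python
# Hedged sketch: ask the MegaRAID controller what it sees, via StorCLI.
# Assumes storcli64 is on PATH and the 9361-8i is controller /c0 (an assumption).
import subprocess

COMMANDS = [
    ["storcli64", "/c0", "show"],            # controller summary: firmware, personality, drive count
    ["storcli64", "/c0/eall/sall", "show"],  # every enclosure/slot the controller reports
]

for cmd in COMMANDS:
    print("$", " ".join(cmd))
    try:
        subprocess.run(cmd, check=False)     # just print whatever the tool returns
    except FileNotFoundError:
        print("storcli64 not found; install Broadcom StorCLI first")
        break
```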
-
All services are back online on the new JBOD. The new JBOD uses LUKS + BTRFS RAID10 on top. It's purring! #btrfs #jbod #sysadmin #gnulinux #freesoftware #opensource #floss #debian
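For anyone curious what such a stack looks like, here is a minimal dry-run sketch of a LUKS-then-BTRFS-RAID10 build; every device path, mapper name, and mount point is a placeholder I'm assuming, not the poster's actual layout, and the commands are only printed, never executed.
```python
# Hedged dry-run sketch of a LUKS + BTRFS RAID10 stack: prints the commands only.
# All device paths, mapper names, and the mount point are placeholders.
DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical JBOD members

commands = []
mapped = []
for i, disk in enumerate(DISKS):
    name = f"jbod{i}"
    mapped.append(f"/dev/mapper/{name}")
    commands.append(f"cryptsetup luksFormat {disk}")   # encrypt each raw disk (destructive)
    commands.append(f"cryptsetup open {disk} {name}")  # expose it as /dev/mapper/<name>

# BTRFS then provides RAID10 for both data (-d) and metadata (-m) across the mappings.
commands.append("mkfs.btrfs -d raid10 -m raid10 " + " ".join(mapped))
commands.append(f"mount {mapped[0]} /srv/pool")        # mounting any member mounts the whole fs

for cmd in commands:
    print(cmd)
```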
-
I bought a 16 TB #Seagate IronWolf Pro drive and just got done installing it in my Sabrent 5-bay #JBOD.
When I turn it on it makes a strange sound, and neither Disk Management nor DiskPart at the command prompt can see the drive to initialize or format it.
I don't have to do anything special to a NAS drive running as a regular non-RAID HDD, do I?
My other four bays contain 8 TB Barracudas, and they have all worked perfectly fine.
Is this perhaps a bad drive?
-
Soon with 3.26 petabytes: Western Digital packs two more 32 TB drives into its JBOD https://www.computerbase.de/news/storage/kuenftig-mit-3-26-petabyte-western-digital-packt-noch-zwei-32-tb-platten-mehr-ins-jbod.95057/ #WDC #JBOD #SC25
-
hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.
I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most here is reliability and stability (specifically regarding parity); integrity is the need. Reads/writes don't have to be blazingly fast (not that I'd be mad about it).
I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM it's formatted as XFS. That "seems" fine in limited testing so far (and seems fast? so it does look like the defaults got the striping right), but I kind of hate how I have multiple levels of abstraction here.
I don't think there's been any change on the #BTRFS front re: RAID-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.
I'm open to #LVM, etc, or whatever might help me stay in tree and up to date. Thank you! Boosts appreciated and welcome.
#techPosting
-
Well, added a storage tier this evening.
A couple of 4 TB drives which I had lying around, as #JBOD (#MergerFS), for those media files which are "replaceable" and don't really need to be on the #ZFS mirrors (~70 & ~16 TB).
I now feed #Plex from the different tiers using #OverlayFS.
Pretty neat. I can move things around without Plex noticing, select the tier in #Sonarr / #Radarr, and if one of those 4 TB drives goes down, NBD, and I can see what is missing.
More IOPS don't hurt either.
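Roughly how that tiering could be wired, as a dry-run sketch: a MergerFS pool over the loose 4 TB drives, then a read-only OverlayFS union that presents the ZFS tier and that pool as one directory for Plex. All paths and the create policy here are my assumptions, not the poster's actual mounts.
```python
# Hedged dry-run sketch: MergerFS pool over spare disks, then an OverlayFS
# read-only union of the ZFS tier plus that pool for Plex to read from.
# Every path below is a placeholder, not the poster's real layout.
commands = [
    # Pool the "replaceable" drives; category.create=mfs writes new files to
    # whichever branch currently has the most free space.
    "mergerfs /mnt/t2-disk1:/mnt/t2-disk2 /mnt/tier2 -o category.create=mfs",
    # With only lowerdirs (no upperdir) the overlay is read-only, which is fine
    # for a media library that Plex only needs to read.
    "mount -t overlay overlay -o lowerdir=/tank/media:/mnt/tier2 /srv/plex-media",
]
for cmd in commands:
    print(cmd)
```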
-
CW: Server hardware rabbit hole, long, acronym hell
One of my server nodes is a sort of "leftover consumer parts in a 4U chassis" deal. It's never been especially good, but it got 24 HDDs online in a pinch to utilize random drives lying around as a #JBOD. This week it started misbehaving more than usual due to a bad software bug that consumed its meager 16G of RAM like it was nothing, so it's been drained ever since. I ordered a #BLIKVM to be able to rescue it while traveling, since it also likes to hard-lock. Today was the day to install it, but it turns out I'm out of PCIe slots, now suddenly needing a fifth. Long story short, I lost my last shreds of five a duck about rescuing it, and now there's a bunch of eBay orders incoming, including a Supermicro E-ATX mobo, a couple of Xeon Silvers, and 128G of RAM, plus the bits and bobs to hook it all up. (1/2)
-
CERN's 1-exabyte data center: how the data is stored
Big science is impossible without big computing. At the very least, that statement holds in nuclear physics. The lion's share of the most powerful supercomputers are installed precisely in scientific institutions, including universities. Several branches of modern science depend directly on computation and on the analysis of the big data gathered from observations. For example, the European Organization for Nuclear Research (CERN) operates one of the largest data centers in the world. Without that computing cluster we would still be hunting for the Higgs boson, and the Standard Model would have remained unfinished.
https://habr.com/ru/companies/ruvds/articles/822681/
#ruvds_статьи #поиск_лекарств #CERN #ЦЕРН #БАК #Большой_взрыв #зептосекунда #JBOD #Ultrastar_Data102 #CMS #ALICE #ATLAS #LHCb #бозон_Хиггса #European_Grid_Infrastrucrure #OpenStack #CephFS #CASTOR #XRootD #Swift #RADOS_block_devices #RBD #сверхтекучий_гелий
-
I have an idea for creating #JBOD boxes that stack on top of each other so you can fairly quickly add more storage to your #homelab.
Each box would have its own small PSU to power it and whatever else is needed to connect to the main server. However, I am lacking in hardware knowledge. I was thinking of each box having a SATA/SAS card and the PSU in it, then connecting that to the main host via an optical connection, mini-SAS, or something similar. What hardware should I be looking for?
-
I received my Sabrent #Jbod DAS. I got it set up and threw in a couple of 8 TB drives.
What is the best way to do a #Windows 10 mass file transfer from my old USB drives?
From what I have read, it sounds like Robocopy at the Command Prompt is slightly faster and better than doing a standard cut and paste.
Any recommendations or tips?
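For reference, this is the kind of Robocopy invocation people usually mean; a sketch only, with placeholder source/destination paths and one common flag combination rather than anything specific to this enclosure (it just prints the command instead of running it).
```python
# Hedged sketch: a typical Robocopy bulk-transfer command line on Windows.
# Source and destination paths are placeholders; this only prints the command.
src = r"E:\old-usb-drive"            # hypothetical source (old USB drive)
dst = r"F:\archive\old-usb-drive"    # hypothetical destination on the DAS
cmd = [
    "robocopy", src, dst,
    "/E",                 # recurse, including empty directories
    "/DCOPY:T",           # preserve directory timestamps
    "/R:1", "/W:1",       # one retry, one-second wait, so a flaky file can't stall the job
    "/MT:8",              # multithreaded copy, usually the speed win over Explorer cut-and-paste
    "/TEE", "/LOG:robocopy.log",  # show progress on screen and keep a log file
]
print(" ".join(cmd))      # dry run: inspect the command before running it for real
```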