this post was submitted on 25 Apr 2026
21 points (95.7% liked)

Hardware


I just got two of these. Fully loaded. Disks, sleds, rails.
Fiber cards + 4 onboard NICs and 4 more on another card.
It's a dual-proc board with a bunch of RAM slots. (I think it's Sandy Bridge procs, DDR3.)
20 HDD bays. These things are (older) beastly storage boxen.

Board Manufacturer: Supermicro
Chassis Part Number: CSE-846BTS-R920BP
Board Part Number: X9DRi-LN4+/X9DR3-LN4+
Product Part Number: SSG-6047R-E1R24N

I got them because they were at a remote colo, and they crashed a bunch of times.
They cost us more downtime than they were worth.
I happened to be in town and made my boss an offer.
He didn't have to pay for e-waste fees, and I removed his problem for the low, low cost of $0.

So now they are my problem.
I don't need 200 TB of redundant storage. I'm gonna shop em out and sell em.
No idea if the dual 920 W PSUs will blow my apartment breakers. It takes a lot of juice to spin up 20 HDDs.
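
My rough back-of-envelope says probably not, if these per-drive wattage guesses are anywhere close (they're assumptions, not measurements):

```python
# Rough power math for one chassis; every figure here is a ballpark guess.
DRIVES = 20
SPINUP_W = 30         # a 3.5" HDD peaks around 25-30 W while spinning up
IDLE_W = 8            # and settles to roughly 5-8 W once spinning
SYSTEM_BASE_W = 150   # board, two CPUs, fans, PSU overhead (assumption)
BREAKER_W = 120 * 15  # a common 15 A / 120 V apartment circuit

worst_case = SYSTEM_BASE_W + DRIVES * SPINUP_W  # every drive at once
steady = SYSTEM_BASE_W + DRIVES * IDLE_W

print(f"worst-case spin-up: {worst_case} W of {BREAKER_W} W available")
print(f"steady state:       {steady} W")
# worst-case spin-up: 750 W of 1800 W available
# steady state:       310 W
```

If those guesses hold, even an all-at-once spin-up leaves headroom on a 15 A circuit.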

So far, I've hauled them across half the US, up my stairs, and admired them.
I found a youtuber 'Art of the Server' with some helpful vids. Watched a bunch.
No real idea what I'm doing next.

I've configured them several times in the past. They always died after months of steady service.
Dead disks, etc. Maybe bad controllers?
A fault that intermittent is hard to diagnose, but they are in front of me now.
I can do whatever I need to. These are complicated devices.
My original plan of teardown and rebuild seems unwise now.

I'm interested in any practical feedback.

top 10 comments
SlightlyNormal@lemmy.world 5 points 2 days ago

That sounds like a fun experimentation platform. Power shouldn't be a problem except for the bill; the PSUs are meant to be redundant rather than for combined output, and you probably won't draw more than 300 watts most of the time. The drive controller probably does staggered spin-up to prevent overcurrent conditions.

The bigger factor in my experience is the noise: expect a jet engine during boot and a dull roar the rest of the time. Supermicro boards often allow you to set the fans lower, but the PSUs will be loud all the time.

If I were you, I'd install TrueNAS or Proxmox and play around with it. You may be able to triage the issues. You may need to set the drive controllers to HBA mode or flash HBA firmware onto them. It's probably not worth running long term on Sandy Bridge; the power-to-performance ratio is pretty bad.
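
If you want to poke at the fan settings, here's a minimal sketch for reading the current fan mode over IPMI. It assumes the raw OEM command (netfn 0x30, cmd 0x45) that's widely reported for X9-era Supermicro BMCs; verify against your own board before trusting it:

```python
import subprocess

# Fan modes commonly reported for X9/X10-era Supermicro BMCs (assumption).
FAN_MODES = {0x00: "Standard", 0x01: "Full", 0x03: "PUE Optimal", 0x04: "Heavy IO"}

def get_fan_mode(host: str, user: str, password: str) -> str:
    # `raw 0x30 0x45 0x00` reads the mode; `raw 0x30 0x45 0x01 <mode>` sets it.
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "raw", "0x30", "0x45", "0x00"],
        capture_output=True, text=True, check=True,
    )
    return FAN_MODES.get(int(out.stdout.strip(), 16), "unknown")
```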

dbtng@eviltoast.org 3 points 2 days ago

Thanks! So you don't think I'm gonna blow my breakers? Alright, we will see.

"TrueNAS or ProxMox ... triage the issues. ... set the drive controllers to HBA mode or flash an HBA firmware to them."

  • Right. I've installed TrueNAS on em a couple times previously. They were running ZFS software RAID. So ... maybe just use the RAID controller instead? Honestly, I've not tried that yet.
  • I've installed a couple different Supermicro firmware versions on them. Got em up to date with the HTML5 (not Java) remote console. That did not fix the crashes. Supermicro's driver download services are a bit weird; perhaps I missed something they need.
  • All of my prior troubleshooting has been from 1200 miles away. Yes, I'll do my best to triage: spin up an OS, and then, one by one, check each drive and bay (something like the sketch below).
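
Something like this is what I have in mind for the drive pass, assuming smartmontools is installed and the controller exposes the disks as plain /dev/sdX devices (i.e., HBA/IT mode):

```python
import glob
import subprocess

# Walk every disk node and ask for the overall SMART health verdict.
for dev in sorted(glob.glob("/dev/sd?")):
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    verdict = "PASSED" if "PASSED" in result.stdout else "CHECK ME"
    print(f"{dev}: {verdict}")
    # Follow up on suspects with `smartctl -a <dev>` and watch
    # Reallocated_Sector_Ct, Current_Pending_Sector, and UDMA_CRC_Error_Count
    # (CRC errors often implicate the backplane or cabling, not the disk).
```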

I'm gonna enjoy working with them, but I have a couple Dell Gen13 (Broadwell) servers already in my lab. My main host, running Proxmox, is a Dell R230 with 8 vCPUs and 64 GB. I run up to 8 VMs there, and it's really all I need.
I never run my Dell R430 with 80 vCPUs and 180 GB. No need for that much juice. I really enjoyed upgrading it to the max, and now I don't use it. After I finish shopping out these new Supermicro monsters, I'm gonna be happy to sell em off to somebody that wants a big chassis with a bunch of disks.

fulg@lemmy.world 6 points 2 days ago

They were running ZFS software raid. So ... maybe just use the raid controller instead?

It is generally a bad idea to do that nowadays, because it ties you forever to that controller. If it dies, you will need to find an exact replacement or accept that the whole array is lost. With software RAID you can run any hardware.
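
To make that concrete: ZFS keeps the pool metadata on the disks themselves, so moving a pool between machines is just an export and an import, with no controller dependency. A minimal sketch, using "tank" as a placeholder pool name:

```python
import subprocess

def export_pool(pool: str = "tank") -> None:
    # Run on the old machine before pulling the disks.
    subprocess.run(["zpool", "export", pool], check=True)

def import_pool(pool: str = "tank") -> None:
    # Run on the new machine; scanning by-id paths keeps device
    # naming stable regardless of which HBA the disks land behind.
    subprocess.run(["zpool", "import", "-d", "/dev/disk/by-id", pool],
                   check=True)
```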

Wendell from Level1Techs is a good reference:

https://youtube.com/watch?v=l55GfAwa8RI

https://youtube.com/watch?v=Q_JOtEBFHDs

BTW: good score, you will have fun with those for sure.

dbtng@eviltoast.org 2 points 1 day ago

Those were a couple really good vids. I've never been a storage specialist, but I do manage all the storage for a small MSP, so I'm not ignorant. Like, I know ZFS pretty darn well, and I apparently collect storage servers for fun.
That Wendell guy tho, he really knows his shit.
I don't know that I got any final answers from him, but it left me with a lot to consider.

Honestly, a good chunk of what he had to say had me questioning my build with my Highpoint SSD7540 PCIe 4.0 x16 / 8x M.2 Ports NVMe card ... on a completely different machine, a build I was quite satisfied with until now. (It's on my gamer/server, my main box.)

I put a lot of research and performance testing into the Highpoint build. It's an 8-slot card supporting Gen 4 NVMe in an (actually) 16-lane slot. I populated 4 bays, so each stick gets 4 lanes, which is great for Gen 4. (I figured some day in the future, when Gen 4 NVMe is dirt cheap, I'll fill the rest, and each stick will just get 2 lanes.)

After some testing, I decided to use the hardware RAID controller on the card. Considering what old Wendell had to say, I suspect that perhaps it should be software RAID instead ... still, that would mean relying on Windows to run the RAID, and I don't trust Windows. And then there's the fact that, after reviewing all the spec sheets, I've realized there's a lot I don't know about the card. But the Highpoint smokes, and I mostly just store video games there, so maybe bit rot isn't a big deal anyway.
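
For what it's worth, the lane math works out like this (the ~1.97 GB/s per Gen 4 lane after encoding overhead is an approximation, and the card may allocate lanes differently than this simple static split):

```python
# Static lane split for an x16 card with 8 M.2 slots (simplified model).
GEN4_GBS_PER_LANE = 1.97  # 16 GT/s minus 128b/130b encoding, approximately
CARD_LANES = 16

for populated in (4, 8):
    lanes_each = CARD_LANES // populated
    ceiling = lanes_each * GEN4_GBS_PER_LANE
    print(f"{populated} drives -> x{lanes_each} each, ~{ceiling:.1f} GB/s ceiling")
# 4 drives -> x4 each, ~7.9 GB/s ceiling
# 8 drives -> x2 each, ~3.9 GB/s ceiling
```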

All very interesting stuff. Thanks.

Onomatopoeia@lemmy.cafe 1 point 2 days ago

1000 watts is ~10 amps at 120 V (I know what it actually is; I'm using round figures to make it easy to grok). Plus, isn't this a 220 V device?

If it's 120 V, it's likely to have a different plug design; I forget the number.

That said, it's gonna pull hundreds of watts at idle.

I'd sell much of it and use the proceeds to buy what I really want. I'd consider keeping the drives if they're a useful size for you.

CorrectAlias@piefed.blahaj.zone 3 points 2 days ago

It'll probably only draw a couple hundred watts, even if substantially full. The real thing to tackle here is heat. It will definitely warm up your room.

Source: I have a 26-drive array and it only draws around 350 W with all drives spinning during a parity check. Usually they aren't all spinning, though, and it goes down to around 220 W idle. That's with an Epyc 7702 and a 25 GbE NIC as well.

dbtng@eviltoast.org 2 points 2 days ago

Ok cool. Ya, we closed our office, and I work from home now, so my only bench is my living room table. It's gonna be an interesting moment when I power the first one up, but I'm glad to hear they will probably run.

CorrectAlias@piefed.blahaj.zone 2 points 1 day ago

Yeah, unless you have really jacked-up wiring (like, really bad), the worst that would happen is tripping the breaker. But I don't think you'll pull enough amps to do that with this setup alone. Is there anything else using a lot of power on this breaker?

dbtng@eviltoast.org 2 points 1 day ago

Nah, I live alone. Just me, two cats, and my robots. I can turn everything off if I want.
I pulled the rails off today, packed those up. You know, so I don't slice my leg open walking by them.
I'll plug one in this week and get started with it.

dbtng@eviltoast.org 1 point 2 days ago

We run 120 V in our colos, standard plugs. I haven't plugged it in yet, but I don't think the cable or voltage will be an issue.

The drives ... well, I've got at least 50 drives here. (There were a few spares too.)
They are all WD Reds, 4 TB and above. Great for this application, but my homelab servers both have several TB of unused redundant storage, including SSDs on each.
So I don't need em, but that many drives ... hell, if I sold just the drives as refurb on eBay, I think I could make like $2k.

I do anticipate selling the two devices with the drives as a complete kit.
I hope I can sell them locally (Portland OR). These beasts would be a b!tch to ship.