That sounds like a fun experimentation platform. Power shouldn't be a problem except for the bill; the PSUs are meant for redundancy rather than combined output, and you probably won't draw more than 300 watts most of the time. The drive controller probably has staggered spin-up to prevent overcurrent conditions.

The bigger factor in my experience is the noise: expect a jet engine during boot and a dull roar the rest of the time. Supermicro boards often let you set the fans lower, but the PSUs will be loud all the time.

If I were you, I'd install TrueNAS or Proxmox and play around with it. You may be able to triage the issues. You may need to set the drive controllers to HBA mode or flash HBA firmware onto them. It's probably not worth running long term with Sandy Bridge, though; the power-to-performance ratio is pretty bad.
Thanks! So you don't think I'm gonna blow my breakers? Alright, we will see.
"TrueNAS or ProxMox ... triage the issues. ... set the drive controllers to HBA mode or flash an HBA firmware to them."
- Right. I've installed TrueNAS on em a couple times previously. They were running ZFS software raid. So ... maybe just use the raid controller instead? Honestly, I've not tried that yet.
- I've installed a couple different Supermicro firmware versions to them. Got em up to date with the HTML5 (not Java) remote console. That did not fix the crashes. Supermicro's driver download services are a bit weird, perhaps I missed something they need.
- All of my prior troubleshooting has been from 1200 miles away. Yes, I'll do my best to triage. Spin up an OS, and then one by one, check each drive and bay.
I'm gonna enjoy working with them, but I already have a couple Dell Gen13 (Broadwell) servers in my lab. My main host, running Proxmox, is a Dell R230 with 8 vCPUs and 64 GB of RAM. I run up to 8 VMs there, and it's really all I need.
I never run my Dell R430 with 80 vCPUs and 180 GB. No need for that much juice. I really enjoyed upgrading it to the max, and now I don't use it. After I finish shopping out these new Supermicro monsters, I'm gonna be happy to sell em off to somebody who wants a big chassis with a bunch of disks.
They were running ZFS software raid. So ... maybe just use the raid controller instead?
It is generally a bad idea to do that nowadays, because it ties you forever to that controller. If it dies, you will need to find an exact replacement or accept that the whole array is lost. With software RAID you can run on any hardware.
Wendell from Level1Techs is a good reference:
https://youtube.com/watch?v=l55GfAwa8RI
https://youtube.com/watch?v=Q_JOtEBFHDs
BTW: good score, you will have fun with those for sure.
Those were a couple really good vids. I've never been a storage specialist, but I do manage all the storage for a small MSP, so I'm not ignorant. Like, I know ZFS pretty darn well, and I apparently collect storage servers for fun.
That Wendell guy tho, he really knows his shit.
I don't know that I got any final answers from him, but it left me with a lot to consider.
Honestly, a good chunk of what he had to say had me questioning my build with my Highpoint SSD7540 PCIe 4.0 x16 / 8x M.2 Ports NVMe card ... on a completely different machine, a build I was quite satisfied with until now. (It's on my gamer/server, my main box.)
I put a lot of research and performance testing into the Highpoint build. It's an 8-port card supporting Gen 4 NVMe in an (actually) 16-lane slot. I populated 4 bays, so each stick gets 4 lanes, which is great for Gen 4. (I figured some day in the future, when Gen 4 NVMe is dirt cheap, I'll fill the rest and each stick will just get 2 lanes.) After some testing, I decided to use the hardware RAID controller on the card.

Considering what old Wendell had to say, I suspect it should perhaps be software RAID instead ... still, that would mean relying on Windows to run the RAID, and I don't trust Windows. And then there's the fact that, after reviewing all the spec sheets, I've realized there's a lot I don't know about the card. But the Highpoint smokes, and I mostly just store video games there, so maybe bit rot isn't a big deal anyway.
All very interesting stuff. Thanks.
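The lane math in that build sketches out like this (a toy calculation; the even static split is an assumption on my part, since cards like the SSD7540 use a PCIe switch and share bandwidth dynamically):

```python
# Toy lane-allocation math for an x16 card feeding N M.2 drives.
# Assumes lanes split evenly; a card with a PCIe switch actually shares
# bandwidth dynamically, so this is only the worst-case static split.

TOTAL_LANES = 16
GEN4_PER_LANE_GBPS = 1.97  # approx. usable PCIe Gen 4 throughput per lane, GB/s

def lanes_per_drive(drives: int, total: int = TOTAL_LANES) -> int:
    """Lanes each drive gets under an even static split."""
    return total // drives

for n in (4, 8):
    lanes = lanes_per_drive(n)
    print(f"{n} drives -> x{lanes} each, ~{lanes * GEN4_PER_LANE_GBPS:.1f} GB/s per drive")
```

Which lines up with the comment above: 4 populated bays get x4 each (full Gen 4 NVMe speed), and filling all 8 drops each stick to x2.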
1000 watts is ~10 amps at 120 V (I know what it actually is; I'm using round figures to make it easy to grok). Plus, isn't this a 220 V device?
If it's 120 V, it's likely to have a different plug design; I forget the number.
That said, it's gonna pull hundreds of watts at idle.
I'd sell much of it and use that to buy what I really want. I'd consider keeping drives if they're a useful size for you.
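The breaker math above sketches out like this (assuming a common US 15 A / 120 V residential circuit and the usual 80% continuous-load rule; both are my assumptions, not stated in the thread):

```python
# Rough breaker-headroom math for a homelab server on a US residential circuit.
# Assumptions: 120 V circuit, 15 A breaker, 80% rule for continuous loads.

def amps(watts: float, volts: float = 120.0) -> float:
    """Current drawn at a given power and voltage (ignoring power factor)."""
    return watts / volts

BREAKER_AMPS = 15.0
continuous_limit_w = 0.8 * BREAKER_AMPS * 120.0  # ~1440 W usable continuously

for load_w in (300, 1000):  # typical observed draw vs. PSU nameplate
    verdict = "OK" if load_w <= continuous_limit_w else "over"
    print(f"{load_w} W -> {amps(load_w):.1f} A ({verdict} on a 15 A circuit)")
```

So even at the full 1000 W nameplate the server alone stays under a 15 A breaker's continuous limit; it's sharing the circuit with other big loads that would trip it.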
It'll probably only draw a couple hundred watts, even if substantially full. The real thing to tackle here is heat. It will definitely warm up your room.
Source: I have a 26-drive array, and it only draws around 350 W with all drives spinning during a parity check. Usually they aren't all spinning, though, and it goes down to around 220 W idle. That's with an Epyc 7702 and a 25 Gb NIC as well.
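Those numbers imply a pretty modest per-drive cost (a back-of-envelope sketch; attributing the whole parity-check delta to the 26 spinning drives is an assumption):

```python
# Back-of-envelope per-drive power from the array numbers above.
# Assumption: the entire idle-to-parity-check delta comes from the drives.
idle_w, parity_w, drive_count = 220, 350, 26
delta_per_drive_w = (parity_w - idle_w) / drive_count
print(f"~{delta_per_drive_w:.1f} W extra per spinning drive")
```

Roughly 5 W per spinning drive, which is why even a fully loaded chassis tends to land in the low hundreds of watts rather than anywhere near the PSU nameplate.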
Ok cool. Yeah, we closed our office and I work from home now, so my only bench is my living room table. It's gonna be an interesting moment when I power the first one up, but I'm glad to hear they will probably run.
Yeah, unless you have really jacked-up wiring (like, really bad), then the worst that would happen is tripping the breaker. But I don't think you'll pull enough amps to do that with this setup alone. Is there anything else using a lot of power on this breaker?
Nah, I live alone. Just me, two cats, and my robots. I can turn everything off if I want.
I pulled the rails off today, packed those up. You know, so I don't slice my leg open walking by them.
I'll plug one in this week and get started with it.
We run 120v in our colos. Standard plugs. I haven't plugged it in yet, but I don't think the cable or voltage will be an issue.
The drives ... well I've got at least 50 drives here. (There were a few spares too.)
They're all WD Reds, 4 TB and above. Great for this application, but my homelab servers both have several TB of unused redundant storage, including SSD on each.
So I don't need em, but that many drives ... hell, if I sold just the drives as refurbs on eBay, I think I could make like $2k.
I do anticipate selling the two devices with the drives as a complete kit.
I hope I can sell them locally (Portland OR). These beasts would be a b!tch to ship.