EpicFailGuy

joined 2 years ago
[–] EpicFailGuy@lemmy.world 2 points 1 week ago (1 children)

Not to butt into your conversation, just wanted to drop that my colleagues and I use what we call the "clone cars" method to combat our company's naming scheme.

So for example, we dubbed CAPROD01 "Cappy", NASPROD01 became "Nasir", and LTPDEV02 became "Luigi" (because he's always number 2).

Of course, in written communication we use the full names (which is much less of an inconvenience), and we always double-check in conversation or spell out full names before doing anything critical.

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

good bot *pats head*

[–] EpicFailGuy@lemmy.world 3 points 1 week ago

Hail Catler!

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

I mean, you can always install Proxmox on Debian XD

or for that matter LXD ... it's all kinda the same.

Gotta love OSS

[–] EpicFailGuy@lemmy.world 2 points 1 week ago (1 children)

Haven't heard the gossip, please pray tell.

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

That's AWESOME, I also named my NAS Atlas ... because it carries the weight of all my backups

Good call on those names, you're giving me some pretty cool ideas for my next servers

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

There are a TON of different tutorials and videos.

If you're looking for a beginner-friendly interface for your servers, I recommend Cockpit. You just `sudo apt-get install cockpit` and it gives you a nice-to-use web interface to manage most of your servers. You can then install plugins as needed, or add something like Netbird or Pangolin to make it accessible from the internet.
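In case it helps anyone getting started, here's a minimal sketch of that setup on a Debian/Ubuntu box (package and service names are the upstream defaults, your distro may differ):

```shell
# Install Cockpit from the standard repos
sudo apt-get update
sudo apt-get install -y cockpit

# Cockpit is socket-activated; enable and start it
sudo systemctl enable --now cockpit.socket

# The web UI is then reachable at https://<your-server-ip>:9090
```

You log in with a normal system account, and extras like cockpit-machines (VMs) or cockpit-podman (containers) are separate packages you can add later.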

If you want something more like what I'm doing here (virtualization), you can try Canonical's version of this, which runs on Ubuntu. It's called LXD: https://canonical.com/lxd/manage

Basically they're tiny virtualized Linux instances (containers) inside of your main Ubuntu server, each with its own isolated userspace (they share the host's kernel), so that changes on the base server don't bother your other apps.
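For the curious, spinning one of these up looks roughly like this (assuming LXD installed from snap; the container name is just an example):

```shell
# Install LXD and initialise it with sensible defaults
sudo snap install lxd
sudo lxd init --auto

# Launch an Ubuntu 22.04 container and open a shell inside it
lxc launch ubuntu:22.04 mycontainer
lxc exec mycontainer -- bash

# Tear it down when you're done
lxc stop mycontainer && lxc delete mycontainer
```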

[–] EpicFailGuy@lemmy.world 0 points 1 week ago

🤣 do you switch to elf names after the first 12?

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

Very good advice. Also, back up daily and test your backups often!

[–] EpicFailGuy@lemmy.world 1 points 1 week ago

Amen, feels cold and unimaginative

[–] EpicFailGuy@lemmy.world 0 points 1 week ago

Close, LXC. I do have a Portainer instance for Docker images, but I like the extra control that an LXC gives you.

 

What's everyone's server naming scheme?

 

Apartment dweller here. I can't store it outside and I don't have a garage, and I was getting tired of just having it in the living room, so I came up with this slightly better idea.

Anyone have any nice setups for keeping your bikes in your living spaces?

 

Poor Fernando ... he can't fucking win

 

I have a home lab I use for learning and to self host a couple of services for me and my extended family.

  • Nextcloud instance with about 1TB
  • Couple of websites
  • Couple of game servers

I'm running off an R430 with twin E5-2620 v3s, 128 GB of RAM, and spinning-rust storage.

When I deployed NC I did not think it through, and I stored all the data locally, which makes the instance too big to back up normally.

As a solution, I've split the NC software into its own LXC and a NAS into another, and I'm thinking about hosting a cheap NUC NAS to rsync the files to.
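The rsync part of that plan can be a one-liner in a nightly cron job. Here's a sketch as a small shell function, where the paths and the nuc-nas hostname are made up for illustration:

```shell
# Mirror the Nextcloud data dir to a backup target.
# -a  archive mode (recurse, preserve perms/times)
# -z  compress in transit
# --delete  keep the mirror exact (deletions propagate too!)
mirror_ncdata() {
    rsync -az --delete "${1:-/mnt/ncdata/}" "${2:-backup@nuc-nas:/srv/backups/ncdata/}"
}
```

One caveat: --delete makes this a mirror, not a versioned backup. If you want history, something like rsnapshot or borg on top is worth a look.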

I would also like to distribute the load of my server across separate nodes so I can get more familiar with HA and possibly hyperconverged infrastructure.

I would also like to have two nodes locally to be able to work on one without bringing down services.

Any advice / tips?

Should I skip the NAS and go straight into Ceph?

Would 3x NUCs with Intel i5s or i7s and 32 GB of RAM be enough?

Would I be better off with 3x pizza box servers like R220s or DL20s?

Storage-wise, I'm trying to decide between an M.2-to-SATA adapter and a mixture of SSDs and spinning rust.

Or would I be better off with SFF?

Otherwise, I was considering a single 24-bay disk array with an LSI card in IT mode, but I'm inexperienced with those and I'm not sure about power usage / noise (the rack sits right next to my workstation).

And yes, you can surprisingly put an LSI card in a NUC (this looks like a VERY fun project): https://github.com/NKkrisz/HomeLab/blob/main/markdown%2FLenovo_M720Q_Setup.md

Plus, most likely I would not expand the storage past 5 or 10 TB on each node.

Additionally, I'm looking at cost per watt: my current server runs at 168 W 90% of the time, those tiny NUCs look like they run at about 25 W, and the SFF boxes at 50-75 W depending on what they have. The shallow-depth servers also idle at 25-50 W depending on storage and processor options.

I also have a 12U rack at home and I would very much like to keep things racked and neat. It seems a lot easier to rack the NUCs than it would be to do with SFF cases.

Obviously I'm OK with buying new hardware (I'll be selling the current one once I migrate); that's part of the "learning" experience.

Any advice or experience you can share would be highly appreciated.

Thanks /c/selfhosted

 

Hello Friendos

I'm a security / cloud engineer and I've had this lab for about 6 months now. In the last few weeks I've decided to start using it to self-host some "production" services for me and my loved ones (extended family of 15), mainly a Nextcloud instance that serves as our "picture vault".

The hardware is a PowerEdge R430 with twin E5-2620s and 128 GB of RAM. It has 8x 1 TB 2.5" HDDs.

This thing ended up being really overpowered for what I use it for, and I feel like by now I have explored everything I wanted to on this hardware. I was thinking about scaling laterally to R230s so I could play with load balancing and HA.

However these servers only have 2-4 drive bays, and I have no experience with DAS.

Can you guys help with some links? I'm researching DAS enclosures. I understand that any server with a PCIe slot can take a SAS card, and that any SAS enclosure should then be compatible.

Can you guys foresee any issue with a server as small as an R230 connecting to a SAS DAS?

I see that DAS enclosures have multiple connections per module. Would I be able to connect multiple servers to the same module, or is it one server per connection, with no sharing?

If the connection can't be shared, I would have to serve the storage over the network instead, meaning hosting a NAS (I probably should anyway) and upgrading my switch from gigabit to 10G.

Would also appreciate some other recommendations for small form factor servers that can be bought for cheap. (18 inches or shorter)

Pic of current setup for attention ... don't judge my PC case :) The 3U chassis for it is in the mail.

 