| greenjeans | Have never messed with nginx at all | 00:02 |
|---|---|---|
| Xenguy | That's great greenjeans, thank you! | 00:12 |
| Xenguy | Instructions to set up a package mirror: https://pkgmaster.devuan.org/devuan_mirror_walkthrough.txt | 00:12 |
| Xenguy | Instructions to set up an ISO mirror: https://files.devuan.org/MIRRORS.txt | 00:12 |
| Xenguy | .oO( Should we be linking these files from the web site?) | 00:14 |
| greenjeans | Cool, going over the documentation now, i'm sure i'll have some questions along the way, but it seems fairly straightforward | 00:14 |
| golinux | Probably. Was just thinking that myself . . . | 00:14 |
| Xenguy | Would love to hear whatever stories ensue from your mirror experience :-) | 00:15 |
| Xenguy | golinux, thanks, ACK | 00:15 |
| greenjeans | It ought to be blazing fast, the system there is unused like 99.9% of the time, seems smart to do something useful with it | 00:18 |
| greenjeans | My long-term evil-genius plan is to convert this whole city to Devuan linux, and it's do-able, I literally have a key to city hall ;) | 00:21 |
| fsmithred | Xenguy, fdo is linked, and on the front page of fdo are MIRRORS.txt and README.txt | 00:23 |
| fsmithred | the one for package mirrors is harder to find, but the few who want to do that (or ISO mirrors) come and talk to us here, on the forum, or on the mailing list. | 00:27 |
| Xenguy | fsmithred, I'm grepping but can't find it, what page(s) is FDO linked from? | 00:30 |
| fsmithred | I guess it's not. Just the individual mirrors are listed, but those should all have the same file. | 00:34 |
| Xenguy | Okay, gotcha | 00:35 |
| Xenguy | So adding links to those 2 sites is still on the table I suppose | 00:36 |
| fsmithred | yeah, I don't rule it out. | 00:36 |
| fsmithred | I don't think it's a bad thing that people have to come looking for us and talk with us before they set up a mirror. | 00:38 |
| Xenguy | It's a good point. | 00:39 |
| fsmithred | if there's a problem, it helps to know who to talk to | 00:40 |
| Xenguy | It seems likely that when they go to set up a mirror, they'd try to contact someone in the Devuan project to keep us in the loop, get hooked up to the web site, etc., so that can't hurt either | 00:41 |
| Xenguy | They're likely going to reach out in such situations I assume | 00:42 |
| onefang | greenjeans: For package mirrors I'm the one to talk to. I'm almost finished with a series of house moves, just the last lot of unpacking to do. So I was planning on getting stuck into my backlog of work starting early next week, or the weekend maybe. | 01:30 |
| onefang | One of those things in the backlog is to finish sorting out our wiki, where I think it was chomwitt that has already put together some docs about our mirrors, as have I. I plan to pull that together, polish it, and make it easier to find. | 01:32 |
| onefang | Today I have upgrades to deal with. | 01:34 |
| Xenguy | onefang, I want to float the idea that the wiki (whenever it arrives) should be about 'user generated documentation' that is not simply duplication of the web site content... | 02:05 |
| Xenguy | Perhaps like forum content, only collected in more organized wiki format... | 02:06 |
| onefang | I point to the web site instead of duplicating it. Most of what is in the existing wikis is user generated documentation. | 02:06 |
| Xenguy | Basically I'd be inclined to discourage duplication of the web site content, other than links back to the official web site | 02:07 |
| Xenguy | This is the way | 02:07 |
| greenjeans | onefang: Totally understand the move, as we did that not long ago ourselves. No hurry, I'm still in the concept stage here; I need to order some parts and put together a server first anyway, and it being summertime I'll be busy with house repairs and the garden for the next couple of months | 02:10 |
| greenjeans | I assembled a bunch of desktops back in the day, but never a dedicated server; any tips and suggestions on components and hardware would be really helpful | 02:14 |
| joerg | enterprise storage, *lots* of RAM | 04:00 |
| joerg | maybe redundant PSUs as it's always the PSUs that go POOf | 04:00 |
| joerg | s/ storage / drives / | 04:02 |
| onefang | ECC RAM if you can get it. | 04:03 |
| joerg | and a "small" SSD for the system while you keep the bulk of the data on a separate (enterprise RAID1?) HDD. That's what I do | 04:07 |
| joerg | LOL, what the heck?! My Seagate Constellation ES.3 ST2000NM0033 suffered an overrun in smartctl, went down from ~90kh last time I checked to 422h | 04:15 |
| joerg | or was it even more? some 10 years | 04:15 |
| joerg | sda (SSD) 103859 --- Power-on Hours | 04:17 |
| tempforever | I've had a few drives' power-on hours overflow | 15:15 |
| tempforever | not sure they ever reached the 100k mark though | 15:15 |
| bb|hcb | joerg: (re drive becoming young again) And you discovered the fountain of youth without sharing with us the recipe?! | 18:50 |
| bb|hcb | About servers - if it is a single machine, definitely go for RAID1 on the boot and OS drives. A single drive means downtime WHEN it fails (not IF). | 18:54 |
| bb|hcb | SSDs over SATA/SAS are slow; spinning rust (HDDs) is even slower. But going for NVMe means that you need a hot-swap enclosure, because replacing a failed internal NVMe also means downtime. | 18:57 |
| bb|hcb | The problem with NVMe hot-swap is the high cost; I'd rather compromise for slower but hot-swappable drives | 18:58 |
| bb|hcb | About RAID - I always go for RAID1; 5/6 are too slow in recovery. RAID1 also allows you to do vendor diversity - the chances of two SSDs from the same vendor/model/batch failing simultaneously are much higher... | 19:04 |
| joerg | vendor diversity - I failed on that | 19:33 |
| joerg | but then my Seagate Constellations are fine after 12 years of 24/7 | 19:34 |
| joerg | and I am waiting every day for screeaak screeaak >>/dev/sdb dead; /dev/sdc dead<< | 19:35 |
| joerg | a hint regarding choosing drives: when I planned to shop for 2 enterprise drives I had a website of a large data center (or the like) that published reports about which of their millions of spinning-rust drives had how many failures during their lifetime. Alas, I can't find the URL anymore | 19:41 |
| joerg | also looking for MTBF in datasheets | 19:42 |
| joerg | though that could be cheating | 19:43 |
| joerg | for Seagate Constellations they corrected their MTBF in datasheets from initially 1M to 2M hours iirc | 19:44 |
| joerg | :-) | 19:44 |
| joerg | ooh, -dev. I'm inattentive to channel names lately. Sorry | 19:48 |
| joerg | anyway hot-swap is the gold standard, but I'm happy with redundancy that allows me to schedule downtime and keep it <<1h since the needed parts already arrived from the seller. Then open cabinet, shut down, swap drive, restart. 10 minutes downtime. Never had to do it; I had to swap the PSU though, TWICE already, and I've got no redundant PSUs in that "server" | 19:55 |
| joerg | and a Y-cable for power that distributed power from the PSU to the two Constellation drives. :-) A 50ct component that gave me terrible headaches since it caused both drives to power down and spin up at random and not in sync. Responsible for ca. half or 2/3 of the power cycles on those drives | 20:10 |
| joerg | I wonder how manufacturers evaluate/determine a 2 million hours MTBF (228 years) | 20:15 |
| joerg | statistics voodoo | 20:16 |
| joerg | 1k drives 1 year stresstest | 20:18 |
| golinux | The certificate for mailinglists.dyne.org expired on 5/30/2025. | 23:18 |
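
A few sketches follow for the technical threads in the log above. First, the mirror setup greenjeans and Xenguy discuss around 00:12: the authoritative instructions are the devuan_mirror_walkthrough.txt and MIRRORS.txt links posted there. The snippet below is only a hedged illustration of the pull side, with a placeholder upstream host, module name, and local path.

```sh
# Hypothetical rsync pull for an ISO mirror. The real upstream host,
# module name, and recommended options come from MIRRORS.txt /
# devuan_mirror_walkthrough.txt, not from this sketch.
rsync -avH --delete \
    rsync://upstream.example.org/devuan_cd/ \
    /srv/mirror/devuan_cd/
```

Run from cron at whatever interval the walkthrough recommends; nginx (which greenjeans mentions at 00:02) then only needs to serve the mirror directory read-only.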
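
The Power-on Hours figures joerg and tempforever compare (04:15, 15:15) are SMART attribute 9, readable with smartmontools; /dev/sda below is just an example device.

```sh
# Print SMART attributes (requires smartmontools and root). Power_On_Hours
# (attribute 9) is the counter that can overflow or wrap on some firmware;
# Power_Cycle_Count is the one joerg's flaky Y-cable inflated.
sudo smartctl -A /dev/sda | grep -i -e power_on -e power_cycle
```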
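
bb|hcb's advice at 18:54 about RAID1 on the boot and OS drives maps to something like the Linux software RAID sketch below; the partition names and filesystem are assumptions, and a hardware controller would look entirely different. Vendor diversity (19:04) just means the two member drives should come from different manufacturers, or at least different batches.

```sh
# Minimal mdadm RAID1 sketch; /dev/sda1 and /dev/sdb1 are placeholder
# partitions, ideally on drives from different vendors/batches.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0                  # any filesystem will do
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # persist the array
```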
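
Finally, joerg's "2 million hours MTBF (228 years)" and "1k drives 1 year stresstest" remarks (20:15-20:18): MTBF is a fleet statistic, total drive-hours divided by failures, not a promise that any single drive lasts 228 years. A back-of-the-envelope version, with an invented failure count:

```sh
# 2,000,000 h MTBF expressed in years (~8766 h per year):
echo $((2000000 / 8766))       # => 228
# How a one-year fleet test could yield such a number (4 failures is made up):
# 1000 drives * 8766 h = 8,766,000 drive-hours
echo $((1000 * 8766 / 4))      # => 2191500, i.e. ~2.2 million hours MTBF
```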