| leitz | Pointer to docs on installing Devuan 5 on aarch64? | 16:09 |
|---|---|---|
| rustyaxe | https://arm-files.devuan.org/ retrieve appropriate image; Enjoy :) | 16:23 |
| leitz | rustyaxe, thanks! | 16:36 |
| leitz | I'm trying to get the RPis and laptops all on the same distro version. | 16:36 |
| * rustyaxe | nods | 16:36 |
| rustyaxe | makes life easier | 16:37 |
| rustyaxe | So i will say | 16:37 |
| rustyaxe | If you're on a devuan host | 16:37 |
| rustyaxe | Save yourself a little hassle and set up qemu so you can chroot into the sdcard and set it up before deploying | 16:37 |
| leitz | Haven't ever done that, doc? | 16:38 |
| rustyaxe | apt install binfmt-support qemu-system-arm | 16:38 |
| rustyaxe | then you should be able to bind mount /dev into the mountpoint; lemme see if can find some doc on it | 16:38 |
| rustyaxe | if it doesnt Just Work when chrooting in, you'll need to set up the qemu binfmt bit; but it should work out of the box these days | 16:39 |
| rustyaxe | a command such as for i in /usr/lib/binfmt.d/qemu-*; do cat $i > /proc/sys/fs/binfmt_misc/register; done | 16:40 |
| rustyaxe | will temporarily load them | 16:40 |
| rustyaxe | https://forums.raspberrypi.com/viewtopic.php?t=233691 seems to be some older doc on it - makes sense, as i found the trick a couple of years ago playing with qemu | 16:42 |
| rustyaxe | leitz: i dont think you need to copy the qemu binary; i dont recall having done that step; try without it, and if it doesnt work then proceed. | 17:07 |
| acak | This is a very quiet channel. | 23:09 |
| gnarface | yea, that's because our shit isn't always broken | 23:10 |
| gnarface | feel free to join #devuan-offtopic if you're bored | 23:10 |
| acak | All good, not looking for entertainment. Another Linux refugee just trying to get a sense of the long term viability of this project. Kicking the tires with the distro right now. | 23:12 |
| leitz | acak, I've been enjoying it for a bit. | 23:12 |
| leitz | I need to make a list of all my "want to" things in a distro, and make sure Devuan can do it. I think it can, but some of the odd stuff like webcam hasn't been tried. | 23:13 |
| acak | It has been a welcome relief to get away from systemd and bad GUIs, stuff I can't get away from on the day job. | 23:13 |
| leitz | For me the only real downer is the lack of first order arm support. Some folks are doing it, but it's not in the "main" supported list. | 23:14 |
| leitz | I heard systemd was going to replace sudo, that's...a hoot. | 23:15 |
| acak | Yeah, this embrace-and-extend strategy is getting ridiculous. Miss the old days: do one thing, and do it well. I always considered sudo a backdoor on its own, anyways. | 23:16 |
| leitz | sudo is a little better than people having root access, 'cause you can delete their account and they can't sudo anymore. | 23:19 |
| leitz | One place I worked used Puppet, and my last self-appointed operations task was to set up the puppet run to delete my account everywhere. :) | 23:19 |
| acak | With pam, at least, you can make sure people w/sudo privs get some kind of MFA, just don't like how it works out of the box. | 23:22 |
| acak | I'm a current puppet guy, but I don't use it for account management unless it's a mechanized account. | 23:23 |
| golinux | Maybe this conversation has morphed into something more suitable to #devuan-offtopic? | 23:24 |
| acak | Roger that, I'll shut up. | 23:25 |
| leitz | Well, I need to head out anyway, so I'll drop my talk. acak, I think you'll be happy with devuan. | 23:26 |
| acak | I appreciate the input! | 23:26 |
| leitz | I tried void, a rolling release, and didn't care for the "rolling" bit. Devuan is stable, and I like that. | 23:26 |
| ted-ious | acak: The long term viability of devuan looks to me like it's at least as good as debian's. | 23:27 |
| systemdlete | gnarface, onefang: Not too much to update re apt-cacher-ng, but what I have noticed is that if I start getting those failures during apt update/upgrade, I can actually "correct" the problem (it seems) by running the maintenance on apt-cacher-ng and selecting "force download of index files". | 23:49 |
| systemdlete | I'm thinking maybe this is not a mirror problem at all, but a problem with the cacher's own internal maintenance. | 23:50 |
| systemdlete | I am discovering (when I remember to do this!) that running the maintenance with the force index files download clears up the problem every time (at least that I can recall, so far). | 23:52 |
| gnarface | systemdlete: i think it's possible. somewhat related: i tried to upgrade a very old beowulf VM to chimaera the other day and it derailed due to some error from apt-cacher-ng about bad redirects. so, for the first time in a long time, i ran the maintenance jobs over the cache (having checked all the boxes and run it through multiple iterations), and it seemed to find a TON of stale paths the first few times, dwindling down until there were no more errors after about the 4th or 5th iteration... | 23:52 |
| gnarface | ... then when it ran out of stuff to complain about i tried the upgrade again and it went off without a hitch | 23:52 |
| systemdlete | yeah, I've gone through that exercise as well, gnarface | 23:52 |
| systemdlete | yep, yep. | 23:53 |
| systemdlete | exactly. | 23:53 |
| gnarface | i actually hadn't had to bother with doing that for years though | 23:53 |
| systemdlete | a regression (in the true sense of the word!) | 23:53 |
| gnarface | so i wonder if it's something particular about a specific sequence of updates to the mirrors that can cause these caches to get outdated in a particular way | 23:53 |
| gnarface | but it doesn't happen all the time, or only happens in some cases when the cache is very old ... not sure | 23:53 |
| gnarface | i suspect it's the type of thing that might need a cron job to actually properly address | 23:54 |
| systemdlete | When I was having the problem a few weeks ago while trying to upgrade a beowulf to chimaera (where it still stands; haven't gotten to daedalus yet, busy with some other things), I found that, ultimately, running the maintenance and forcing the update of the cacher's index files did the trick. | 23:54 |
| systemdlete | That is my observation also. It does seem kind of intermittent. | 23:55 |
| systemdlete | but even a cron job might not be 100% effective, either. | 23:55 |
| gnarface | yea, the mystery is why it doesn't happen every time there's old files in there... it usually seems smart enough to clean up after itself without any handholding | 23:55 |
| systemdlete | (I mean, I thought of that also. But depending on exactly when the indexes become out of sync, that solution will be hit-or-miss) | 23:56 |
| gnarface | i actually wonder if it was something in particular that happened in beowulf repos at that time | 23:56 |
| gnarface | something that hasn't happened or at least hasn't happened very often since | 23:56 |
| systemdlete | I just experienced the problem with today's kernel upgrade | 23:56 |
| gnarface | oh, hmm | 23:56 |
| systemdlete | on daedalus, not beowulf! | 23:56 |
| gnarface | alright then | 23:56 |
| systemdlete | and this VM running daedalus was not upgraded from beowulf | 23:57 |
| systemdlete | (just saying) | 23:57 |
| gnarface | ok | 23:57 |
| systemdlete | (so we don't go off in the wrong direction with this) | 23:57 |
| gnarface | yea | 23:57 |
| gnarface | i suspect someone will have to really trace apt-cacher-ng to figure out what it's doing wrong | 23:57 |
| gnarface | it's pretty old code after all, i don't think it's been getting updates | 23:58 |
| systemdlete | I think the ideal solution might be that when we run apt update locally on a client of the cacher, that it would trigger the cacher to check on its indices and take appropriate action then. | 23:58 |
| gnarface | see, i assumed it was doing that! | 23:58 |
| systemdlete | Unless it is already doing that... supposedly... | 23:58 |
| systemdlete | yea | 23:59 |
| gnarface | but maybe it's not always doing it, for some reason | 23:59 |
| systemdlete | Now, in my case, I've got about 6 or 7 different distros in my cacher, including openwrt and mx-linux | 23:59 |
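
The qemu chroot procedure rustyaxe sketches above (16:37–17:07) might look something like the following. This is only a sketch, assuming a typical two-partition sdcard image (p1 = boot, p2 = rootfs); the image name, mount point, and partition layout are placeholders, not anything confirmed in the log, and it must run as root.

```shell
#!/bin/sh
# Sketch: chroot into an aarch64 Devuan sdcard image from an x86_64 host,
# per rustyaxe's suggestion. Image name and partition layout are assumptions.
prepare_sdcard_chroot() {
    IMG=$1                  # e.g. a downloaded Devuan arm64 image (hypothetical name)
    MNT=${2:-/mnt/sdcard}   # assumed mount point

    # Attach the image as a loop device and scan its partition table
    LOOP=$(losetup --find --show --partscan "$IMG") || return 1
    mkdir -p "$MNT"
    mount "${LOOP}p2" "$MNT"        # assumed rootfs partition
    mount "${LOOP}p1" "$MNT/boot"   # assumed boot partition

    # Bind-mount the pseudo-filesystems the chroot needs (the /dev bind
    # mount rustyaxe mentions, plus the usual proc/sys companions)
    for fs in dev dev/pts proc sys; do
        mount --bind "/$fs" "$MNT/$fs"
    done

    # With qemu's binfmt handlers registered, this drops into an aarch64 shell
    chroot "$MNT" /bin/bash
}
```

Cleanup after exiting the chroot is the same steps in reverse: unmount the binds, unmount the partitions, then `losetup -d` the loop device.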
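
The 16:40 one-liner for registering the handlers can choke if a binfmt.d file contains comment lines, since only the `:name:type:...` registration strings are valid writes to `binfmt_misc/register`. A slightly more careful sketch (the `/usr/lib/binfmt.d/qemu-*` glob is taken straight from the log and may differ per install; note it is the user-mode qemu emulator that binfmt_misc invokes inside the chroot):

```shell
#!/bin/sh
# Sketch: manually register qemu's binfmt_misc handlers when the packaged
# hook hasn't done it. Feeds each registration string (lines beginning
# with ':') to the kernel one write at a time, skipping comment lines.
register_qemu_binfmts() {
    for f in /usr/lib/binfmt.d/qemu-*; do
        [ -e "$f" ] || continue
        while IFS= read -r line; do
            case $line in
                :*) printf '%s' "$line" > /proc/sys/fs/binfmt_misc/register ;;
            esac
        done < "$f"
    done
}
```

As rustyaxe says, these registrations are temporary; they do not persist across a reboot.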
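
For gnarface's cron-job idea (23:54), the maintenance run can be scripted instead of clicked through on acng-report.html. A sketch, assuming the `acngtool` helper path and `maint` verb as shipped by the Debian/Devuan apt-cacher-ng package's own daily cron script; verify both against your install before relying on it:

```shell
#!/bin/sh
# Sketch: trigger apt-cacher-ng's expiration/maintenance task from cron.
# The acngtool path and "maint" verb are assumptions -- check your package.
run_acng_maintenance() {
    ACNGTOOL=/usr/lib/apt-cacher-ng/acngtool
    [ -x "$ACNGTOOL" ] || { echo "acngtool not found" >&2; return 1; }
    "$ACNGTOOL" maint -c /etc/apt-cacher-ng
}
# Hypothetical /etc/crontab line (weekly, Sunday 04:30):
# 30 4 * * 0  root  /usr/lib/apt-cacher-ng/acngtool maint -c /etc/apt-cacher-ng
```

As noted in the log, a timed job is still hit-or-miss if the indexes go stale between runs, so this complements rather than replaces the forced index download.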
Generated by irclog2html.py 2.17.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!