Disabling WebRTC temporarily
To keep WebRTC from leaking your real IP address when using a proxy, disable it with this about:config setting:
media.peerconnection.enabled = false
You can verify the result with https://whoer.net.
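If you want the change to survive future profile tweaks, the same preference can be appended to the Firefox profile's user.js; the profile directory name here is a placeholder:
# Append the pref to the Firefox profile's user.js (adjust the profile path).
echo 'user_pref("media.peerconnection.enabled", false);' >> ~/.mozilla/firefox/XXXXXXXX.default/user.js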
Preserving for the future: Shell scripts, AoC, and more
My current setup for Radicale and Infcloud, my web interface to my calendars, depends on LDAP authentication at the reverse-proxy level.
I hacked the frontend in my branch of Infcloud to use the browser's JavaScript localStorage feature so I don't have to enter my username and password every time. Yes, it's insecure, and yes, I don't care.
Using my web calendar works very well. However, when I go to download an event (usually to email it to someone as an invitation), I get prompted with the browser's basic auth prompt. I got tired of having to do that, at least the first time in every session, and I wanted to find a better way. I use Kerberos (GSSAPI) authentication in other places on my web server, and I wanted to bring that here.
So I spent a bunch of time experimenting, and I learned that I didn't need to change the Infcloud or Radicale apps at all! Configuring the Apache httpd reverse proxy and my Radicale rights file was all I needed.
I only needed to change one line in my main config file: which auth.cnf file to include:
RewriteEngine On
RewriteRule ^/radicale$ /radicale/ [R,L]
<Location "/radicale/">
   ProxyPreserveHost On
   Include conf.d/auth-gssapi.cnf
   Require valid-user
   AuthName "GSSAPI protected"
   ProxyPass http://localhost:5232/ retry=20 connectiontimeout=300 timeout=300
   ProxyPassReverse http://localhost:5232/
   RequestHeader set X-Script-Name /radicale
</Location>
I added entries to auth-gssapi.cnf, which was mostly complete.
AuthType GSSAPI
GssapiUseSessions On
Session On
SessionCookieName s1_session path=/;
GssapiCredStore keytab:/etc/httpd/keytab
GssapiCredStore ccache:/etc/httpd/krb5.cache
SessionHeader S1SESSION
GssapiSessionKey file:/etc/httpd/gssapisession.key
GssapiImpersonate On
GssapiDelegCcacheDir /run/httpd/ccache
GssapiDelegCcachePerms mode:0660 gid:apache
GssapiUseS4U2Proxy On
GssapiAllowedMech krb5
GssapiBasicAuth On
GssapiBasicAuthMech krb5
GssapiLocalName On
GssapiNameAttributes json
AuthBasicProvider ldap
AuthLDAPGroupAttribute member
AuthLDAPSubGroupClass group
AuthLDAPGroupAttributeIsDN On
AuthLDAPURL "ldaps://dns1.ipa.internal.com:636 dns2.ipa.internal.com:636/cn=users,cn=accounts,dc=ipa,dc=internal,dc=com?uid,memberof,gecos?sub?(objectClass=person)"
# GSS_NAME returns username@IPA.EXAMPLE.COM which merely needs additional rules in /etc/radicale/rights
RequestHeader set X_REMOTE_USER "%{GSS_NAME}e"
# Does not work
#RequestHeader set X_GROUPS "%{AUTHENTICATE_memberOf}e"
# mostly useless values
#RequestHeader set X_REMOTE_GSS "%{GSS_NAME_ATTRS_JSON}e"
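A quick way to confirm the proxy-level GSSAPI auth without touching either app is curl's SPNEGO support; this assumes a valid Kerberos ticket, and the hostname here is illustrative:
# --negotiate with an empty user (-u :) uses the current Kerberos ticket cache
kinit user8
curl --negotiate -u : -I https://calendar.example.com/radicale/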
My Radicale setup uses /etc/radicale/rights to define the ACLs. The examples in the file are very useful. I merely needed to repeat entries and add the domain name.
# default, which was already here
[principal]
user: .+
collection: {user}
permissions: RW

# new entry
[principal-domain]
user: (.+)@IPA.INTERNAL.COM
collection: {0}
permissions: RW

# default
[calendars]
user: .+
collection: {user}/[^/]+
permissions: rw

# new entry
[calendars-domain]
user: (.+)@IPA.INTERNAL.COM
collection: {0}/[^/]+
permissions: rw

# Specific calendars
[user8-read-bgstack15-1]
user: user8
collection: bgstack15
permissions: R

[user8-read-bgstack15-2]
user: user8
collection: bgstack15/c86bcd9f-7526-8083-ca5c-c68bc664ae03
permissions: rwi

# new entries
[user8-read-bgstack15-1-domain]
user: user8@IPA.INTERNAL.COM
collection: bgstack15
permissions: R

[user8-read-bgstack15-2-domain]
user: user8@IPA.INTERNAL.COM
collection: bgstack15/c86bcd9f-7526-8083-ca5c-c68bc664ae03
permissions: rwi
Duplicating the entries is worth it to accomplish my goal of seamlessly downloading calendar events in my browser.
Fetched ffmpeg spec from Chinforpms pasture, fixed some rpm macros, compiled it, and installed it.
I use Jellyfin server on CentOS 7, which I install from a local yum repository after downloading the correct files from the upstream page. I recently upgraded from 10.7.6 to 10.8.9 and media playback broke for almost all video content.
To start the process, I made a backup of the data after stopping jellyfin.
# systemctl stop jellyfin
# tar -C /var/lib -zcf /tmp/jellyfin.$( date "+%F" ).$( rpm -q jellyfin ).tgz jellyfin
Then I yum updated and started jellyfin.
The problem was indicated in the server logs (journalctl -n500 -f -u jellyfin): Jellyfin needs the path set for ffmpeg. That's slightly believable. I ensured I had /usr/bin/ffmpeg and set that as the path in the web admin console, but the web UI indicated it could not find ffmpeg. The server logs indicated specifically that it could not find ffmpeg 4, so it was a version problem. I realize CentOS 7 is now 9 years old; vm4's hardware RAID controller does not have a driver for the Linux kernel used by RHEL 8 and above, at least as of when I installed that system.
Turns out rpmfusion only has ffmpeg version 3. I spent time looking for a way to get ffmpeg >= 4 on CentOS 7. I first investigated the famous negativo17 repository. I have used it in the past, with minor problems as some of the packages want to own the same files from other packages in the OS. negativo17's "epel-multimedia" repository has ffmpeg 5 in it!
I wanted to find another way to get ffmpeg without having to use that repository, if I could. I can use that repository if I have to. I checked another Fedora custom repackager that I follow, PhantomX, who did indeed package ffmpeg in the past! In fact, PhantomX had stopped about 3 years ago at version 4.2.2, which was perfect! This spec file needed only minor adjustments for rpm macros that do not exist in the version of rpm in CentOS 7 (PhantomX targets Fedora, and 3 years ago would have been Fedora 32 approximately), and thankfully all the dependencies were in rpmfusion or the base repositories.
So I manually built this ffmpeg 4 rpm on a CentOS 7 build node (it needed to be built on the target OS), deployed the files to my local yum repository, and ran yum upgrade ffmpeg. And then I could set the path to the program in the Jellyfin web UI, and my users can view videos again!
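A quick sanity check before pointing Jellyfin at the binary, using the path from above:
rpm -q ffmpeg                       # should now report a 4.x build
/usr/bin/ffmpeg -version | head -n1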
--- ffmpeg.spec	2023-02-26 12:12:22.250386837 -0500
+++ ffmpeg.spec.new	2023-02-25 22:45:15.043571569 -0500
@@ -4,6 +4,15 @@
 #global date  20180419
 #global rel   rc1
 
+%define ldconfig_post(n:) %{?ldconfig:%post -p %ldconfig %{?*} %{-n:-n %{-n*}}
+%end}
+%define ldconfig_postun(n:) %{?ldconfig:%postun -p %ldconfig %{?*} %{-n:-n %{-n*}}
+%end}
+%define ldconfig_scriptlets(n:) %{?ldconfig:
+%ldconfig_post %{?*} %{-n:-n %{-n*}}
+%ldconfig_postun %{?*} %{-n:-n %{-n*}}
+}
+
 # Cuda and others are only available on some arches
 %global cuda_arches x86_64
Eventually I will probably need ffmpeg 5. Perhaps I just won't update Jellyfin again. It already does exactly what I want, and upgrading it this time didn't even make anything better.
I needed to update my webhook template for new field names. Oh, and the app server needed to be restarted a few times to iterate through fetching the newer version of the webhook plugin, and enabling it, and using it, but it was painless.
All srpms and rpms are in /mnt/public/www/internal/repo/rpm/jellyfin-ffmpeg/.
From the apt-listchanges message for this month:
openssh (1:9.2p1-1) unstable; urgency=medium

  OpenSSH 9.2 includes a number of changes that may affect existing
  configurations:

   * ssh(1): add a new EnableEscapeCommandline ssh_config(5) option that
     controls whether the client-side ~C escape sequence that provides a
     command-line is available. Among other things, the ~C command-line
     could be used to add additional port-forwards at runtime. This
     option defaults to "no", disabling the ~C command-line that was
     previously enabled by default. Turning off the command-line allows
     platforms that support sandboxing of the ssh(1) client (currently
     only OpenBSD) to use a stricter default sandbox policy.

 -- Colin Watson <cjwatson@debian.org>  Wed, 08 Feb 2023 10:36:06 +0000
So I had to run this on all Devuan systems, to keep my ssh client command-line enabled.
sudo updateval -a -v /etc/ssh/ssh_config '^\s*EnableEscapeCommandline.*' 'EnableEscapeCommandline yes'
Where updateval is from my bgscripts-core package.
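For a system without bgscripts-core, a rough equivalent of that updateval invocation could look like this; I am assuming updateval's replace-or-append semantics here:
# Replace an existing EnableEscapeCommandline line, or append one.
f=/etc/ssh/ssh_config
if grep -qE '^[[:space:]]*EnableEscapeCommandline' "$f" ; then
   sudo sed -i -E 's/^[[:space:]]*EnableEscapeCommandline.*/EnableEscapeCommandline yes/' "$f"
else
   echo 'EnableEscapeCommandline yes' | sudo tee -a "$f" >/dev/null
fi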
Run this command.
ssh -D 8880 ${USER}@proxyserver -n -f -N
Set the SOCKS proxy in the browser settings to localhost, port 8880.
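To verify traffic actually flows through the tunnel, curl can use the same SOCKS port; --socks5-hostname also resolves DNS through the proxy:
# The page should report proxyserver's IP address, not your own.
curl --socks5-hostname localhost:8880 https://whoer.net/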
The overall goal is to have all possible DNS requests go to my recursive servers.
$ dig -t NS ipa.internal.com
;; ANSWER SECTION:
ipa.internal.com.        604800  IN  NS  dns2.ipa.internal.com.
ipa.internal.com.        604800  IN  NS  dns1.ipa.internal.com.

;; ADDITIONAL SECTION:
dns1.ipa.internal.com.   604800  IN  A   192.168.1.50
dns2.ipa.internal.com.   604800  IN  A   192.168.1.51
The dns3 host is a FreeIPA domain replica but does not have DNS+DHCP on it as of 2023-02.
Redirect all outbound DNS requests to my DNS servers. This is done by setting a command on router1.
DNS="192.168.1.50" iptables -t nat -I PREROUTING -i br0 -p udp --dport 53 -j DNAT --to "${DNS}:53" iptables -t nat -I PREROUTING -i br0 -p udp -s "${DNS}" --dport 53 -j ACCEPT test -f /jffs/doh-ipv4 && sh /jffs/doh-ipv4 test -f /jffs/doh-ipv6 && sh /jffs/doh-ipv6
I added this to the "firewall command" of the router: web UI -> tab Administration -> tab Commands.
I modified dns1 named.conf to include some logging of queries:
channel queries_log {
    file "/var/named/queries" versions 600 size 20m;
    print-time yes;
    print-category yes;
    print-severity yes;
    severity info;
};
category queries { queries_log; };
This goes inside the logging{} section (reference 6).
This experiment was successful. On dns1, /var/named/queries shows the queries being submitted.
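A quick test from a LAN client confirms the redirect; 1.1.1.1 is just an arbitrary outside resolver here:
# The query claims to target 1.1.1.1, but the DNAT rule steers it to dns1,
# so the name should appear in /var/named/queries on dns1.
dig @1.1.1.1 example.com +short
ssh root@dns1 tail -n 5 /var/named/queries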
I grabbed a 128MB USB flash drive (yes, MB). I enabled USB support in the web UI: tab Services -> tab USB -> enable Core USB Support, and mount this partition to /jffs: 581af4db-8dfc-41af-9e8b-f612bd32508c
I also enabled jffs2 support in the web UI: tab Administration -> tab Management -> section JFFS2 Support -> Internal flash storage enabled
Some commands I ran on router1:
fdisk -l   # I already had a partition on an msdos label, but it was not formatted yet
mkfs.ext4 /dev/sda1
modprobe ext4
mount /dev/sda1 /jffs
This appears to work persistently after reboots.
I set up the blocking script and run it on the dd-wrt router. The goal: manually copy up the lists of IPv4 (and IPv6) DoH servers to be blocked, and add firewall rules to disallow connections to those servers.
echo '#!/bin/sh' > ~/doh-ipv4
for ip in $( <doh-ipv4.txt awk '{print $1}' ) ; do
   echo "iptables -I FORWARD -p tcp -d ${ip} --dport 443 -j REJECT --reject-with tcp-reset"
done >> ~/doh-ipv4
# copy it to router1
<~/doh-ipv4 ssh root@router1 'cat > /jffs/doh-ipv4'
ssh root@router1 chmod +x /jffs/doh-ipv4
echo '#!/bin/sh' > ~/doh-ipv6
for ip in $( <doh-ipv6.txt awk '{print $1}' ) ; do
   echo "ip6tables -I FORWARD -p tcp -d ${ip} --dport 443 -j REJECT --reject-with tcp-reset"
done >> ~/doh-ipv6
# copy it to router1; scp was acting weird so use a stream
<~/doh-ipv6 ssh root@router1 'cat > /jffs/doh-ipv6'
ssh root@router1 chmod +x /jffs/doh-ipv6
I still need to set up a cron job script for doing all this automatically. For now, I have to run these steps manually. I suppose the script would pull the latest contents from the doh list git repo, generate the script, upload it, and optionally run it. I have not pondered how to prevent duplicate entries yet.
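A sketch of what that cron job could look like, assuming the list files live in a git working copy at ~/doh-list (the path and file names are illustrative). sort -u at least de-duplicates the generated rules, though re-running the uploaded script would still insert duplicate firewall rules:
#!/bin/sh
# Hypothetical update-doh-block.sh: refresh the lists, regenerate, upload, run.
cd ~/doh-list && git pull --quiet
for ver in ipv4 ipv6 ; do
   case "${ver}" in ipv4) cmd=iptables ;; ipv6) cmd=ip6tables ;; esac
   {
      echo '#!/bin/sh'
      <"doh-${ver}.txt" awk '{print $1}' | sort -u | while read -r ip ; do
         echo "${cmd} -I FORWARD -p tcp -d ${ip} --dport 443 -j REJECT --reject-with tcp-reset"
      done
   } > ~/doh-"${ver}"
   <~/doh-"${ver}" ssh root@router1 "cat > /jffs/doh-${ver}"
   ssh root@router1 "chmod +x /jffs/doh-${ver} && sh /jffs/doh-${ver}"
done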
The alternative: just allow all DNS traffic to the outside, which loses control of my network.
I have a customized Photoprism instance. I need to document it more thoroughly, but this is the description of my backup process.
Of course it's in cron. File /etc/cron.d/70_photoprism_cron includes this line:
45 06 * * * photoprism /home/photoprism/photoprism/bup-pp-db.sh cron 1>/dev/null 2>&1
files/2023/02/listings/bup-pp-db.sh (Source)
#!/bin/sh
# File: bup-pp-db.sh
# Locations: vm4:/home/photoprism/photoprism/
# Author: bgstack15
# Startdate: 2023-01-01-1 20:55
# Title: Photoprism backup script
# Purpose: Make single command that bups Photoprism
# History:
# Usage:
#    in cron entry, probably in
# Documentation:
#    In 2022-12, I moved the directories database/ and storage/ that Photoprism uses back to vm4 local filesystems. To back up the app, I just need to run this script with cron.
#    This process should only take just over 1 minute
# Dependencies:
#    photoprism user is in group docker.
#    docker-compose
#    /home/photoprism/photoprism/docker-compose.yml
#    gzip
# Reference:
#    https://docs.photoprism.app/getting-started/advanced/backups/
#    logging heavily inspired by photoprism/autoimport.sh
workdir="$( dirname "$( readlink -f "${0}" 2>/dev/null )" 2>/dev/null || echo "${PWD}" )"
#echo "workdir=${workdir}"
test -z "${CONFFILE}" && CONFFILE="${workdir}/bup-pp-db.conf"
test -e "${CONFFILE}" && . "${CONFFILE}"
test -z "${LOGFILE}" && LOGFILE="/mnt/public/Support/Systems/vm4/var/log/bup-pp-db.$( date "+%F" ).log"

main() {
   which lecho 1>/dev/null 2>&1 || alias lecho=echo
   lecho "START bup-pp-db.sh"
   test -z "${OUTDIR}" && OUTDIR=/mnt/public/Support/Systems/vm4/pp/photoprism
   echo "using OUTDIR=${OUTDIR}"
   test "${USER}" != "photoprism" && { echo "For best results, run ${0} as user photoprism. Pausing for 10 seconds..." 1>&2 ; sleep 10 ; }
   # test network mount
   test -w "${OUTDIR}" || { echo "Unable to write to path ${OUTDIR}. Is it mounted? Aborted." ; exit 1 ; }
   # Docker-compose must run from the directory where docker-compose.yml exists
   cd /home/photoprism/photoprism
   docker-compose exec -T photoprism photoprism backup -i - | gzip > "${OUTDIR:-/mnt/public/Support/Systems/vm4/pp/photoprism}/photoprism-db.sql.$( date "+%F" ).gz"
   lecho "STOP bup-pp-db.sh"
}

# Determine if this script was dot-sourced
sourced=0
if [ -n "$ZSH_EVAL_CONTEXT" ]; then
   case $ZSH_EVAL_CONTEXT in *:file) sourced=1;; esac
elif [ -n "$KSH_VERSION" ]; then
   [ "$(cd $(dirname -- $0) && pwd -P)/$(basename -- $0)" != "$(cd $(dirname -- ${.sh.file}) && pwd -P)/$(basename -- ${.sh.file})" ] && sourced=1
elif [ -n "$BASH_VERSION" ]; then
   (return 0 2>/dev/null) && sourced=1
else
   # All other shells: examine $0 for known shell binary filenames
   # Detects `sh` and `dash`; add additional shell filenames as needed.
   case ${0##*/} in sh|dash) sourced=1;; esac
fi

# So, if not dot-sourced, and this is run by cron, add logging
if test $sourced -eq 0; then
   if echo " ${@} " | grep -q cron ; then
      main 2>&1 | plecho | tee -a "${LOGFILE}"
      printf '\n' | tee -a "${LOGFILE}"
   else
      main
   fi
fi
:
The script refers to a conf file but I don't use one. It's there for easy modification of paths and so on without having to modify the script. Of course, since I'm backing up to a network directory, it checks for the network mount before running the backup command.
Most of the shell script is wrapping around the bare command
docker-compose exec -T photoprism photoprism backup -i - | gzip > $OUTPUTFILE
which is documented in the Photoprism docs.
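The restore is the inverse pipeline per those same docs; this is a sketch, so verify the restore flags against your Photoprism version before relying on it:
# Feed a compressed SQL dump back into the index database.
gunzip -c photoprism-db.sql.2023-02-26.gz | docker-compose exec -T photoprism photoprism restore -i -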
To learn which repos need the info corrected:
cd ~/dev/sync-git/
for word in * ; do
   ( cd "${word}" ; git log --pretty=format:'%an,%ae' | sort -u | sed -r -e "s/^/${word},/;" ; )
done | grep -E 'localhost|example' > ~/fix3
I inspected ~/fix3 and found two projects that had incorrect email addresses.
Now, prepare a script, ~/fix-git-commits.sh.
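A sketch of such a script, using git filter-branch --env-filter; the OLD_EMAIL and NEW_* values are placeholders:
#!/bin/sh
# Hypothetical ~/fix-git-commits.sh: rewrite author/committer identity on
# every commit on every branch (tags are handled manually, below).
git filter-branch --force --env-filter '
OLD_EMAIL="bgstack15@localhost"
NEW_NAME="bgstack15"
NEW_EMAIL="bgstack15@example.com"
if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ] ; then
   GIT_AUTHOR_NAME="$NEW_NAME" ; GIT_AUTHOR_EMAIL="$NEW_EMAIL"
   export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL
fi
if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ] ; then
   GIT_COMMITTER_NAME="$NEW_NAME" ; GIT_COMMITTER_EMAIL="$NEW_EMAIL"
   export GIT_COMMITTER_NAME GIT_COMMITTER_EMAIL
fi
' -- --branches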
In the two repositories that needed to have info corrected, run this script.
cd ~/dev/project1
sh ~/fix-git-commits.sh
Then, you can just force-push each branch to each remote.
I also had to manually re-tag each tag. I just matched the exact commit time and ran:
git tag --force v2.1 330b3e91f357f54668c54d5e543a7f3138b77ad7
And then I force-pushed the tags to each remote.
git push --tags local --force
git push --tags cloud --force
The goal of this document is to describe how the FreeIPA installation for ipa.internal.com was configured for automount.
The default location was used.
These steps were taken. It was very simple once I knew the exact syntax of --info, which includes the starting dash, the filesystem mount options, a space, and then the NFS export name.
ipa automountmap-add-indirect default auto.net --mount=/net
ipa automountkey-add default auto.net --key='*' --info="-fstype=nfs,rw,noatime,nosuid,rsize=1048576,wsize=1048576 server3:/var/server3/shares/&"
I had to follow the manual configuration steps documented by Red Hat, even after running the ipa-client-automount utility.
sudo apt-get install autofs
sudo ipa-client-automount --location=default --unattended
sudo updateval -v /etc/nsswitch.conf 'automount:.*' 'automount: sss files'
This has been turned into script ipa-client-automount.sh.
files/2023/02/listings/ipa-client-automount.sh (Source)
#!/bin/sh
# File: ipa-client-automount.sh
# Author: bgstack15
# Startdate: 2023-01-11-4 20:13
# SPDX-License-Identifier: GPL-3.0
# Title: Devuan ipa-client-automount helper
# Purpose:
# History:
# Usage:
# Reference:
#    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-automount#Configuring_Automount-Configuring_autofs_on_Linux
# Improve:
# Documentation:
#    see also /mnt/public/Support/Systems/dns1/automount-for-mersey.md
#    the Red Hat docs describe how to do this manually. ipa-client-automount is supposed to do it all, but it does not (function modify_nsswitch_pam_stack from ipaplatform/base/tasks.py)
# temp,2 for 2023-01 timeframe, I hope
echo "deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20221001T092433Z/ unstable main contrib" | sudo tee /etc/apt/sources.list.d/snapshot.list
sudo apt-get update
sudo apt-get install autofs
# temp,2 for 2023-01 timeframe, I hope
sudo apt-get install python3-cryptography=3.4.8-2
sudo ipa-client-automount --location=default
echo "${0}: updating nsswitch.conf because ipa-client-automount from package DOES NOT!"
sudo updateval -a /etc/nsswitch.conf 'automount:.*' 'automount: sss files'
#sudo service sssd restart # done as part of official ipa-client-automount
sudo service autofs restart
\ls -alF --color=always /net/public/Support
# temp,3 for 2023-01 timeframe, I hope
# because if ls was successful, we can comment out the snapshot archive
test $? -eq 0 && sudo sed -i -r -e '/archive\/debian\/20221001T/s/^deb/#/;' /etc/apt/sources.list.d/snapshot.list
FreeIPA has the ability to show the equivalent flat-file snippets.
$ ipa automountlocation-tofiles default
/etc/auto.master:
/-      /etc/auto.direct
/net    /etc/auto.net
---------------------------
/etc/auto.direct:
---------------------------
/etc/auto.net:
*       -fstype=nfs,rw,noatime,nosuid,rsize=1048576,wsize=1048576      server3:/var/server3/shares/&

maps not connected to /etc/auto.master:
Due to some python3 errors, the ipa-client-automount.sh script has a few extra steps in it for now to use snapshot.debian.org from 2022-10-01 and install python3-cryptography=3.4.8-2.
These steps are documented by Red Hat but apparently not required on my Devuan clients.
For Devuan, I tested with apt-get install autofs-ldap, but that seemed unnecessary.
Add to /etc/default/autofs:
MAP_OBJECT_CLASS="automountMap"
ENTRY_OBJECT_CLASS="automount"
MAP_ATTRIBUTE="automountMapName"
ENTRY_ATTRIBUTE="automountKey"
VALUE_ATTRIBUTE="automountInformation"
LDAP_URI="ldap:///dc=ipa,dc=internal,dc=com"
Modify file /etc/autofs_ldap_auth.conf:
<?xml version="1.0" ?>
<autofs_ldap_sasl_conf
        usetls="no"
        tlsrequired="no"
        authrequired="yes"
        authtype="GSSAPI"
        clientprinc="host/d2-03a.ipa.internal.com@IPA.INTERNAL.COM"
/>
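To confirm autofs sees the maps on a client, automount can dump them, and an ls exercises the mount (path as used in the script above):
sudo automount -m
\ls -alF /net/public/Support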
Recently, package fontconfig updated to 2.14.1 in Debian and therefore Devuan.
Some default font selections changed, which caused my fonts to be taller and have more line spacing. Affected applications and components include xfce4-terminal (yes, yes, on Fluxbox; user freedom and all that...), the fluxbox menu, and gtk menus.
After some fruitless Internet searches, I decided to ask the smart people in #devuan. A smart fellow said this came up in the Debian forum, which led me to a Google Groups discussion which has a workaround.
I adapted the workaround into a single script because I like to make a single fix a single command. I could have made the script check the file before modifying it, but whatever. It is still idempotent.
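The script boils down to something like the following sketch; the exact file and font families are assumptions based on the symptom, under the premise that restoring the pre-2.14.1 defaults (the DejaVu families) is the goal:
#!/bin/sh
# Hypothetical sketch: prefer the DejaVu families again, at the user level.
# Writing the same content every run keeps it idempotent.
mkdir -p ~/.config/fontconfig
cat > ~/.config/fontconfig/fonts.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <alias><family>sans-serif</family><prefer><family>DejaVu Sans</family></prefer></alias>
  <alias><family>serif</family><prefer><family>DejaVu Serif</family></prefer></alias>
  <alias><family>monospace</family><prefer><family>DejaVu Sans Mono</family></prefer></alias>
</fontconfig>
EOF
fc-cache -f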