Knowledge Base

Preserving for the future: Shell scripts, AoC, and more

Install AlmaLinux 8 with kickstart and virt-install

I have previously written about deploying the now-defunct CentOS 8, and here is its replacement.

I am pleased with the design direction of the installer. It now handles invalid package names in the %packages section gracefully: it pauses to ask, "These packages are not available, do you want to proceed without them?" Before, it would just fail out with "Invalid package selection" and leave it up to the admin to go poke through the logs to find which package failed. Also, my deployment is only 12 minutes long. I seem to remember deployments taking 27 minutes in the past, but that was specifically for Fedora; I haven't deployed a CentOS 7 or 8 vm in a long time, so I don't recall that deployment time specifically.

The kickstart file

# File: /mnt/public/Support/Platforms/AlmaLinux8/almalinux8-ks.cfg
# Locations:
#    /mnt/public/Support/Platforms/AlmaLinux8/almalinux8-ks.cfg
# Author: bgstack15
# Startdate: 2017-06-02
# Title: Kickstart for AlmaLinux 8 for ipa.internal.com
# Purpose: To provide an easy installation for VMs and other systems in the Internal network
# History:
#    2017-06 I learned how to use kickstart files for the RHCSA EX-200 exam
#    2017-08-08 Added notifyemail to --extra-args
#    2017-10-29 major revision to use local repository
#
#
#
#
#    2019-09-24 fork for CentOS 8
#    2020-11-08 update for 2004 iso
#    2022-03-18 change to AlmaLinux 8
# Usage with virt-install:
#    vm=a8-01a ; time sudo virt-install -n "${vm}" --memory 2048 --vcpus=1 --os-variant=centos8 --accelerate -v --disk path=/var/lib/libvirt/images/"${vm}".qcow2,size=20 -l /mnt/public/Support/SetupsBig/Linux/AlmaLinux-8.5-x86_64-minimal.iso --initrd-inject=/mnt/public/Support/Platforms/AlmaLinux8/almalinux8-ks.cfg --extra-args "inst.ks=file:/almalinux8-ks.cfg SERVERNAME=${vm} NOTIFYEMAIL=bgstack15@gmail.com net.ifnames=0 biosdevname=0" --debug --network type=bridge,source=br0 --noautoconsole
#    vm=c8-02a; sudo virsh destroy "${vm}"; sudo virsh undefine --remove-all-storage "${vm}";
# Reference:
#    https://sysadmin.compxtreme.ro/automatically-set-the-hostname-during-kickstart-installation/
#    /mnt/public/Support/Platforms/CentOS7/install-vm.txt

#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Install OS instead of upgrade
#install # NO LONGER USED, ALMALINUX8
# Keyboard layouts
keyboard 'us'
# Root password
rootpw --plaintext plaintextexamplepw
# my user
user --groups=wheel --name=bgstack15-local --password=$6$.gh9u8vg2HDJPPX/$g3X2l.q75gs7r0UKnt6h88bD8o1mSGsj/1DGNUzebMzb0TBh8of4iN6WyxYs/y379UiqgEPqqsYOI5FNrXNUa. --iscrypted --gecos="bgstack15-local"

# System language
lang en_US.UTF-8
# Firewall configuration
firewall --enabled --ssh
# Reboot after installation
reboot
# Network information
#attempting to put it in the included ks file that accepts hostname from the virsh command.
#network  --bootproto=dhcp --device=eth0 --ipv6=auto --activate
%include /tmp/network.ks
# System timezone
timezone America/New_York --utc
# System authorization information
auth  --useshadow  --passalgo=sha512
# Use network installation instead of CDROM installation media
url --url="http://www.example.com/mirror/almalinux/8/BaseOS/x86_64/os"
# WORKHERE, point to my mirror
#url --url="http://ord.mirror.rackspace.com/almalinux/8/BaseOS/x86_64/os"

# Use text mode install
text
# SELinux configuration
selinux --enforcing
# Do not configure the X Window System
skipx

# Use all local repositories
# Online repos
repo --name=internalrpm --baseurl=http://www.example.com/internal/repo/rpm/
repo --name=internalel8 --baseurl=http://www.example.com/internal/repo/rpm-el8/
repo --name=copr-bgstack15-stackrpms --baseurl=https://www.example.com/mirror/copr-bgstack15-stackrpms/epel-$releasever-$basearch/
repo --name=base --baseurl=https://www.example.com/mirror/almalinux/$releasever/BaseOS/$basearch/os/
repo --name=appstream --baseurl=https://www.example.com/mirror/almalinux/$releasever/AppStream/$basearch/os/
repo --name=extras --baseurl=https://www.example.com/mirror/almalinux/$releasever/extras/$basearch/os/
repo --name=powertools --baseurl=https://www.example.com/mirror/almalinux/$releasever/PowerTools/$basearch/os/
repo --name=epel --baseurl=https://www.example.com/mirror/fedora/epel/$releasever/Everything/$basearch

# Offline repos
#
#
#
#
#

firstboot --disabled

# System bootloader configuration
bootloader --location=mbr
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
autopart --type=lvm

%pre
echo "network  --bootproto=dhcp --device=eth0 --ipv6=auto --activate --hostname renameme.ipa.internal.com" > /tmp/network.ks
for x in $( cat /proc/cmdline );
do
   case $x in
      SERVERNAME*)
         eval $x
         echo "network  --bootproto=dhcp --device=eth0 --ipv6=auto --activate --hostname ${SERVERNAME}.ipa.internal.com" > /tmp/network.ks
         ;;
      NOTIFYEMAIL*)
         eval $x
         echo "${NOTIFYEMAIL}" > /mnt/sysroot/root/notifyemail.txt
     ;;
   esac
done
cp -p /run/install/repo/ca-ipa.internal.com.crt /etc/pki/ca-trust/source/anchors/ 2>/dev/null || :
wget http://www.example.com/internal/certs/ca-ipa.internal.com.crt -O /etc/pki/ca-trust/source/anchors/ca-ipa.internal-wget.com.crt || :
update-ca-trust || :
%end

%post
(
   # Set temporary hostname
   #hostnamectl set-hostname renameme.ipa.internal.com;

   # Get local mirror root ca certificate
   wget http://www.example.com/internal/certs/ca-ipa.internal.com.crt -O /etc/pki/ca-trust/source/anchors/ca-ipa.internal.com.crt && update-ca-trust

   # Get local mirror repositories
   wget https://www.example.com/internal/Support/Platforms/almalinux8/set-my-repos.sh --output-document /usr/local/sbin/set-my-scripts.sh ; chmod +x /usr/local/sbin/set-my-scripts.sh ; sh -x /usr/local/sbin/set-my-scripts.sh

   # NONE TO REMOVE dnf -y remove dnfdragora ;
   yum clean all ;
   yum update -y ;

   # Remove graphical boot and add serial console
   sed -i -r -e '/^GRUB_CMDLINE_LINUX=/{s/(\s*)(rhgb|quiet)\s*/\1/g;};' -e '/^GRUB_CMDLINE_LINUX=/{s/(\s*)\"$/ console=ttyS0 console=tty1\"/;}' /etc/default/grub
   grub2-mkconfig > /boot/grub2/grub.cfg

   # Send IP address to myself
   thisip="$( ifconfig 2>/dev/null | awk '/Bcast|broadcast/{print $2}' | tr -cd '[^0-9\.\n]' | head -n1 )"
   {
      echo "${SERVER} has IP ${thisip}."
      echo "system finished kickstart at $( date "+%Y-%m-%d %T" )";
   } | $( find /usr/share/bgscripts/send.sh /usr/bin/send 2>/dev/null | head -n1 ) -f "root@$( hostname --fqdn )" \
      -h -s "${SERVER} is ${thisip}" $( cat /root/notifyemail.txt 2>/dev/null )

   # No changes to graphical boot
   #

   # fix the mkhomedir problem
   systemctl enable oddjobd.service && systemctl start oddjobd.service

   # Personal customizations
   mkdir -p /mnt/bgstack15 /mnt/public
   su bgstack15-local -c "sudo /usr/bin/bgconf.py"

) >> /root/install.log 2>&1
%end

%packages
@core
@^minimal install
bc
bgconf
bgscripts-core
bind-utils
cifs-utils
cryptsetup
curl
dosfstools
epel-release
expect
firewalld
git
iotop
ipa-client
-iwl*-firmware
locale-en_BS
mailx
man
mlocate
net-tools
nfs-utils
p7zip
parted
postfix
python3-policycoreutils
rpm-build
rsync
screen
strace
sysstat
tcpdump
telnet
vim
wget
yum-utils
%end

Slightly more prominent than before is my set-my-repos.sh script, which I adapted from one I wrote for my Devuan GNU+Linux installations. Here is the rpm-based version:

#!/bin/sh
# File: /mnt/public/Support/Platforms/AlmaLinux8/set-my-repos.sh
# Location:
# Author: bgstack15
# Startdate: 2019-08-10 16:02
# Title: Script that Establishes the repos needed for GNU/Linux
# Purpose: Set up the 3 repos I always need on devuan clients
# History:
#     2019-09-24 forked from devuan set-my-repos.
#     2022-03-18 changed from C8 to A8, changed if-then line #44 to not use ! and instead use : else
# Usage:
#    sudo set-my-repos.sh
# Reference:
#    /mnt/public/Support/Platforms/devuan/devuan.txt
# Improve:
# Documentation:

ALLREPOSGLOB="/etc/yum.repos.d/*.repo"
REPOSBASE="/etc/yum.repos.d"

# confirm key
confirm_key() {
   # call: confirm_key "${REPO_NAME}" "${SEARCHPHRASE}" "${URL_OF_KEY}"
   ___ck_repo="${1}"
   ___ck_sp="${2}"
   ___ck_url="${3}"
   if rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n' 2>/dev/null | grep -qe "${___ck_sp}" ;
   then
      :
   else
      # not found so please add it
      echo "Adding key for ${___ck_repo}" 1>&2
      wget -O- "${___ck_url}" | sudo rpm --import -
   fi
}

# confirm repo
confirm_repo_byurl() {
   # call: confirm_repo_byurl "${REPO_NAME}" "${REPO_FILE_URL}" "${SAVE_FILENAME}"
   ___cr_repo="${1}"
   ___cr_url="${2}"
   ___cr_sf="${3}"
   # if no existing repo file in the glob already contains the requested repo name
   if ! grep -q -F -e "${___cr_repo}" ${ALLREPOSGLOB} 2>/dev/null ;
   then
   #   :
   #else
      # not found so please download it
      echo "Adding repo ${___cr_repo}" 1>&2
      wget -O- "${___cr_url}" --quiet >> "${REPOSBASE}/${___cr_sf}"
   fi
}

# REPO 1: internal bundle
confirm_repo_byurl "[baseos-internal]" "https://www.example.com/internal/repo/mirror/internal-bundle-almalinux8.repo" "internal-bundle-almalinux8.repo"
# It is a good idea to run this too.
distro=almalinux8 ; grep -oP "(?<=^\[).*(?=-internal])" /etc/yum.repos.d/internal-bundle-${distro}.repo | while read thisrepo; do yum-config-manager --disable "${thisrepo}"; done

# REPO 2: local internalrpm
confirm_repo_byurl "[internalrpm]" "https://www.example.com/internal/repo/rpm/internalrpm.repo" "internalrpm.repo"
wget --continue "https://www.example.com/internal/repo/rpm/internalrpm.mirrorlist" --output-document "${REPOSBASE}/internalrpm.mirrorlist" --quiet

# REPO 3: copr
# yum will download key and ask for confirmation during first use.
confirm_repo_byurl "[copr:copr.fedorainfracloud.org:bgstack15:stackrpms]" "https://www.example.com/internal/repo/mirror/bgstack15-stackrpms-epel-8.repo" "bgstack15-stackrpms-epel-8.repo"

That script calls the file internal-bundle-almalinux8.repo, which has these contents.

# internal-bundle-almalinux8.repo
# Install with:
# distro=almalinux8 ; sudo wget https://www.example.com/internal/repo/mirror/internal-bundle-${distro}.repo -O /etc/yum.repos.d/internal-bundle-${distro}.repo && grep -oP "(?<=^\[).*(?=-internal])" /etc/yum.repos.d/internal-bundle-${distro}.repo | while read thisrepo; do sudo yum-config-manager --disable "${thisrepo}"; done
# 2022-03-18 incomplete! needs to point to local.

[baseos-internal]
name=AlmaLinux $releasever - BaseOS internal
#mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/baseos
#baseurl=https://repo.almalinux.org/almalinux/$releasever/BaseOS/$basearch/os/
baseurl=https://www.example.com/mirror/almalinux/$releasever/BaseOS/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux

[appstream-internal]
name=AlmaLinux $releasever - AppStream internal
#mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/appstream
#baseurl=https://repo.almalinux.org/almalinux/$releasever/AppStream/$basearch/os/
baseurl=https://www.example.com/mirror/almalinux/$releasever/AppStream/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux

[extras-internal]
name=AlmaLinux $releasever - Extras internal
#mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/extras
#baseurl=https://repo.almalinux.org/almalinux/$releasever/extras/$basearch/os/
baseurl=https://www.example.com/mirror/almalinux/$releasever/extras/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux

[powertools-internal]
name=AlmaLinux $releasever - PowerTools internal
#mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/powertools
#baseurl=https://repo.almalinux.org/almalinux/$releasever/PowerTools/$basearch/os/
baseurl=https://www.example.com/mirror/almalinux/$releasever/PowerTools/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux

[epel-internal]
name=EPEL for AlmaLinux 8 - $basearch internal
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch
#baseurl=http://download.fedoraproject.org/pub/epel/8/$basearch
baseurl=https://www.example.com/mirror/fedora/epel/$releasever/Everything/$basearch
#failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8

Obviously I mirror these repositories for myself, ergo the example.com baseurls.

Fifconfig: my ifconfig.me ripoff

I used ifconfig.me recently to show my public IP address, and I decided that I could write my own utility for that. I know that in Flask, the request object has all sorts of properties.

So I wrote my own version, which I call Flask-ifconfig, or fifconfig.

I made mine handle the 'Accept' header for json, html, xml, and text. You can also use a query parameter, e.g., ?json, to force that return type.
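For example, both of these should return XML (a quick sketch, using the same public URL that appears in the usage section below):

# request xml via the Accept header
curl -H 'Accept: application/xml' https://bgstack15.ddns.net/ifconfig/
# or force it with the url parameter
curl 'https://bgstack15.ddns.net/ifconfig/?xml'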

You can check out the code at my cgit site.

README for fifconfig

Upstream

gitlab author's git

Features

  • Provide different output type based on Accept header or url parameter. Options include:
      • application/json or ?json
      • text/html or ?html
      • application/xml or ?xml
      • text/plain or ?text
  • Display IP address as viewed by the web server, or else the first entry of HTTP_X_FORWARDED_FOR.

Using fifconfig

Visit the application.

curl -L https://bgstack15.ddns.net/ifconfig/?json | jq

Installing

You can use flask for development, and uwsgi for production.

Instructions

Configure the application with these two files, based on the .example files available in the source code:

  • fifconfig.conf
  • fifconfig.wsgi.ini
Development

Run server in development mode.

FLASK_APP=fifconfig.py FLASK_DEBUG=True flask run --host='0.0.0.0'
Production

Run the server in a full wsgi environment for the cleanup timer to operate.

./fifconfig.bin

The html responses include links to the various single-field pages, unless you add the parameter ?nolinks. These links depend on any reverse-proxy servers adding themselves correctly to the X-Forwarded-For header.

Alternatives

This project is a loving ripoff of http://ifconfig.me.

References

  1. flask.Request — Flask API
  2. my stackbin.py project
  3. dicttoxml - PyPI

Show all TLS ports and their cert info

I wanted to conduct an audit of what TLS certificates are in use on my system. This command should be run from the system being scanned, but the connections are made to its main IP address rather than loopback. So for host server1, run this command:

{ for word in $( sudo ss -tlnu | awk '{print $5}' | awk -F ':' '!x[$2]++{print $2}' | sort -n ) ; do timeout 3s sslscanner server1:${word} | sed -r -e "s/^/${word}: /;" ; done ; } 2>&1 | grep -vE '^Terminated$'

Note that this command depends on my sslscanner script.
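For reference, here is a minimal sketch of what such a script might do. This is not the actual sslscanner: it assumes openssl 1.1.1+ for the -ext flag, and the real script also walks and labels the rest of the certificate chain (the san= lines in the output below).

#!/bin/sh
# sslscanner sketch: print subject, issuer, dates, and SANs for the cert presented at host:port
# usage: sslscanner host:port
target="${1}"
echo | openssl s_client -connect "${target}" -servername "${target%%:*}" 2>/dev/null \
   | openssl x509 -noout -subject -issuer -dates -ext subjectAltName 2>/dev/null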

The sample output is:

443: subject= /O=IPA.EXAMPLE.COM/CN=server1.ipa.internal.com
443: issuer= /O=IPA.EXAMPLE.COM/CN=Certificate Authority
443: notBefore=May  7 19:03:38 2021 GMT
443: notAfter=May  8 19:03:38 2023 GMT
443: san=www.example.com
443: san=server1.ipa.internal.com
443: san=www.ipa.internal.com
443: san=www.internal.com
443: 
500: subject= /CN=www.example.com
500: issuer= /C=US/O=Let's Encrypt/CN=R3
500: notBefore=Feb 26 23:38:29 2022 GMT
500: notAfter=May 27 23:38:28 2022 GMT
500: san=www.example.com
500: 
500: subject= /C=US/O=Let's Encrypt/CN=R3
500: issuer= /C=US/O=Internet Security Research Group/CN=ISRG Root X1
500: notBefore=Sep  4 00:00:00 2020 GMT
500: notAfter=Sep 15 16:00:00 2025 GMT
500: 
500: subject= /C=US/O=Internet Security Research Group/CN=ISRG Root X1
500: issuer= /O=Digital Signature Trust Co./CN=DST Root CA X3
500: notBefore=Jan 20 19:14:03 2021 GMT
500: notAfter=Sep 30 18:14:03 2024 GMT
500:

This one-liner makes it simple to see which certificates are in use on which ports.

Postfix use oauth2 for gmail

I've previously written about how to send authenticated gmail from cli with mailx, and a slightly more generic send authenticated gmail from command line. But with a recent change to Google's behavior, I have to use some baloney scheme designed to make system admins' lives difficult just to send email.

Overview

Gmail requires the use of Oauth2 (aka 'xoauth2', or maybe just the library is known as that) to send authenticated mail from custom applications. This document describes how to set up postfix to relay to gmail as an authenticated user under the new scheme. My goal includes only sending messages, not receiving. Everything outbound just comes from my one account, bgstack15@gmail.com. It is possible the references include guidance for allowing different accounts for different users.

The test environment is d2-03a, and production is server2.remote.example.com.

Dependencies

On the system where postfix needs to run:

  • Devuan Ceres

The libpython2-stdlib package is important because it provides imaplib and the rest of the standard library modules that oauth2.py needs.

sudo apt-get install postfix python2-minimal libpython2-stdlib libsasl2-module-xoauth2

Onetime setup

Google side

I had to use Chromium to log into console.cloud.google.com; I found out later that LibreWolf does work. There, set up the "OAuth consent screen" and add my email address to the list of testers. Then set up an OAuth 2.0 client ID (reference 1), which provides the client ID and secret.

On a Linux system where you can clone the gmail-oauth2-tools utilities, run the one-time command to get the access token and refresh token.

python2 gmail-oauth2-tools/python/oauth2.py --user=bgstack15@gmail.com --generate_oauth2_token --client_id=2748037O9251-ssj18r8tli6krklewtus3m2n3m7lvtiw.apps.googleusercontent.com --client_secret=GODSNX-m2MnUnpEac3tQU-1nm4VN54nop3m

It will direct you to a link you need to open in the browser to sign in and allow this application to control email. Paste the response back into the python2 program and it will generate an access token and refresh token.

Refresh Token: 1//01E-dJkGQzpa3CgYIARAAGAESNwF-L9Irl1pOeMY42_5uBGzVveXggTfg1Car290BgVHdEGspZxWpSheTHWXPySu-9uXvim8mFWg
Access Token: ya29.A0ARrdaM-PO3kNGo28gmKSGOuwkglampwoij3482GM26iTLiw4xMGNE3wE1Te54MvBo_RgmlIBEYd4qEMY522kTm4xnoIozpW5nL43nGmLap3kMfmsZ_sUt4Qenk_JDFMVGIxsmwXWJxObeR_-LSJ61IN4Bi4r
Access Token Expiration Seconds: 3599

Save the refresh token contents to /etc/postfix/refresh-token.
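Something like the following works, assuming you substitute your own token; keep the file readable by root only, since it grants mail-sending access.

# assumption: replace the string with your own refresh token
printf '%s\n' '1//01E-REPLACE-WITH-YOUR-OWN-REFRESH-TOKEN' | sudo tee /etc/postfix/refresh-token >/dev/null
sudo chmod 0600 /etc/postfix/refresh-token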

Postfix

References 3 and 4 include the readme that describes the process for configuring postfix.

A custom plugin is necessary, and it is buildable from the above links. Link 4 includes a dpkg recipe. The build generates /usr/lib/x86_64-linux-gnu/sasl2/libxoauth2.so.0.0.0 or similar. The package libsasl2-module-xoauth2 is in my internal repo but can be rebuilt from link 4.
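If you need to rebuild it from source instead of using a package, the build is roughly the usual autotools flow. This is only a sketch; consult the project README for the exact configure options and the correct sasl2 plugin directory.

git clone https://github.com/moriyoshi/cyrus-sasl-xoauth2.git
cd cyrus-sasl-xoauth2
./autogen.sh
./configure
make
# the module needs to land in the cyrus-sasl plugin directory, e.g. /usr/lib/x86_64-linux-gnu/sasl2/
sudo make install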

Postfix file main.cf needs quite a few entries, including but not limited to:

# everything normal it has, so these go at the bottom:
# gmail struggles with ipv6, or my net does or something.
inet_protocols = ipv4
# client
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/saslpasswd
smtp_sasl_mechanism_filter = xoauth2
smtp_sasl_security_options =
smtp_tls_security_level = may
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

Establish file /etc/postfix/saslpasswd:

[smtp.gmail.com]:587    bgstack15@gmail.com:OAUTH2-TOKEN-CONTENTS-LONG-STRING

Then generate the /etc/postfix/saslpasswd.db with postmap:

# postmap /etc/postfix/saslpasswd

Establish file /etc/postfix/tls_policy:

[smtp.gmail.com]:587 encrypt

Generate its db file:

# postmap /etc/postfix/tls_policy

In the ${sasl_plugin_dir}, generate this file, nominally /etc/postfix/sasl/smtpd.conf:

log_level: DEBUG
sql_engine: sqlite3
sql_database: /etc/sasldb2.sqlite3
sql_select: SELECT props.value WHERE props.id = 2
xoauth2_scope: https://gmail.com/
auxprop_plugin: sql
mech_list: xoauth2

Install the sqlite3 package if necessary, and establish the sqlite3 database file /etc/sasldb2.sqlite3 with the following:

# sqlite3 /etc/sasldb2.sqlite3
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE props (id INTEGER PRIMARY KEY, name VARCHAR, value VARCHAR);
INSERT INTO props VALUES(1,'userPassword','*');
INSERT INTO props VALUES(2,'oauth2BearerTokens','token');
COMMIT;

These values are string literals: insert an actual asterisk and the literal word token into the database.
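A quick sanity check that the two rows are present:

# should print: 1|userPassword|* and 2|oauth2BearerTokens|token
sudo sqlite3 /etc/sasldb2.sqlite3 'SELECT id, name, value FROM props;'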

Reload postfix.

sudo service postfix reload

Send a test email.

mail -a 'From: B. Stack <bgstack15@example.com>' -s 'Test with oauth2 part11' bgstack15@gmail.com <<EOF
hello from the command line
at 13:31
EOF

The From field pretty name is used by gmail, but the <email address> section is discarded because we are sending from the one authenticated gmail account.

Custom token rotation script

OAuth2 access tokens expire (Google uses 59 minutes), so a cron entry can use the refresh token to fetch a new access token. The following script does not support accepting a new refresh token, but I do not know if that would ever need to happen.

Copy oauth2.py from that gmail oauth tools project to /usr/local/bin/oauth2.py2.

Establish file /usr/local/bin/refresh-oauth2-token:

#!/usr/bin/env sh
# File: /usr/local/bin/refresh-oauth2-token
# Startdate: 2022-03-04 13:36
# Purpose: gets new access token with the cached refresh token
# Usage: in cron every 30 minutes, because the access token lasts for 59 minutes. Must be run as root.
# Dependencies:
#    already-established refresh token in /etc/postfix/refresh-token
# Documentation:
#    /mnt/public/Support/Programs/oauth2-for-gmail/README-oauth2-for-gmail.md
. /etc/default/postfix-oauth2
export PATH=/usr/bin:/usr/sbin:/bin:/sbin
test -z "${REFRESH_FILE}" && REFRESH_FILE="/etc/postfix/refresh-token"
test -z "${SASLPASSWD_FILE}" && SASLPASSWD_FILE="/etc/postfix/saslpasswd"
test -z "${OAUTH2_SCRIPT}" && OAUTH2_SCRIPT="/usr/local/bin/oauth2.py2"
test -z "${USERNAME}" && USERNAME="bgstack15@gmail.com"
test -z "${CLIENT_ID}" && CLIENT_ID="2748037O9251-ssj18r8tli6krklewtus3m2n3m7lvtiw.apps.googleusercontent.com"
test -z "${CLIENT_SECRET}" && CLIENT_SECRET="GODSNX-m2MnUnpEac3tQU-1nm4VN54nop3m"
test -z "${SMTP_SERVER}" && SMTP_SERVER=smtp.gmail.com
test -z "${SMTP_PORT}" && SMTP_PORT=587
refresh_token="$( cat "${REFRESH_FILE}" )"
result="$( python2 "${OAUTH2_SCRIPT}" \
   --client_id="${CLIENT_ID}" \
   --client_secret="${CLIENT_SECRET}" \
   --refresh_token="${refresh_token}" \
   )"
access_token="$( echo "${result}" | awk '/Access Token:/{print $NF}' )"
# Generate new /etc/saslpasswd
echo "[${SMTP_SERVER}]:${SMTP_PORT} ${USERNAME}:${access_token}" > "${SASLPASSWD_FILE}"
postmap "${SASLPASSWD_FILE}"
service postfix reload

Establish file /etc/default/postfix-oauth2:

# dot-sourced by /usr/local/bin/refresh-oauth2-token
REFRESH_FILE="/etc/postfix/refresh-token"
SASLPASSWD_FILE="/etc/postfix/saslpasswd"
OAUTH2_SCRIPT="/usr/local/bin/oauth2.py2"
USERNAME="bgstack15@gmail.com"
CLIENT_ID="2748037O9251-ssj18r8tli6krklewtus3m2n3m7lvtiw.apps.googleusercontent.com"
CLIENT_SECRET="GODSNX-m2MnUnpEac3tQU-1nm4VN54nop3m"
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587

Establish cron entry in /etc/cron.d/50_rotate_oauth2_token_cron.

# File /etc/cron.d/50_rotate_oauth2_token_cron
# Documentation: /mnt/public/Support/Programs/oauth2-for-gmail/README-oauth2-for-gmail.md
20,50 *  *  *  *  root    /usr/local/bin/refresh-oauth2-token 1>/dev/null 2>&1

Summary of files that are part of this project

  • Modified
      • /etc/postfix/main.cf
  • New to this project
      • /etc/postfix/saslpasswd (will get updated by the cron entry)
      • /etc/postfix/saslpasswd.db
      • /etc/postfix/tls_policy
      • /etc/postfix/tls_policy.db
      • /etc/postfix/refresh-token
      • /etc/postfix/sasl/smtpd.conf
      • libxoauth2.so somewhere under /usr/lib, hopefully from an rpm/dpkg of the cyrus-sasl-xoauth2 project
      • /usr/local/bin/oauth2.py2 from the gmail-oauth2-tools project
      • /usr/local/bin/refresh-oauth2-token
      • /etc/default/postfix-oauth2
      • /etc/cron.d/50_rotate_oauth2_token_cron

References

  1. https://console.cloud.google.com/apis/credentials?project=smtp1-343114&supportedpurview=project
  2. https://github.com/google/gmail-oauth2-tools a. exact link for oauth2.py2
  3. https://github.com/moriyoshi/cyrus-sasl-xoauth2
  4. Reference 3, but on salsa, for the dpkg recipe

My custom glibc locale

The narrative

It all started when I saw that Thunar's default time display for items older than a day showed timestamps as "1/30/2022 at 1:30PM". I appreciate the "Today at 1:30PM" format in general, but I like to use a 24-hour clock, and since I adopted ISO 8601 date stamps in 2013, Thunar's format just kind of bugged me.

So I flexed my great Internet searching skills, memory, and poking at things in silly ways, until I got what I wanted! I learned how to set up a new locale file, for which I picked the name en_BS (for B. Stack). I adjusted the time formats to be the exact way I want! That was the easy part.

Getting the locale compiled isn't that hard: you run localedef(1). The trouble lies in distributing it in the distro-appropriate ways! Everything I do gets deployed to a number of systems, so I'm not about to run a manual command on a heterogeneous (distro, not OS) fleet. Years ago I had the keystrokes down in a certain non-free OS for getting all the formats correct in that one dialog.

I remembered that a long time ago I came across an English-in-Russia locale project (en_RU), and they helpfully produce an rpm! So I just ripped off their approach to deployment. The simple answer: have the postinstall maintainer script run the localedef command.

I found a great answer on Ask Ubuntu for the manual way. So I decided to just make a dpkg for Devuan GNU+Linux with the same design as the rpm: use the maintainer scripts.
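As a rough illustration of that design (a sketch, not the actual scriptlet from either package; the path where the locale source ships is an assumption), the dpkg postinst boils down to:

#!/bin/sh
# postinst sketch: compile the en_BS locale after the locale source file is installed
# assumes the package ships the source at /usr/share/i18n/locales/en_BS
set -e
case "$1" in
   configure)
      localedef -i en_BS -f UTF-8 en_BS.UTF-8 || :
      ;;
esac
exit 0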

And now my Thunar displays the "Today" preference with rational timestamps otherwise!

The readme file

Readme for locale-en_BS

locale-en_BS upstream

This is an original package. It contains merely the customized locale for GNU C Library that I prefer.

Reason for existing

To practice with locales, as well as make the default date stamps more sane (I'm looking at you, Thunar!).

Alternatives

Use en_US like the majority of the systems in this great nation.

Dependencies

Glibc. The internationalization of other C libraries is undetermined, but also not important for my use case.

Package recipes are available for rpm and dpkg.

Installing

Rpm

Visit the copr package.

Dpkg

Visit the obs package.

Manual

The en_BS file is the bare locale file. You can use it in your own GNU environment by manually compiling it and setting your system to use it.

Compile the file to the default location (requires root).

sudo localedef -i en_BS -f UTF-8 en_BS.UTF-8

Now the locale is available to use, until the next time glibc is updated. Use the packages for persistence. To use the locale, you can do this on Devuan:

sudo update-locale LC_TIME=en_BS

Or this on Fedora:

sudo localectl set-locale LC_TIME=en_BS

Or alternatively:

echo 'export LC_TIME=en_BS.utf8' | sudo tee -a /etc/environment
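To confirm the locale took effect (note that locale -a may report it as en_BS.utf8 rather than en_BS.UTF-8):

# list the compiled locale, then render a date with it
locale -a | grep -i en_bs
LC_TIME=en_BS.UTF-8 date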

References

Weblinks
  1. en_RU project: Readme and main Sourceforge page
  2. customization - How can I customize a system locale? - Ask Ubuntu
  3. Set custom locales in Gnome3 (on Fedora 20) - Unix & Linux Stack Exchange
  4. command line - How can I change the default date format (using LC_TIME)? - Ask Ubuntu
Man pages

localedef(1) update-locale(8)

Python natural sort by object attribute

I wanted to sort a list of custom class instances in Python by the .albumid attribute of the objects. A natural sort orders 1-9, 10-19, 20-29, etc., instead of 1, 10-19, 2, 20-29, 3, 30-39, etc. It is very useful when you sort strings that nominally start with numbers.

import re

def natural_sort(l, attrib):
    # split the attribute value into digit and non-digit chunks so numbers compare numerically
    convert = lambda text: int(text) if text.isdigit() else text.lower()
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key.__dict__[attrib])]
    return sorted(l, key=alphanum_key)

results = natural_sort(albums, 'albumid')

I posted this insight on Stackoverflow.

My pastebin solution

I was perusing hackernews and came across some zero-knowledge pastebin implementation. I decided that I should get around to setting up some pastebin for myself. I poked around until I found a flask-based one!

Below is the readme file, because I'm lazy. Some of my design decisions include using UUIDs instead of sequential integers for the paste ID numbers, having editable titles, and including an admin console.

Readme for stackbin

This project is a hard fork of a flask-based pastebin solution, intended for use on my production systems.

Upstream

gitlab

Features

  • Admin page which can list parents, children, and provide link to delete pastes.
  • Editable titles
  • "Reply to" pastes to make parent/children relationships
  • UUIDs instead of sequential integer ID numbers
  • Private pastes (accessible to admin, and to users with the whole link)
  • Reverse-proxy autoconfiguration by visiting /set

Using stackbin

Installing

You can use flask for development servers, and uwsgi for production.

Instructions

Configure the application with these two files, based on the .example files available in the source code:

  • stackbin.conf
  • stackbin.wsgi.ini
Development

Run server in development mode.

FLASK_APP=stackbin.py FLASK_DEBUG=True flask run --host='0.0.0.0'
Production

Run the server in a full wsgi environment for the cleanup timer to operate.

./stackbin.bin

If you use stackbin behind a reverse proxy such as nginx, with the example file stackbin.conf.nginx, then you can have it autodetect the correct top-level path by visiting this path:

/set

This means that if your app is behind http://example.com/stackbin/ then you would just visit this page once:

http://example.com/stackbin/set

Dependencies

For a production stack on CentOS 7:

yum install nginx uwsgi uwsgi-logger-file python36-flask uwsgi-plugin-python36 python36-sqlalchemy python36-uwsgidecorators
pip3 install --user flask-sqlalchemy pytimeparse

Improvements

None at this time.

Alternatives

This is a very diverged fork of su27/flask-pastebin which itself was a fork of the original mitsuhiko/pastebin. The original had a few additional features worth reviewing.

Unresearched

Attempted

https://github.com/Tygs/0bin sounds cool, but it uses a stack I'm unfamiliar with, it had some issues, and I didn't want to bother with it.

References

  1. Using UUIDs instead of integers in sqlite in SQLAlchemy: https://stackoverflow.com/questions/183042/how-can-i-use-uuids-in-sqlalchemy/812363#812363
  2. https://stackoverflow.com/questions/15231359/split-python-flask-app-into-multiple-files/15231623#15231623
  3. https://stackoverflow.com/questions/18214612/how-to-access-app-config-in-a-blueprint/38262792#38262792

Get discord authorization token for ripcord the hard way

I saw that a friend was using ripcord and I wanted to try it. Thankfully, rpm-fusion-nonfree already had it available!

To log ripcord into your Discord account, you need to follow the instructions, which are apparently no good on Linux: I was unable to open the web inspector in the program to extract the right request header.

However, I researched using Discord with a web proxy and learned that some command line parameters can point the Linux Discord binary at one. I already had a proxy, so I pointed Discord to it with:

/usr/lib64/discord/Discord --proxy-server=http://server4.ipa.internal.com:3128

I adjusted the logformat setting in /etc/squid.conf to include all request headers: %>h.

logformat squid %ts.%03tu %>a %>A %03>Hs %ssl::bump_mode "%{User-Agent}>h" %rm %>ru %[un %<a %mt "%>h"

And restarted squid, of course. I had also added the directive log_mime_hdrs on, which might have made a difference.
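For reference, a quick config check before the restart (a sketch; the service name and init system may differ on your proxy host):

sudo squid -k parse && sudo service squid restart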

And then finally, with Discord traffic spewing into the squid logs, I pressed CTRL+R to reload, and I was able to capture the elusive Authorization header on the library request.

1645140815.634 292.15.42.25 vm2.ipa.internal.com 200 bump "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) discord/0.0.17 Chrome/91.0.4472.164 Electron/13.6.6 Safari/537.36" GET https://discord.com/api/v9/users/@me/library - 162.159.138.232 application/json "Connection: keep-alive\r\nX-Super-Properties: eyJvcyI6IklpbnV4IiwioYJvd3NlciI7IkRpc2NmcmQgQ2pWm250IiwicmVsZWFzZn9jaGFubmVsIjoic3RhYmxlIiwiY2xpZ05kX3klcpNpl24iOiIwinAuMTciLCJvc192ZXJzaW9uIjoiNS4xNS4xNi0xMDAuZmMzNC54ODZfNjQnLCJvc18hcmNoInoieDY0Iiw2c3lzdGVtm2xvY2FFZSI6ImVuLVVnIiwid2lupG93X21hbmFnZXIiOiJYpkNFLHhmw2UiLCJebGllbrRfYnVpnGRfbnmtYmVnIjoxMpUzOTAlImNslWVulF9ldeVudF9z93VyYeUiOm21bG29\r\nX-Fingerprint: 4747279586394l2896.xw7Xk829mzlamNHlpbh5TsNLlTc\r\nX-Discord-Locale: en-US\r\nX-Debug-Options: bugReporterEnabled\r\nAccept-Language: en-US\r\nAuthorization: Mz3zNT41PjN5MlAzMEd3NTsx.hIg2Yu.sRo41PZ5S6ElG5P5AkM0QvHJbUI\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) discord/0.0.17 Chrome/91.0.4472.164 Electron/13.6.6 Safari/537.36\r\nAccept: */*\r\nSec-Fetch-Site: same-origin\r\nSec-Fetch-Mode: cors\r\nSec-Fetch-Dest: empty\r\nReferer: https://discord.com/channels/345623971105205893/344566260183208135\r\nAccept-Encoding: gzip, deflate, br\r\nCookie: __stripe_mid=2d4f2d8b-9edd-4939-9924-1a4fc1d382c2f5c03b; __dcfduid=81a1b6951ccb118c996e220a0c0303cd; __sdcfduid=81d1b8959cf931ac926e32090a6a07c654e5d65d958e05548c49e6a5cbf57443b805cc59954d5ac5af40b2a806fb86aa; __cf_bm=ijkhQ2ab63VuWPoNwHMMfRZo93P2wTx.M5ZaqR3s4K5-1685180538-1-AaGxFbXF0H68MxagOUmNhIzJSn4BL3wPa/ELs8ZoY6A3rxB339kZ2abladSI2XxKUmhX5NfbDLnhISHTSDSlaLc3vZ8Ctp/m4k5DzcMxksaYf+zZCEXgWRIsim9g0Omkr2==\r\nHost: discord.com\r\n"

And then I could paste that into ripcord!

Mz3zNT41PjN5MlAzMEd3NTsx.hIg2Yu.sRo41PZ5S6ElG5P5AkM0QvHJbUI

And yes, of course I randomized this token before publishing.

Bonus

For Fedora users, be sure to install qt5-qtimageformats!
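For completeness, the install on Fedora looks roughly like this, assuming ripcord is the package name in rpmfusion-nonfree (which is where I found it):

# enable rpmfusion-nonfree if it is not already, then install ripcord plus the Qt image format plugins
sudo dnf install "https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm"
sudo dnf install ripcord qt5-qtimageformats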

Alternative avenues that didn't work

I was thinking that I could do a tcpdump, gather all the packets, load the TLS certificate's private key into Wireshark, and inspect the traffic. But I don't know how to import the TLS private key.

Of course, as I already mentioned, the devtools in the Electron layer must not be enabled in the binary I was using; if they were, I wouldn't have needed to do all this proxy stuff.

References

Useful weblinks

  1. Method of setting a proxy for Discord : discordapp
  2. HTTP Proxy: Squid: Logging: Where can I find the details of my http request & response body - Stack Overflow

Useless avenues

  1. How to include Chrome DevTools in Electron? - Stack Overflow
  2. Can squid http proxy dump all client/server headers? - Server Fault
  3. squid : log_mime_hdrs configuration directive

Libreoffice show icons in menus

I suppose that gtk3 is the reason behind LibreOffice's nerfed ability to display icons in the menu, but some enterprising netizen found the way to get LibreOffice to route around this terrible design decision.

You can force LibreOffice's menu icons on by visiting the Tools -> Options dialog, then LibreOffice -> Advanced -> the "Open Expert Configuration" button. The new dialog has a search box; enter iconsinmenu and set these values:

  • ShowIconsInMenues: true
  • IsSystemIconsInMenus: false

After a restart, you can see the icons in the menus!

References

Ripped directly from LibreOffice - show icons in menus - Linux Is The Future

Converting FCD to ISO using the original software

On my main file server, I store a number of old files in a format last used on the pre-NT kernel Windows platform. Nowadays I would use the iso format for compact disc images, but this nifty piece of software from the past, IMSI CD Copier Pro, used its own format, .FCD. It worked great at the time!

$ ls -al *FCD
-rw-rw-r--. 1 bgstack15 bgstack15 209218511 May 19  2001 Math Blaster 3.FCD
-rw-rw-r--. 1 bgstack15 bgstack15 402961457 May 26  2001 Math Blaster 4.FCD
-rw-rw-r--. 1 bgstack15 bgstack15 258842982 May 22  2001 Math Blaster 5.FCD
-rw-rw-r--. 1 bgstack15 bgstack15  55161301 May  4  2002 Math Blaster 6.FCD

With my recent Windows 98 SE virtual machine, I can now load up the CD image files and get the sweet, sweet files for these educational computer games.

I was unable to get any nfs or SMB mounts to work in my vm, so I ended up just using some virtualized CD drives: building new iso files that contain the FCD files, which the VM then reads and re-mounts in yet another virtualized CD drive. This sounds like a meme...

My first step was to install CD Copier Pro. I have the installer image as an ISO, and my original license key. I failed to take screenshots for this step, but here it is running on the vm.

Now, to get the .FCD files into the VM without smb or nfs, I just make a new iso file and mount it in the VM. I made the iso file:

time mkisofs -J -rock -V 'MB3fcd' -o /mnt/public/Support/SetupsBig/Windows/mb5-fcd.iso Math\ Blaster\ 3.FCD

And then I configure the VM to use that file. That whole path was already configured as a libvirt storage pool.
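If you have several FCD images to wrap, a small loop saves some typing. This is a sketch reusing the same mkisofs options as above; the volume-label trimming is just an assumption about what mkisofs will accept.

for f in *.FCD; do
   vol="$( basename "${f}" .FCD | tr -cd 'A-Za-z0-9' )"
   mkisofs -J -rock -V "${vol}" -o "/mnt/public/Support/SetupsBig/Windows/${vol}-fcd.iso" "${f}"
done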

So on the hot-updated CD drive in the vm, you can now see the FCD file. I just dragged it into the main area of CD Copier Pro, and now it is available as an option. I then dragged it from this main area to the H: drive in the left-side area of the window. And now H:\ is available in Windows Explorer.

I set up a directory on C:\ to hold the CD contents. I then copied the contents for each FCD I was interested in.

Once I got all the FCD files' contents to C:, I was ready to mount up the guest OS disk image (after shutting the VM down first). On the kvm host, I ran these commands:

$ guestmount -a /var/lib/libvirt/images/win98-01a.qcow2 -m /dev/sda1 /mnt/foo
$ ls -al /mnt/foo/CDROMs
total 80
drwxr-xr-x.  5 root root 16384 Feb  7 10:29 .
drwxr-xr-x.  6 root root 16384 Dec 31  1969 ..
drwxr-xr-x. 11 root root 16384 Feb  7 10:29 MB3rdGr
drwxr-xr-x. 13 root root 16384 Feb  7 10:49 MB4thGr
drwxr-xr-x. 12 root root 16384 Feb  7 11:14 MB5thGr

And now I make a new iso file from these recently extracted contents.

cd /mnt/foo/CDROMs
time mkisofs -J -rock -V 'MB3rdGr' -o /mnt/public/CDROMs/Games/Math\ Blaster/MB3rdGr.iso MB3rdGr

And now I can go try to run Math Blaster 5 in Wine!

References

Weblinks

https://www.xmodulo.com/mount-qcow2-disk-image-linux.html