Knowledge Base

Preserving for the future: Shell scripts, AoC, and more

Ipa sudorule all commands

It was never entirely clear to me how to write a sudo rule with "ALL" as the command set. I'm sure this is documented somewhere offline or on the Internet, but here is my cheat sheet for next time.

To grant user3 full sudo access on host server2:

ipa sudorule-add 'user3-server2-root'
ipa sudorule-add-host 'user3-server2-root' --hosts server2
ipa sudorule-add-user 'user3-server2-root' --users 'user3'
ipa sudorule-add-runasuser 'user3-server2-root' --users 'root'
ipa sudorule-mod 'user3-server2-root' --cmdcat='all'
ipa sudorule-add-option 'user3-server2-root' --sudooption '!authenticate'

The key piece is --cmdcat, which is short for command category: setting it to "all" is the equivalent of granting "ALL" commands instead of listing specific ones.
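
To sanity-check the result, ipa sudorule-show prints the rule; output abbreviated here, and the exact field names vary slightly by IPA version:

ipa sudorule-show 'user3-server2-root'
  Rule name: user3-server2-root
  Enabled: TRUE
  Command category: all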

Normalizing my contact data

Now that my address book lives in my self-hosted CalDAV+CardDAV solution, I wanted to update how its contacts are displayed. InfCloud has some defaults, and I wanted to change them.

I store my contacts with Google Voice numbers labeled as such, and displaying these in InfCloud's main display is possible if you normalize the data. I had used various custom names for that phone field, such as "google voice" and "GrandCentral."

Inside my collection directory, I ran this command.

sed -i -r -e '/ABLABEL:/{s/google ?voice|grand ?central/googlevoice/gi;}' *.vcf
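
To preview which vCards a substitution like that would touch, before committing with -i, a grep along these lines works (adjust the pattern to whatever custom labels you actually used):

grep -l -i -E 'ABLABEL:.*(google ?voice|grand ?central)' *.vcf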

And then I set this variable in my config.js.

var globalCollectionDisplay = [
   { label: '{Name}', value: ['{LastName}, ','{FirstName}'] },
   { label: '{Email}', value: ['{Email[:0]}'] },
   { label: '{Email} 2', value: ['{Email[:1]}'] },
   { label: '{Phone} 1', value: ['{Phone[:0]}'] },
   { label: 'GV', value: ['{Phone[type=googlevoice]}'] }
];

Flash PERC for drive passthrough

I recently purchased a used Dell PowerEdge T320 with a PERC H710 RAID controller. This controller model does not offer drive passthrough as an option in the firmware, but I had chosen my hardware carefully because there is a way to flash the controller to allow drive passthrough.

Following the process from References 2 and 3 was straightforward! I will replicate the instructions here in abbreviated form in case upstream dies.

Instructions from fohdeesha.com

Ensure there is only one LSI-based adapter in your system. If there are others besides the adapter you intend to flash, remove them! You also need to disable a few BIOS settings. This step is not optional. In your server BIOS, disable all of the following:

Processor Settings > Virtualization Technology
Integrated Devices > SR-IOV Global Enable
Integrated Devices > I/OAT DMA Engine

Note: If you're flashing a full-size card on a non-Dell system, such as an AMD-based desktop or server, make sure you find any BIOS settings related to IOMMU and Virtualization, and disable them.

You also must set the server boot mode to BIOS, not UEFI:

Boot Settings > Boot Mode > Set to BIOS

When you're finished with this guide, don't forget to go back and enable Virtualization, as well as SR-IOV if you plan to use it. Switch boot mode back to UEFI as well if you were using it previously. But only once you've finished the guide!

Remove the RAID battery from the adapter. The IT firmware has no cache for the battery to back; in fact, the IT firmware will have no clue the battery is there if you leave it connected. To make matters worse, in rare cases some people have observed the battery holding old Dell code in the card's RAM, which made their crossflash process a pain. Just unplug/remove the battery and store it somewhere in case you return to Dell firmware.

Download the Dell Perc Flashing ZIP (md5sum 42a87bd496f6d2e3aa1d8f2afe3cc699, version 2.1 last updated 2022-04-08).

Boot to the FreeDOS ISO image flashed to a flash drive. Ensure that this command:

C:\>info

Returns the right values:

Product Name    : PERC H710 Mini
ChipRevision    : D1
SAS Address     : [whatever; save this value]

For example, a SAS address looks like 5b83a730cf784b32.

The upstream instructions for this model of card continue on the H710 page; I continue them below in this post.

Cleaning the card

Still in FreeDOS, run the following command to wipe the flash on the card and get rid of all Dell firmware. This will also flash the required SBR:

BIGD1CRS

Follow the prompts. If it finishes without error, it's time to reboot into Linux. Get the Linux live ISO from the ZIP ready to boot from, then tell FreeDOS to reboot:

reboot

Linux Time

You should now be booted into the Linux ISO from the ZIP. Use the following credentials to log in: user/live

We highly recommend SSH'ing to the live ISO so you can copy/paste commands and not have to use the iDRAC virtual console. To do so, run the following to find the IP of the install:

ipinfo

It should spit out an IP. SSH to it, using the same user/live credentials. This is not required and you can continue on using the iDRAC (or physical) console, but it will be slightly more inconvenient.

Flashing IT Firmware

Now, still in Linux, we need to change to the root user:

sudo su -

Now we run the flashing script. Issue the following command to begin the process:

D1-H710

It should automatically do everything required to flash the card. If you don't get any unexpected errors and it completes, we need to reboot and program the SAS address back to finish. See the following note.

Note: For some reason, the very first boot after crossflashing the card will cause a kernel panic - I believe it's iDRAC not letting go of something (I was able to see the card put in a fault state via the debug UART when this happens). This only happens the first reboot after crossflashing. When you boot back into the live ISO and get the panic, either let it reboot itself, or use iDRAC to force a reboot. After that boot back into the live ISO again and all will be well.

Programming SAS address back

Once rebooted back into the live Linux image, run the following commands, filling in the example address with the SAS address you noted down earlier:

sudo su -
setsas 500605b123456777

It should succeed without errors. That's it! You can run the following command to get some info about your new card. You should be able to see your SAS address and the same firmware version:

info

    Controller Number              : 0
    Controller                     : SAS2308_2(D1)
    PCI Address                    : 00:02:00:00
    SAS Address                    : 0000000-0-0000-0000
    NVDATA Version (Default)       : 14.01.00.06
    NVDATA Version (Persistent)    : 14.01.00.06
    Firmware Product ID            : 0x2214 (IT)
    Firmware Version               : 20.00.07.00
    NVDATA Vendor                  : LSI
    NVDATA Product ID              : SAS9207-8i
    BIOS Version                   : N/A
    UEFI BSD Version               : N/A
    FCODE Version                  : N/A
    Board Name                     : SAS9207-8i
    Board Assembly                 : N/A
    Board Tracer Number            : N/A

Optional: Boot Images

Note: flashing these can add up to 2 minutes to server boot time if you have a lot of drives. Be sure you need them!

If you need to boot from drives connected to this adapter, you'll need to flash a boot image to it. Otherwise, skip it. This is what gives you the "press blahblah to enter the LSI boot configuration utility" text when the server boots. To flash the regular BIOS boot image:

flashboot /root/Bootloaders/mptsas2.rom

If you want to UEFI boot from drives connected to this adapter, you need to flash the UEFI boot image (the card can have both UEFI and BIOS boot images flashed):

flashboot /root/Bootloaders/x64sas2.rom

You can now ditch the live images and boot back into your normal system.

Optional: Reverting

If for some reason you need to revert back to the stock Dell PERC firmware, that's easy. Boot back into the FreeDOS live image, and run the following command:

BIGD1RVT

That's it! When it finishes, just reboot back to your normal system with the reboot command.

Note: This uses the unmodified latest Dell firmware 21.3.5-0002,A09 extracted from the update EXE found here.

My experience

At one point when flashing the firmware, I ran into an error, 524288. Reference 4 has a suggestion: ensure you power off the system, unplug it for at least 20 seconds, and also ensure the RAID card battery is disconnected.

reboot the server without continuing and try running d1 cross again. make sure you have the battery unplugged from the controller, and remove power from the server for like 20 seconds or so before trying again

References

  1. Found this whole process on reddit
  2. Main flash article
  3. Second part, for H710 D1 full card
  4. Handled 524288 error
  5. Dell Perc Flashing ZIP and my local copy

Improving accessibility in InfCloud

I just wrote about my calendar solution, which uses Radicale and my fork of InfCloud.

I finally got around to solving an ultimately minor problem, but it makes my day so much better! The application does not display a focus indicator, i.e., a way to know which field you are editing. Text fields have the text cursor of course, but a lot of the inputs are drop-downs (technically they are HTML selects). I'm hardly a CSS expert, so it took me quite a while to find the part of default.css that affects these elements.

I started with the language selector drop-down on the login page. I eventually discovered that the relevant selector was *:focus. I modified its outline: attribute, and then I could see which field was selected! I gave *:focus a visible red outline, so now as I tab around on any screen, including the buttons, I can see which field currently has focus.

I also modified the login screen so that the "login" button is selectable, so I can tab to it. A JavaScript event still captures the "Enter" keypress, but I'm used to tabbing to a submit button and then activating it. And now I can.

See the commit in the repo (or on gitlab), or here:

diff --git a/radicale_infcloud/web/css/default.css b/radicale_infcloud/web/css/default.css
index 14e8156..64aeef7 100644
--- a/radicale_infcloud/web/css/default.css
+++ b/radicale_infcloud/web/css/default.css
@@ -409,9 +409,10 @@ body, input, select, textarea
    position: static;   /* required by fullcalendar */
 }

-*:focus
-{
-   outline: none;
+/* stackrpms removed outline none which is bad for keyboard navigation */
+*:focus {
+   outline: 1px solid rgb(180,0,0);
+   -moz-outline: 1px solid currentcolor;
 }

 select
@@ -768,7 +769,7 @@ input[type=text], input[type=password]
 {
    height: 19px;
    margin-left: 0px;
-   outline: none;
+   /* stackrpms removed outline none which is bad for keyboard navigation */
    border: 0px;

    padding-left: 2px;   /* it resizes the input size :( */
@@ -826,9 +827,10 @@ textarea
    resize: none;
    padding-left: 3px;

-   outline: none;
-   -moz-outline: none;
-   -moz-border-radius: 0px;
+   /* stackrpms removed outline none which is bad for keyboard navigation */
+   /*outline: none; */
+   /*-moz-outline: none; */
+   /*-moz-border-radius: 0px; */

    /* mobile safari remove rounded corners */
    -webkit-appearance: none;
diff --git a/radicale_infcloud/web/index.html b/radicale_infcloud/web/index.html
index 26ee0d5..39b8583 100644
--- a/radicale_infcloud/web/index.html
+++ b/radicale_infcloud/web/index.html
@@ -93,10 +93,7 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
                            </td>
                        </tr>
                        <tr>
-                           <td><img data-type="system_login" alt="login" title="login" src="images/login.svg" onclick="if(event.shiftKey) ignoreServerSettings=true; $(this).closest('form').find('[type=\'submit\']').click();" /></td>
-                       </tr>
-                       <tr style="display:none">
-                           <td><input type="submit" /></td>
+                           <td><input type="image" data-type="system_login" alt="login" title="login" src="images/login.svg" onclick="if(event.shiftKey) ignoreServerSettings=true; $(this).closest('form').find('[type=\'submit\']').click();" /></td>
                        </tr>
                    </table>
                </form>

Ldap auth for my cgit project

I have previously written about my cgit solution for my network. With my recent work on my calendar solution, I finally got around to adding basic authentication, with an LDAP backend, to the cgit/git solution as well.

My apache configs are now separated into even more included files!

File /etc/httpd/conf.d/cgit.conf

Alias /cgit-data /usr/share/cgit
ScriptAlias /cgit /var/www/cgi-bin/cgit
RedirectMatch ^/cgit$ /git/
<Directory "/usr/share/cgit/">
   AllowOverride None
   Require all granted
</Directory>

File /etc/httpd/conf.d/main.conf

SetEnv GIT_PROJECT_ROOT /var/www/git
SetEnv GIT_HTTP_EXPORT_ALL
SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
SetEnv GITWEB_CONFIG /etc/gitweb.conf
# This file will not work when it is in /usr/sbin.
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend-internal/
<Directory "/usr/libexec/git-core*">
   Options +ExecCGI +Indexes
   Order allow,deny
   Allow from all
   Require all granted
</Directory>
# a2enmod macro
<Macro Project $repository $rwstring $rostring>
   <LocationMatch "^/git/$repository.*$">
      AuthName "Git Access"
      Include conf.d/auth.cnf
      #AuthUserFile /etc/git_access
      Require $rwstring
      Require $rostring 
   </LocationMatch>
   <LocationMatch "^/git/$repository/git-receive-pack$">
      AuthName "Git Access"
      Include conf.d/auth.cnf
      #AuthUserFile /etc/git_access
      Require $rwstring
   </LocationMatch>
</Macro>
# Protect everything under git directory...
<Directory "/var/www/git">
   Require all denied
</Directory>
# ...Unless given permissions in this file.
Include /etc/git_access.conf
# https://ic3man5.wordpress.com/2013/01/26/installing-cgit-on-debian/
# depends on confs-enabled/cgit.conf
<Directory "/usr/share/cgit/">
   SetEnv CGIT_CONFIG /etc/cgitrc
   SetEnv GIT_URL cgit
   AllowOverride all
   Options +ExecCGI +FollowSymLinks +Indexes
   DirectoryIndex cgit.cgi
   AddHandler cgi-script .cgi
   RewriteCond %{REQUEST_FILENAME} !-f
   RewriteCond %{REQUEST_FILENAME} !-d
   RewriteRule (.*) /cgit/cgit.cgi/$1 [END,QSA]
</Directory>

And now, I load /etc/httpd/conf.d/auth.cnf, which contains my common authentication rules.

# File: /etc/httpd/conf.d/auth.cnf
# Startdate: 2022-05-22 14:32
# Usage: included by main config file in a few places
AuthType Basic
Order deny,allow
Deny from all
Satisfy any
AuthBasicProvider ldap
AuthLDAPGroupAttribute member
AuthLDAPSubGroupClass group
# If anonymous search is disabled, provide dn and pw.
#AuthLDAPBindDN uid=service-account,cn=users,cn=accounts,dc=ipa,dc=example,dc=com
#AuthLDAPBindPassword mypw
AuthLDAPGroupAttributeIsDN On
AuthLDAPURL "ldaps://dns1.ipa.internal.com:636 dns2.ipa.internal.com:636/cn=users,cn=accounts,dc=ipa,dc=internal,dc=com?uid,memberof,gecos?sub?(objectClass=person)"
#?sub?(objectClass=*)
# My Radicale setup uses HTTP_X_REMOTE_USER as the username for authentication
RequestHeader set X_REMOTE_USER "%{AUTHENTICATE_uid}e"
# This does not populate correctly. Probably my group attribute settings are wrong?
RequestHeader set X_GROUPS "%{AUTHENTICATE_memberOf}e"
# This populates correctly
RequestHeader set X_GECOS "%{AUTHENTICATE_gecos}e"

And of course, /etc/git_access.conf

# File /etc/git_access.conf
# Part of cgit solution for Internal network, 2021-04-15
# The last phrase can be "all granted" to allow anybody to read.
# Use httpd "Require" strings for param2, param3. Param2 grants read/write permission, Param3 is read-only.
#Use Project dirname "user alice bob charlie" "all granted"
#Use Project dirname "user charlie" "user bob alice"
Use Project 7w "user bgstack15" "all granted"
Use Project "chicago95-packaging/chicago95-packages" "user bgstack15" "all granted"
Use Project "el7-gnupg2-debmirror/libassuan" "user bgstack15" "all granted"

I tried making it so I could use globs or regular expressions in the values in this git_access.conf file, but I couldn't figure that out. So instead of chicago95-packaging/* I had to stick to naming every directory underneath that.
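
To confirm the whole chain (the macro, the auth.cnf include, and the git_access.conf entry) behaves as intended, I can poke the smart-HTTP endpoints with curl. This is just a sketch with a placeholder hostname: the read endpoint of a repo granted "all granted" read access should return 200 anonymously, and the push endpoint should demand LDAP-backed Basic auth.

# placeholder hostname; expect 200 for anonymous read of repo 7w
curl -sS -o /dev/null -w '%{http_code}\n' 'https://www.example.com/git/7w/info/refs?service=git-upload-pack'
# expect 401 when probing the push (git-receive-pack) URL without credentials
curl -sS -o /dev/null -w '%{http_code}\n' 'https://www.example.com/git/7w/git-receive-pack'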

So, nothing groundbreaking today.

Calendar solution for internal network

Overview

As part of my efforts to de-Google my life, the Calendar solution is a fully self-hosted, FLOSS project. It uses the Radicale CalDAV server, which is already packaged for EL7 (though I repackage a newer version), and InfCloud, a web client for CalDAV that is loosely hardcoded to use this existing Radicale instance.

Additional components include the F-Droid packages of DAVx⁵ (a CardDAV sync utility) and Etar, a calendar client.

Prerequisites

Apache httpd is installed. TLS certificates are in use, to protect the password traffic.

Building

CentOS 7 provides Radicale 1.1, which is not the current version. I adapted the radicale package and various python dependencies from Fedora, and also wrote the rpm spec for InfCloud. A copy of the Git repository for build-radicale-el7 exists here. See build-radicale-el7/README.md for the whole process. NOTE: this includes one patch that I wrote to allow automatic login using the reverse-proxy Apache LDAP authentication.

I established a copr to hold the rpm packages.

Installing on production

On server1, I ran a number of steps which are also documented in the [main server log][3].

Add the copr repo.

curl https://copr.fedorainfracloud.org/coprs/bgstack15/radicale-el7/repo/epel-7/bgstack15-radicale-el7-epel-7.repo | sudo tee /etc/yum.repos.d/bgstack15-radicale-el7.repo
sudo yum install radicale3 infcloud

Customize /etc/radicale/config and /etc/infcloud/config.js.
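
For reference, the relevant parts of my /etc/radicale/config boil down to something like this sketch (simplified; the key points are listening only on localhost, since Apache proxies to it, and trusting the username header that the Apache LDAP auth sets):

# /etc/radicale/config (sketch, simplified)
[server]
hosts = localhost:5232

[auth]
# Radicale 3 can trust the username passed by the reverse proxy
type = http_x_remote_user

[storage]
filesystem_folder = /var/lib/radicale/collections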

Add redirects for my http virtual host in httpd:

# Force https for these pages or apps
RewriteCond %{HTTPS} !=on
Redirect /radicale/ https://www.example.com/radicale/
RewriteCond %{HTTPS} !=on
Redirect /calendar/ https://www.example.com/calendar/

Also add useful VirtualHost contents to my apache config file ssl-common.cnf:

# 2022-05-17
#ServerName calendar.example.com
RewriteEngine On
RewriteRule ^/radicale$ /radicale/ [R,L]
<Location "/radicale/">
   ProxyPreserveHost On
   Order deny,allow
   Deny from all
   AuthType Basic
   AuthName "LDAP protected"
   AuthBasicProvider ldap
   AuthLDAPGroupAttribute member
   AuthLDAPSubGroupClass group
   # If anonymous search is disabled, provide dn and pw.
   #AuthLDAPBindDN uid=service-account,cn=users,cn=accounts,dc=ipa,dc=example,dc=com
   #AuthLDAPBindPassword mypw
   AuthLDAPGroupAttributeIsDN On
   AuthLDAPURL "ldaps://dns1.ipa.internal.com:636 dns2.ipa.internal.com:636/cn=users,cn=accounts,dc=ipa,dc=internal,dc=com?uid,memberof,gecos?sub?(objectClass=person)"
   #?sub?(objectClass=*)
   Require valid-user
   Satisfy any
   # My Radicale setup uses HTTP_X_REMOTE_USER as the username for authentication
   RequestHeader set X_REMOTE_USER "%{AUTHENTICATE_uid}e"
   # This does not populate correctly. Probably the ldap memberOf attribute is derived and not real?
   RequestHeader set X_GROUPS "%{AUTHENTICATE_memberOf}e"
   # This populates correctly
   RequestHeader set X_GECOS "%{AUTHENTICATE_gecos}e"
   ProxyPass        http://localhost:5232/ retry=20 connectiontimeout=300 timeout=300
   ProxyPassReverse http://localhost:5232/
   RequestHeader    set X-Script-Name /radicale
</Location>

I customized the storage of calendars (aka collections) to be on the main storage area:

sudo mkdir -p /var/server1/shares/public/Support/Systems/server1/var/lib/radicale/collections
sudo mv /var/lib/radicale/collections /var/server1/shares/public/Support/Systems/server1/var/lib/radicale
sudo ln -s /var/server1/shares/public/Support/Systems/server1/var/lib/radicale/collections /var/lib/radicale/collections
sudo semanage fcontext -a -t radicale_var_lib_t '/var/server1/shares/public/Support/Systems/server1/var/lib/radicale/collections(/.*)?'
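
The semanage rule only records the new context; applying it to the relocated directory takes a restorecon, which I would run against the same path:

sudo restorecon -RFv /var/server1/shares/public/Support/Systems/server1/var/lib/radicale/collections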

I also added all these aforementioned config files to the host-bup config file.

Start and enable radicale service.

sudo systemctl enable radicale
sudo systemctl start radicale

Make a symlink in the web root directory that points to the infcloud contents.

sudo ln -s /usr/share/infcloud/radicale_infcloud/web /var/www/html/calendar

Making good config choices

In order for InfCloud to save which calendars are enabled/visible by default, you need to turn "settingsAccount" on in config.js. This attribute causes InfCloud to store some metadata about the user choices on the caldav server (Radicale). Without this feature, the InfCloud user cannot even change which calendars are visible!
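
In my config.js that amounts to roughly the following; this is a trimmed sketch, and the href value here is only illustrative:

var globalNetworkCheckSettings={
   href: location.protocol+'//'+location.hostname+'/radicale/',
   settingsAccount: true
};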

Summary of associated files

On server1, the production server:

  • /etc/radicale/config
  • /etc/infcloud/config.js symlinked to within /var/www/html/calendar/
  • /etc/infcloud/cache.manifest symlinked to within /var/www/html/calendar/
  • /usr/sbin/update-infcloud-cache
  • /var/www/html/calendar is a symlink to /usr/share/infcloud/radicale_infcloud/web

Operations

Some tasks that will happen over time are listed here.

Modifying InfCloud files or config

After making any changes to anything for InfCloud (the web client), you need to update the cache manifest file. It is possible that merely touching the file works, but the method I use is a script that updates the version number in the file:

sudo update-infcloud-cache
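
I have not reproduced the script here, but conceptually all it needs to do is change at least one byte in the manifest so browsers refetch the cached assets. A hypothetical minimal version could look like this:

#!/bin/sh
# hypothetical sketch, not the real /usr/sbin/update-infcloud-cache:
# any byte change in the manifest makes clients refetch the cached files,
# so rewrite a trailing timestamp comment line.
f=/etc/infcloud/cache.manifest
sed -i -e '/^# cache-bust /d' "$f"
printf '# cache-bust %s\n' "$(date +%s)" >> "$f"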

Controlling collections

To add, modify, or remove personal collections (calendars, address books, and todo lists), visit https://www.example.com/radicale/. Log in with a domain credential.

Adding a client

To connect a carddav/caldav client to the Calendar service, use url https://www.example.com/radicale/ and select "Use username and password."

Using a web calendar

To use a rich web client, visit https://www.example.com/calendar/ and type in your domain username and password.

Sending invitations for an event from the web calendar

In the web client, select an event. Select button "Download," which saves a .ics file. Send this .ics file in your preferred mail client to your guests.

Importing event to web calendar

Feature not implemented yet. Gotta say unh!

Sharing calendar with other Radicale user

This step has to happen first, before the following Sharing operations can work.

An admin has to modify the file storage:/etc/radicale/rights and add some rules. Create one rule per [uniquely-named-section]. Use uppercase permission letters for a "root" collection, i.e., a username. Use lowercase letters for any child collections. The UUID of a collection is required; the pretty name is not accepted.

The following is a way to allow all authenticated users to enumerate the items owned by domainjoin, and also read all those items.

[public-principal]
user: .+
collection: domainjoin
permissions: rRi
[public-calendars]
user: .+
collection: domainjoin/[^/]+
permissions: rRi

Adding a "w" or "W" as appropriate to the permissions: entry would enable write access. Another example, for username2 to write to a specific bgstack15 calendar:

[rule1]
user: username2
collection: bgstack15
permissions: R
[rule2]
user: username2
collection: bgstack15/68cb9dbf-7546-8023-ca4c-c69bc64918a2
permissions: rwi

It is probable that read access to the root collection is required in order for the web calendar to be able to enumerate the one shared calendar, but further experimentation should be done.

Opening a shared calendar in web calendar

Unfortunately this so far is only known to work by modifying /etc/infcloud/config.js which requires admin access to server1. If all users of the web calendar should be able to view/edit a collection, add the owning user to attribute additionalResources: ['user5'] inside var globalNetworkCheckSettings. In this example, user5 owns a shared collection, as defined by heading Sharing calendar with other Radicale user. Of course, after modifying config.js, run update-infcloud-cache.

For a specific login to access another specific login's root collection (because InfCloud relies on enumerating collections underneath a root collection, i.e., a user, rather than pointing to a specific collection), you can use the custom stackrpms attribute perUserAdditionalResources, as in this example:

var globalNetworkCheckSettings={
  perUserAdditionalResources: [
     { name:"bgstack15", allowed: ["username2"] },
     { name:"username2", allowed: ["bgstack15"] }
  ]
}

Now, the shared collections should populate in the web calendar. If the logged-in user does not have access to this resource, an unfixable and unhideable exclamation mark "!" will appear in the web calendar in the left-hand side menu. It will not be clear to the user what has gone wrong.

Opening a shared calendar with DAVx⁵

Unfortunately, the only known way to access a shared calendar is to set up a new connection in DAVx⁵ with the exact URL of the target calendar, e.g. https://www.example.com/radicale/domainjoin/338119a7-3556-0a9b-67ab-be391db1ae77/ , which the owning user can copy after visiting /radicale/ and enumerating his collections.

This creates a new account entry that enumerates the logged-in user's own collections, plus this one exact collection.

Future improvements

Investigate more calendar sharing. Migrate address books.

References

Internal files

[3]: server1 history log

Git repositories

[6]: https://gitlab.com/bgstack15/radicale_auth_ldap.git (I ended up not using this, but it is worth mentioning.)

Weblinks

conaclos:
An interesting alternative is Radicale [1]. Both are very lightweight. I mainly chose Radicale over Baïkal because it is written in Python. On the client side DAVx⁵ [2] does a good job.
[1] https://radicale.org/v3.html
[2] https://www.davx5.com/

cube00:
I've been using Radicale for a few years now and it has been fantastic. Extremely lightweight but also quite flexible with its permissions model since we have a shared family calendar. The backend storage is simply ics/vcf files and while I'm sure it's not the most efficient if you had a large number of users, for our small group it's been perfect and very satisfying knowing your data is there in plain text files. Although if I'm honest I'm just cheap and wanted to get by on the smallest VM offered by my cloud provider and NextCloud was too demanding for that.

jamessb:
> Extremely lightweight but also quite flexible with its permissions model since we have a shared family calendar.
How are you doing this? A while ago I skimmed the documentation for a couple of CalDAV servers to try and figure out how I could self-host a shared calendar, but couldn't see an easy way to do this. I've just done some more searching, and it seems there are two suggested ways to do this with Radicale:
* create a separate account for the shared calendar, and tell everyone who needs write access the password
* create the calendar in one user's directory, and add a symlink to it in the user directories for any other users who need write access
Both of which seem like a bit of a hack compared to being able to explicitly state that a list of users have write access to a calendar in a config file or through a UI.

cube00:
I created a collection with the name of our domain name and then used this example [1] to regex the domain out of the user's login email address to allow them access to the shared collection.
[1] https://github.com/Kozea/Radicale/blob/497b5141b066d266c318e...

# Example: Grant users of the form user@domain.tld read access to the
# collection "domain.tld"

# Allow reading the domain collection
[read-domain-principal]
user: .+@([^@]+)
collection: {0}
permissions: R

# Allow reading all calendars and address books that are direct children of
# the domain collection
[read-domain-calendars]
user: .+@([^@]+)
collection: {0}/[^/]+
permissions: r

Git hook post-receive run restorecon

I use cgit for myself, which is available at /cgit. I wrote about it previously, about a year ago.

And I have now improved my process with a post-receive hook. This hook runs every time the server receives a push. My hook does a few things:

* generates metadata to store the most-recently-modified timestamp for this repo, for sorting in the cgit web view.
* runs `restorecon` to restore the proper SELinux file contexts, so new branches are visible in the cgit web view.

Configure SELinux

Of course I use SELinux, so I need a custom policy for my git-hook to work. I used the standard mechanisms to troubleshoot SELinux.

sudo setenforce 0
semodule --disable_dontaudit --build
echo "" | sudo tee /var/log/audit/audit.log 1>/dev/null
# perform a large number of git and cgit operations
sudo tail -n15000 /var/log/audit/audit.log | audit2allow -M foo
# manually merge any new entries into cgitstackrpms.te
semodule --build
_func() { sudo checkmodule -M -m -o cgitstackrpms.mod cgitstackrpms.te && sudo semodule_package -o cgitstackrpms.pp -m cgitstackrpms.mod && sudo semodule -i cgitstackrpms.pp ; } ; time _func

The final asset is cgitstackrpms.te, which can be compiled and installed with the last line of the above command block. The most recent version includes the rules for the restorecon invocation from the post-receive git hook.

module cgitstackrpms 1.2;

require {
    type default_context_t;
    type git_script_t;
    type httpd_cache_t;
    type httpd_sys_content_t;
    type httpd_sys_rw_content_t;
    type httpd_t;
    type initrc_var_run_t;
    type selinux_config_t;
    type shadow_t;
    type sssd_conf_t;
    type systemd_logind_sessions_t;
    type systemd_logind_t;
    type var_t;
    class capability { audit_write fowner net_admin sys_resource };
    class dbus send_msg;
    class dir { relabelfrom relabelto getattr open read search };
    class fifo_file write;
    class file { getattr map lock open read relabelfrom relabelto };
    class netlink_audit_socket { create nlmsg_relay read write };
    class process { setrlimit noatsecure rlimitinh siginh };
}

#============= git_script_t ==============
allow git_script_t var_t:dir read;
allow git_script_t var_t:file { read getattr open };
allow git_script_t httpd_cache_t:dir { getattr open read search };
allow git_script_t httpd_cache_t:file { map getattr open read };
allow git_script_t httpd_sys_content_t:dir { getattr open read search };

#============= httpd_t ==============
allow httpd_t git_script_t:process { noatsecure rlimitinh siginh };
allow httpd_t default_context_t:file { getattr open read };
allow httpd_t httpd_sys_content_t:dir relabelto;
allow httpd_t httpd_sys_content_t:file relabelto;
allow httpd_t httpd_sys_rw_content_t:dir relabelfrom;
allow httpd_t httpd_sys_rw_content_t:file relabelfrom;
allow httpd_t initrc_var_run_t:file { lock open read };
allow httpd_t self:capability { net_admin audit_write fowner sys_resource };
allow httpd_t self:netlink_audit_socket { create nlmsg_relay read write };
allow httpd_t self:process setrlimit;
allow httpd_t selinux_config_t:file { getattr open read };
allow httpd_t shadow_t:file { getattr open read };
allow httpd_t sssd_conf_t:file { getattr open read };
allow httpd_t systemd_logind_sessions_t:fifo_file write;
allow httpd_t systemd_logind_t:dbus send_msg;

#============= systemd_logind_t ==============
allow systemd_logind_t httpd_t:dbus send_msg;

I already had a file context for my entire www directory.

semanage fcontext -a -t httpd_sys_content_t '/var/server1/shares/public/www(/.*)?'

Running restorecon after each push

The post-receive git hook runs after every push to this cgit instance. Sometimes new files are built with incorrect SELinux file contexts, but a restorecon fixes the contexts. Special permissions are needed: the SELinux policy above, and also a sudo rule.

My environment uses FreeIPA, so I can make a rule in the domain for this. Even though apache is of course a local user, I make a matching domain user so I can write a sudo rule for it. Sudo matches sudoers entries by username, not by uid, so the rule will apply to the local user apache on the target host.

# gid 960600013 is my preexisting service-accounts group
ipa user-add --homedir=/usr/share/httpd --shell /sbin/nologin apache --first=apache --last="domain user" --gidnumber=960600013
ipa group-add-member service-accounts --users apache 
ipa sudocmd-add '/sbin/restorecon -R /var/www/git'
ipa sudocmd-add '/sbin/restorecon -Rv /var/www/git'
ipa sudocmd-add '/sbin/restorecon -Rvn /var/www/git'
ipa sudorule-add apache-restorecon-cgit
ipa sudorule-add-host apache-restorecon-cgit --hosts server1
ipa sudorule-add-allow-command apache-restorecon-cgit --sudocmds /sbin/restorecon
ipa sudorule-add-allow-command apache-restorecon-cgit --sudocmds '/sbin/restorecon -R /var/www/git'
ipa sudorule-add-allow-command apache-restorecon-cgit --sudocmds '/sbin/restorecon -Rv /var/www/git'
ipa sudorule-add-allow-command apache-restorecon-cgit --sudocmds '/sbin/restorecon -Rvn /var/www/git'
ipa sudorule-add-option apache-restorecon-cgit --sudooption '!requiretty'
ipa sudorule-add-option apache-restorecon-cgit --sudooption '!authenticate'
ipa sudorule-add-user apache-restorecon-cgit --users apache
ipa sudorule-mod apache-restorecon-cgit --desc="Apache can run restorecon /var/www/git on server1"

If you are using plain sudo, use these contents.

# File /etc/sudoers.d/60_git-hook_post-receive_sudo
Defaults:apache   !requiretty
apache  server1=(root)       NOPASSWD: /usr/sbin/restorecon -R /var/www/git, /usr/sbin/restorecon -Rv /var/www/git, /usr/sbin/restorecon -Rvn /var/www/git

With all of these steps, the user apache can now run restorecon on that directory as part of the git-hook for post-receive.
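
To confirm the sudoers rule works before relying on the hook, I can impersonate apache and do a dry run; the -Rvn variant was added above precisely so a no-op check like this is allowed:

# run as root or another admin: pretend to be apache and dry-run the allowed command
sudo -u apache sudo --non-interactive /usr/sbin/restorecon -Rvn /var/www/git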

All of that, to explain one line in the following git hook file, /usr/local/bin/git-hooks/post-receive.

#!/bin/sh
# File: post-receive
# Project: cgit-Internal
# Startdate: 2022-01-27
# History:
#    2022-05-11 added restorecon
# Ripped directly from https://landchad.net/cgit
# Dependencies:
#    sudoers rule in freeipa
#exec 1>>/tmp/asdf2
#exec 2>&1
#exec 3>&1
set -x

agefile="$(git rev-parse --git-dir)"/info/web/last-modified
date "+%FT%T used post-receive" >> /tmp/asdf2

mkdir -p "$(dirname "$agefile")" &&
git for-each-ref \
    --sort=-authordate --count=1 \
    --format='%(authordate:iso8601)' \
    >"$agefile"

sudo --non-interactive /usr/sbin/restorecon -R /var/www/git 1>/dev/null 2>&1 &

This git hook can be installed for all future git repositories created as part of my documented solution by placing it in the skeleton git hooks directory:

sudo ln -sf /usr/local/bin/git-hooks/post-receive /usr/share/git-core/templates/hooks/post-receive
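
The template directory only affects repositories initialized after this point. To retrofit the hook onto repositories that already exist, a loop like this (a sketch, using the same directory layout as the refresh loop later in this post) does the trick:

for d in /var/www/git/{*,*/*}/hooks ; do sudo ln -sf /usr/local/bin/git-hooks/post-receive "${d}/post-receive" ; done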

Operations

An additional operation is available for the cgit-Internal solution.

Refreshing last-modified date for all repos

If for some reason the repositories need refreshed info/web/last-modified contents, you can run the hook manually. This command switches to user apache so the permissions are preserved on the info/web/last-modified files.

On server1:

sudo chown -R apache.admins /var/www/git/{*,*/*}/info/web
su apache -s /bin/bash -c 'cd /var/www/git ; for word in {*,*/*}/hooks/post-receive ; do ( cd "$( dirname "${word}" )" ; sh "./$( basename "${word}" )" ; ) ; done'

Update ipasam rpm

Overview

Update-ipasam-rpm is a project that facilitates building a custom rpm with just the one file (ipasam.so) that samba needs to authenticate users against ipa.

Update-ipasam-rpm upstream

Gitlab is the upstream. This is original work.

Reason for existing

The proper ipa-server-trust-ad rpm has a large number of dependencies which are unnecessary when merely using samba with ipa user authentication, so this project copies that one file out and builds a small rpm just for it. This lets my samba file server install fewer packages.
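
Conceptually, the extraction is nothing more exotic than pulling the one module out of the upstream rpm. This is a sketch of the idea, not the project's actual script, and it assumes ipasam.so lives in the usual samba passdb module directory on EL:

# download the upstream package without installing its dependency tree
dnf download ipa-server-trust-ad
# unpack just the samba passdb module
rpm2cpio ipa-server-trust-ad-*.rpm | cpio -idmv './usr/lib64/samba/pdb/ipasam.so'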

Alternatives

Install ipa-server-trust-ad with all of its dependencies, which is way more than I want to use when I just need ipasam.so.

Dependencies

For any rpm-based system, but primarily AlmaLinux 8, the server that runs the cron job needs a few things:

  • HTTP/HTTPS access to a package mirror
  • HTTPS access to copr to initiate builds
  • HTTPS access to gitlab to pull the ipasam.spec template
  • Packages: copr-cli, rpmbuild
  • Unprivileged user, shown in this documentation as username copruser

The copr API can be used with a copr user and a generated API key (Reference 1).
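
For reference, ~copruser/.config/copr follows the standard copr-cli format; values redacted here:

[copr-cli]
login = XXXXXXXXXXXXXXXXXXXX
username = bgstack15
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
copr_url = https://copr.fedorainfracloud.org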

Files in the project

  • ~copruser/.config/copr
  • ~copruser/.config/ipasam
  • ~copruser/bin/update-ipasam-rpm.sh
  • /etc/cron/70_ipasam_cron
  • ~copruser/.cache/ipasam.spec (generated by modifying spec from this repo)
  • ~copruser/.cache/ipa-server-trust-ad.ver (generated)
  • ~copruser/rpmbuild/SRPMS/ipasam*.srpm (generated)

Usage

  • Create user copruser or other unprivileged user as desired.
  • Deploy the non-generated files from this repository to the above locations and inspect the config files.
  • Prepare a user on copr, generate an API key, and place it in file ~copruser/.config/copr.
  • Run the command manually to see it operate.

    ~/bin/update-ipasam-rpm.sh

  • Visit your copr project to see the ipasam rpm that you built!

Differences from upstream

N/A

References

  1. COPR API introduction

Devuan: fix rsyslog hang on shutdown

I use NFS mounts in my GNU/Linux network. I should investigate using freeipa-defined autofs mounts, but I have never gotten around to that. For now, I still use hardcoded entries in /etc/fstab on all systems.

I finally bothered to research why rsyslog tends to hang at shutdown/reboot of my Devuan GNU+Linux systems. Its init-script dependencies are out of order with respect to unmounting NFS! Perhaps the authors of the rsyslog init script assume that rsyslog writes to an NFS location. If I were to ship logs elsewhere, I would use rsyslog's native network capabilities and not depend on the filesystem.
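
On a sysvinit system like Devuan, the shutdown ordering is visible in the K-prefixed symlinks for runlevels 0 and 6, which is a quick way to see who stops before whom:

ls /etc/rc0.d/ /etc/rc6.d/ | grep -i -E 'rsyslog|umountnfs|networking'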

So, by writing a custom init script adapted from /etc/init.d/umountnfs.sh, I can get my systems to shut down correctly instead of hanging for an indeterminate amount of time (sometimes until I forcefully power off).

See downloadable file stackrpms-nfs.sh, whose contents are also listed inline here. Observe that I have added connman and wicd to Should-Stop (so this script stops before each of those listed services stops); they are the various network managers I've used. Wicd was fine, until python2 was removed because it was way past EOL. Connman is OK, and is only slightly better than wicd was.

#!/bin/sh
# File: /etc/init.d/stackrpms-nfs.sh
# Author: bgstack15
# Startdate: 2022-05-01
# Usage:
#    After deploying this file, run `update-rc.d stackrpms-nfs.sh defaults`
# Reference:
#    https://wiki.debian.org/LSBInitScripts/
#    https://forums.debian.net/viewtopic.php?t=70798&start=30
### BEGIN INIT INFO
# Provides:          stackrpms-umountnfs
# Required-Start:
# Required-Stop:     
# Should-Stop:       $network $portmap nfs-common connman wicd
# Default-Start:
# Default-Stop:      0 6
# Short-Description: Unmount my nfs mounts
# Description:       Customized for stackrpms usage from umountnfs.sh
# chkconfig: 2 100 0
### END INIT INFO
PATH=/usr/sbin:/usr/bin:/sbin:/bin
KERNEL="$(uname -s)"
RELEASE="$(uname -r)"
. /lib/init/vars.sh
. /lib/lsb/init-functions
do_stop () {
   for word in $( mount | awk '/type nfs/{print $3}' | sort -r ) ;
   do
      umount --lazy --force "${word}" &
   done
   # for good measure
   sleep 1
   :
}
case "$1" in
  start|status)
    # No-op
    ;;
  restart|reload|force-reload)
    echo "Error: argument '$1' not supported" >&2
    exit 3
    ;;
  stop|"")
    do_stop
    ;;
  *)
    echo "Usage: stackrpms-nfs.sh [start|stop]" >&2
    exit 3
    ;;
esac
:

References

  1. shutdown hangs stopping enhanced syslogd: rsyslogd - Page 2 - Debian User Forums

Awstats browsers and resetting statistics

I want to improve the browser user-agent reporting of my statistics. The main browsers I use (and that I project onto my audience) include Pale Moon and LibreWolf, so I wanted to show their version numbers the way awstats does for Firefox and Chrome. I modified a few important files:

File /usr/share/awstats/wwwroot/cgi-bin/awstats.pl:

--- /usr/share/awstats/wwwroot/cgi-bin/awstats.pl.2022-04-29.01 2020-12-30 22:19:32.000000000 +0000
+++ /usr/share/awstats/wwwroot/cgi-bin/awstats.pl   2022-04-29 20:40:51.521656143 +0000
@@ -18110,6 +18110,9 @@
    my $regvermsie11      = qr/trident\/7\.\d*\;([a-zA-Z;+_ ]+|)rv:([\d\.]*)/i;
    my $regvernetscape    = qr/netscape.?\/([\d\.]*)/i;
    my $regverfirefox     = qr/firefox\/([\d\.]*)/i;
+   my $regverlibrewolf   = qr/librewolf\/([\d\.]*)/i;
+   my $regvernewmoon     = qr/newmoon\/([\d\.]*)/i;
+   my $regverpalemoon    = qr/palemoon\/([\d\.]*)/i;
    # For Opera:
    # OPR/15.0.1266 means Opera 15 
    # Opera/9.80 ...... Version/12.16 means Opera 12.16
@@ -19745,6 +19748,33 @@
                            if ($PageBool) { $_browser_p{"firefox$1"}++; }
                            $TmpBrowser{$UserAgent} = "firefox$1";
                        }
+                       
+                       # LibreWolf ?
+                       elsif ( $UserAgent =~ /$regverlibrewolf/o
+                           && $UserAgent !~ /$regnotfirefox/o )
+                       {
+                           $_browser_h{"librewolf$1"}++;
+                           if ($PageBool) { $_browser_p{"librewolf$1"}++; }
+                           $TmpBrowser{$UserAgent} = "librewolf$1";
+                       }
+                       
+                       # newmoon ?
+                       elsif ( $UserAgent =~ /$regvernewmoon/o
+                           && $UserAgent !~ /$regnotfirefox/o )
+                       {
+                           $_browser_h{"newmoon$1"}++;
+                           if ($PageBool) { $_browser_p{"newmoon$1"}++; }
+                           $TmpBrowser{$UserAgent} = "newmoon$1";
+                       }
+                       
+                       # palemoon ?
+                       elsif ( $UserAgent =~ /$regverpalemoon/o
+                           && $UserAgent !~ /$regnotfirefox/o )
+                       {
+                           $_browser_h{"palemoon$1"}++;
+                           if ($PageBool) { $_browser_p{"palemoon$1"}++; }
+                           $TmpBrowser{$UserAgent} = "palemoon$1";
+                       }

                        # Chrome ?
                        elsif ( $UserAgent =~ /$regverchrome/o ) {

I also modified /usr/share/awstats/lib/browsers.pm to have entries relevant to my target audience's tastes.

--- /usr/share/awstats/lib/browsers.pm.2022-04-29.01    2017-02-20 23:35:50.000000000 +0000
+++ /usr/share/awstats/lib/browsers.pm  2022-04-29 20:40:16.032063869 +0000
@@ -25,10 +25,10 @@

 # Relocated from main file for easier editing
 %BrowsersFamily = (
-   'msie'      => 1,
-   'edge'      => 2,
-   'firefox'   => 3,
-   'netscape'  => 4,
+   'librewolf' => 1,
+   'newmoon'   => 2,
+   'palemoon'  => 3,
+   'firefox'   => 4,
    'svn'       => 5,
    'opera'     => 6,
    'safari'    => 7,
@@ -52,6 +52,10 @@
 'links',
 'lynx',
 'omniweb',
+'librewolf',
+'palemoon',
+'newmoon',
+'webbrowser',
 # Other standard web browsers
 '22acidownload',
 'abrowse',
@@ -264,6 +268,10 @@
 %BrowsersHashIDLib = (
 # Common web browsers text, included the ones hard coded in awstats.pl
 # firefox, opera, chrome, safari, konqueror, svn, msie, netscape
+'librewolf','LibreWolf', # must happen before Firefox
+'palemoon','Pale Moon',
+'newmoon','newmoon',
+'webbrowser','web browser',
 'firefox','Firefox',
 'opera','Opera',
 'chrome','Google Chrome',
@@ -525,6 +533,10 @@
 'svn','subversion',
 'msie','msie',
 'edge','edge',
+'palemoon','newmoon',
+'newmoon','newmoon',
+'librewolf','librewolf',
+'webbrowser','web browser',
 'netscape','netscape',

 'firebird','phoenix',

I also added the relevant icon files:

/usr/share/awstats/wwwroot/icon/browser/librewolf.png
/usr/share/awstats/wwwroot/icon/browser/newmoon.png

Reload statistics

I had to develop this step to rebuild my stats so they recognize these browsers, as well as to apply an IP address block I added near the start of this month.

To clear out the compiled statistics, see Reference 1. You might want to archive your statistics beforehand, of course, but the simple version is just to delete the files.

rm /var/lib/awstats/regular/awstats*

And now rebuild the statistics (with the -update flag) and generate new static pages per month and year. My stats started in September 2021.

time sudo /usr/share/awstats/tools/awstats_buildstaticpages.pl -update -config=doc7-01a -configdir=/etc/awstats -dir=/var/lib/awstats/static -builddate="2021-09" -year="2021" -month="09"

And now the rest of the months, and then by years:

time for d in 2021-{10..12} 2022-{01..04} ; do Y="$( echo "${d}" | awk -F'-' '{print $1}' )" ; M="$( echo "${d}" | awk -F'-' '{print $2}' )" ; sudo /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=doc7-01a -configdir=/etc/awstats -dir=/var/lib/awstats/static -builddate="${Y}-${M}" -year="${Y}" -month="${M}" ; done
time for d in 202{1..2} ; do Y="$( echo "${d}" | awk -F'-' '{print $1}' )" ; sudo /usr/share/awstats/tools/awstats_buildstaticpages.pl -config=doc7-01a -configdir=/etc/awstats -dir=/var/lib/awstats/static -builddate="${Y}" -year="${Y}" -month="all" ; done

References

  1. https://awstats.sourceforge.io/docs/awstats_faq.html#RESET