notes about computers by ~pgadey rss
it is okay to not grok vim
2024-07-04-4 at 09h

Lots of people who start using vim attempt to learn all of it. There are oodles of YouTube videos about how to maximize your efficiency using vim. It's a bit of a trope.

I want to tell you to ignore that whole vibe. It is okay to not grok vim. You don't need to know every single shortcut. In fact, you only really need a handful of commands.

It is entirely possible to write code, or a blog post, or a novel, just using the very basic motion commands. If you can enter insert mode via i, escape it via esc, and navigate around via hjkl then you're pretty much good to go. Of course, you'll want to save your work via esc + :write foo.txt. If you want to get out of vim, use esc + :quit. But, like, that's it.

Vim is a robust tool. It has a vast array of interesting and useful features. For example, I love the ability to have a zillion different registers for copying and pasting things. I use them all the time. There are other features, like marks, which I think are very cool but can never quite remember. I've learned marks half a dozen times, but use them so rarely that they never stick. I bet there is someone, somewhere, who is the exact opposite of me. They probably use marks all the time and never bother with more than one register. I want to make the point that everyone's vim style is unique.

I think that a lot of the hype around learning vim, and deeply understanding it, is misplaced. The gains in efficiency diminish very quickly. Your time would be better spent writing. Or thinking of possible projects to work on. Or walking in nature.

links 001
  • : A nice collection of community resources for helping build the small web.
  • : The Web Revival: A Guide
    • : A long list of Web Revival Manifestos
  • : Changelogs
  • "Writers and talkers and leaders, oh my!"
  • : How to style an RSS feed!
    • I first noticed you could do something like this when I found Rach's feed.
  • : A variety of cool hacks and ideas about computers.
    • : "The advice [to save the date of anything you work on] can be expanded to save more metadata about the things you work on."
  • : A manifesto for brutalist web design
2024-03-20-3 at 15h

This is a re-write of a script that I've used to generate my micro-blog since December 22nd 2014. Why fix something that isn't broken? I'm re-writing it to generate RSS, and fix up some weirdness in the original that I've been using forever. Also, I want to write something in Dave Gauer's RubyLit framework for literate programming.

A major constraint of this re-write is that I don't want to change the current soc file format at all. I don't want to have to go back, and parse the ~300 currently existing entries and munge everything in to some new format.

It was nice to read the RSS Spec to learn enough RSS to make this.

The plan is to re-write the whole thing as a new bash script.

<<Bash Headers>>

<<Posting Logic>>

<<Create the HTML Header>>
<<Create the RSS Header>>

<<The Main Loop>>

<<Create the RSS Footer>>
<<Create the HTML Footer>>

Bash Headers

This version is going to be another bash script. It'll be a hot mess of code, and we need to initialize some stuff.




DATE=$(date +%F)
TIME=$(date +%R)

Posting Logic

#echo $#; # the number of arguments?

case "$1" in 

    vim $InputDir/$DATE.soc;;

     echo "You requested build." ;;

     echo "<li><a id=\"$DATE-$TIME\" href=\"$IndexWebPath#$DATE-$TIME\">[$TIME]</a> $* </li>" >> $InputDir/$DATE.$Extension ;;
esac


The Main Loop

The generator loops through all the soc files in the input directory. For each one, it will create a bit of HTML for the webpage and RSS for the feed.

I am going to use a bad bash programming style to loop over these.

for file in $(ls -r $InputDir/*.$Extension); do

This is not great, because the output of ls is fragile. If there are spaces or weird characters, everything could blow up. The way that you're supposed to do things is:

#for file in $InputDir/*.$Extension; do

However, we know the format of the $InputDir. It's a bunch of files with names like: 2024-03-01.soc.
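
Since the file names here are ISO dates with no spaces, the glob route can still give newest-first order; here is a minimal sketch of how that might look (the directory and extension values are stand-ins for illustration):

```shell
#!/bin/bash
InputDir=./input      # stand-in value for illustration
Extension=soc         # stand-in value for illustration

# Glob into an array (ascending, because ISO dates sort naturally),
# then walk the array backwards to get newest-first order.
files=("$InputDir"/*."$Extension")
for ((i=${#files[@]}-1; i>=0; i--)); do
    echo "${files[i]}"
done
```

This avoids parsing ls entirely, at the cost of a bash-ism (arrays).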

BASENAME=$(basename --suffix=.$Extension $file);
<<Create HTML for Each Date>>
<<Create RSS for Each Date>>

Create the HTML Header

This whole static micro-blog generator is a hot mess of here-docs. We're just going to dump the headers into an index.html file.

cat >$OutputDir/index.html <<EOF

Notice that the preceding line cat >$OutputDir/index.html <<EOF will overwrite index.html. This is intentional. It means that every time the script is run, it will create a fresh index.html.
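
The difference between the two redirection operators carries this whole script, so a throwaway example may help (the file name here is arbitrary):

```shell
cd "$(mktemp -d)"        # scratch directory for the demo
echo "one" > f.txt       # > truncates the file before writing
echo "two" > f.txt       # f.txt now contains only "two"
echo "three" >> f.txt    # >> appends without truncating
cat f.txt                # prints "two" then "three"
```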

<!DOCTYPE html>

<title>~pgadey's micro-blog</title>
<meta charset="UTF-8" />
<link rel="stylesheet" type="text/css" href="screen.css">
<h1>~pgadey's &micro;-blog</h1>
<a href="">home</a>
<a href="#about">about</a>
<a href="">rss <img src="rss.svg" alt="rss feed icon" width="15" style="width: 15px; top: 3px; position: relative;"></a>

This completes the header, so we close out the here-doc.


Create HTML for Each Date

The naming convention for soc files is that all the posts for a date get munged together in a single file, one per line. For example, all the entries for March 1st 2024 will get put in 2024-03-01.soc. And so, we can figure out the date of a post by looking at its $BASENAME.

echo "<h2><a id=\"$PostDATE\" href=\"#$PostDATE\">$PostDATE</a></h2>" >> $OutputDir/index.html
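
The suffix-stripping can be checked on its own; GNU basename drops both the directory part and the given suffix:

```shell
# strips the leading path and the .soc suffix, leaving the ISO date
basename --suffix=.soc ./input/2024-03-01.soc   # prints 2024-03-01
```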

The entries in a soc file are made by appending them one-by-one. And so, if we print them as they appear in the soc file then they'll appear in chronological order within each date. This would create a jumpy reading experience. For example:

  • 2024-03-02
    • Morning
    • Afternoon
    • Evening
  • 2024-03-01
    • Morning
    • Afternoon
    • Evening

And so, we reverse the order of the entries in each soc file using sort -r and get something like this:

  • 2024-03-02
    • Evening
    • Afternoon
    • Morning
  • 2024-03-01
    • Evening
    • Afternoon
    • Morning

This puts them in reverse chronological order with the most recent entry appearing at the top, and everything following monotonically back to the first post.

echo "<ul>" >> $OutputDir/index.html
sort -r $file >> $OutputDir/index.html
echo "</ul>" >> $OutputDir/index.html

(To be honest: I'm not sure what is "chronological order", and what is "reverse chronological order".)
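
Whatever we call the order, plain sort -r does the right thing within a single day, because each <li> embeds a zero-padded [HH:MM] stamp that sorts lexicographically; a demonstration with made-up entries:

```shell
printf '%s\n' \
  '<li><a id="2024-03-01-09:12" href="#2024-03-01-09:12">[09:12]</a> morning </li>' \
  '<li><a id="2024-03-01-15:30" href="#2024-03-01-15:30">[15:30]</a> evening </li>' \
  | sort -r
# the [15:30] entry comes out first
```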

Create RSS for Each Date

Now we make an <item> in the RSS feed.

echo "<item>" >> $OutputDir/index.xml

According to the RSS 2.0 Spec, every <item> must have at least one of: title or description. One of the annoying things about the soc format is that there is no good way to extract a title as each file is just a snippet of HTML. As such, I'm choosing (and this is lame) to make everything "micro-blog post". I think that this is just a bit better than leaving them blank.

echo "<title>micro-blog post</title>" >> $OutputDir/index.xml

My RSS reader of choice (newsboat) defaults to using a snippet of the link as a title for posts that lack titles. This doesn't read especially well as you get stuff like Index.html#2024 02 25 for a title... Not great. I'll stick with "micro-blog post".

According to the RSS 2.0 Spec, all the dates must be in RFC822 format. The date command's --rfc-email option outputs the date and time in RFC5322 format, which supersedes RFC822.

echo "<pubDate>$(date --date="$PostDATE" --rfc-email)</pubDate>" >> $OutputDir/index.xml
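
A quick check of what --rfc-email emits, run in UTC so the output is reproducible (the date itself is arbitrary):

```shell
date -u --date="2024-03-01 12:00" --rfc-email
# prints: Fri, 01 Mar 2024 12:00:00 +0000
```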

We put a link for the item. (We could also put a <guid> which specifies the globally unique id for the post, but I think that that is not needed in this use case.)

echo "<link>" >> $OutputDir/index.xml
echo "$IndexWebPath#$PostDATE" >> $OutputDir/index.xml
echo "</link>" >> $OutputDir/index.xml

We want to generate a somewhat useful description by stripping all the HTML tags from the file. We do this in a somewhat brutal way using sed. The plan is to remove anything between matching triangular brackets and hope for the best.

echo "<description>" >> $OutputDir/index.xml
sed 's/<[^>]*>//g' $file >> $OutputDir/index.xml
echo  "</description>" >> $OutputDir/index.xml

echo "</item>" >> $OutputDir/index.xml
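
The tag-stripping sed expression can be tried on a typical soc line; it keeps the [HH:MM] stamp and the entry text, assuming no literal < or > appears in the body:

```shell
echo '<li><a id="x" href="#x">[09:12]</a> hello <em>world</em></li>' \
  | sed 's/<[^>]*>//g'
# prints: [09:12] hello world
```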

Create the HTML Footer

cat >>$OutputDir/index.html <<EOF
<p id="about">

echo $(ls -1 $InputDir/*.$Extension | wc -l) "days with records." >> $OutputDir/index.html
echo $(wc -c $InputDir/*.$Extension | tail -n1) " characters." >> $OutputDir/index.html

cat >>$OutputDir/index.html <<EOF
This page was generated by a modified version of <a href="">soc</a> written by
For details about the modification, check out <a href="">my write-up</a>.
<div id="sitelink" style="width=100%;text-align:center;">
<a href=""><em></em></a><br>

Create the RSS Header

As we did with the HTML header, we're going to overwrite any existing index.xml.

cat >$OutputDir/index.xml <<EOF
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="">
<title>~pgadey's micro-blog</title>
<description>Recent content on ~pgadey's micro-blog</description>
<generator>the updated script</generator>
<atom:link href="" rel="self" type="application/rss+xml"/>

One annoying bit of the RSS 2.0 specification is that it requires dates in RFC822 format. One can hack them together using date via: date +%a,\ %d\ %b\ %Y\ %T\ %z. (Later on, I abandon this idea and use RFC5322.)

# RSS requires dates in RFC822 format.
echo "<lastBuildDate>$(date +%a,\ %d\ %b\ %Y\ %T\ %z)</lastBuildDate>" >> $OutputDir/index.xml

However, it seems that this is handled by date --rfc-email.

Create the RSS Footer

cat >>$OutputDir/index.xml <<EOF

Shell Script


rubylit.rb soc 

sed --in-place 's/^    //g' # get rid of line initial spaces (needed for EOF to work)

rubylit.rb soc "Shell Script"
rubylit.rb soc ./output/screen.css "The Stylesheet"
rubylit.rb soc ./output/rss.svg "Dave's RSS Icon"
rubylit.rb soc "The Old SOC Micro-blog Generator"

ssh "meta-bake"

./ --build
scp ./output/*

The Old SOC Micro-blog Generator


# soc - Stream Of Consciousness mini-logger
# Copyright 2014 David Meyer <> +JMJ
# grabbed from :

# Modified by pgadey to include:
#   -- total entry count
#   -- anchors to all dates

# to do:
#   -- split in to static blog generator and post creator



date=$($DATE +%F)
time=$($DATE +%R)

$ECHO "<li><a id=\"$date-$time\" href=\"$date-$time\">[$time]</a> $* </li>" >>$socdir/$date.$socext

$CAT >$outfile <<EOF
<!DOCTYPE html>

<title>~pgadey's micro-blog</title>
<meta charset="UTF-8" />
<link rel="stylesheet" type="text/css" href="./screen.css">
<h1>~pgadey's &micro;-blog</h1>
<a href="">home</a>
<a href="#about">about</a>

for f in $($LS -r $socdir/*.$socext); do
$ECHO "<h2><a id=\"$fd\" href=\"$fd\">$fd</a></h2>" >> $outfile  # add headers with anchors for each date
$ECHO "<ul>" >>$outfile
$SORT -r $f >>$outfile
$ECHO "</ul>" >>$outfile

$CAT >>$outfile <<EOF
<pre id="stats">

#$FORTUNE >>$outfile

echo $(ls -1 $socdir/*.$socext | wc -l) "days with records" >> $outfile

$cat >>$outfile <<eof
<div id="sitelink" style="width=100%;text-align:center;">
<a href="/"><em></em></a><br>
<p><small>this page was generated by a modified version of <a href="">soc</a> written by <a href="">~papa</a>@<a href=""></a>.</small></p>



cp /home/pgadey/public_html/soc/index.html /home/pgadey/public_html/stream-of-consciousness.html 

Create a New Entry

tk: this needs some work! Write up a bit more logic.

# dump all the positional arguments in to a new list item.
echo "<li><a id=\"$DATE-$TIME\" href=\"#$DATE-$TIME\">[$TIME]</a> $* </li>" >> $InputDir/$DATE.$Extension

Dave's RSS Icon

Molly's RSS Icon

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

The Stylesheet

body {
width: 50em;
margin: 0 auto;
font-family: Courier New, courier;
background: black;
color: #FF7E00;
}

h1 {
text-align: left;
font-size: 1em;
color: black;
background: #FF7E00;
padding: 0 10px;
}

h2 {
text-align: left;
font-size: 1em;
font-weight: bold;
padding: 0 10px;
}

p {
padding: 0 15px;
text-indent: 2em;
}

ol {
margin-left: 25px;
}

ol.indented {
margin-left: 100px;
}

a {
color: #FFBF80;
font-weight: bold;
text-decoration: none;
}

/* unvisited link */
a:link {
color: red;
}

/* visited link */
a:visited {
color: green;
}

/* mouse over link */
a:hover {
color: hotpink;
}

/* selected link */
a:active {
color: blue;
}

.footer {
font-size: smaller;
font-style: italic;
}

overthinking blogging

It's really easy to overthink blogging.

Should you censor yourself and only talk about nice things? Professionally relevant things? Should you post about your family? How should you run your blog? Should you use a nice platform like Bear or use a static site generator like Hugo? If you're setting up your own site using something like Hugo, there is endless room for fiddling. You can add backlinks, or little status updates, or fancy micro-blog posts. I call all of this "fooling with tooling".

What about week notes? How does that fancy plus ⊕ button work? It seems to only show up on mobile. See, it's so easy to go down a rabbit hole looking at other pages.

Or link lists? What about getting started with #100DaysToOffload? Should you setup a secret RSS only club? Here are 100 things you can do on your personal website. Are you worrying about doing all of them?

You could write a post.

It's easy to overthink blogging, but you don't have to. Keep writing, keep posting, and you'll find a good groove.

too many outputs

If you grew up around the Internet, then you might have too many outputs. Suppose that something cool happens to you or you have some novel idea that you want to share with the world. Where should you post it? Should you write an article about it? Should you post it on your blog? Your secret microblog? Instagram? Twitter? Should you e-mail it to a friend? Write a letter to someone? Make a little zine about it? Tell someone that you live with?

This is a weird consequence of the internet and our hyper-connected world. Previously, people had very few options for publishing things. There were very few ways to share information with a large audience. You needed a serious setup to publish anything.

And now, we have too many ways to publish things.

For me, I think that the solution is to be intentional about what each venue is for.

a decade of indexing

Recently, I've been on a deep-dive into indexing my journal. The index to my journal turned ten years old on October 24th 2024. The main thing that I want to do here is reflect on some of the things I've learned during this deep dive. For a bit of background on this process, see my previous posts:

Consistency Across Time

The first thing that I've learned is that the Index has been growing steadily. The growth rate is almost constant, with only a few wobbles away from a straight line.

On average, I write in my journal every other day. The length of the index, as measured by total line count, grows at roughly 5 lines per day. The number of unique subjects grows at about 2.5 subjects per day.

Number of Days Per Volume

The slowest volume was Vol 1 (2013-10-24 to 2014-05-29) and the fastest volume was Vol 11 (2016-07-24 to 2016-08-21). As I noted in the micro-blog, I'm surprised that the behaviour is a little bit periodic. I'm curious to see if that pattern persists.

Script for calculating the days per volume

 echo "volume, start date, end date, time in days"
 for file in ./date/date.*; do
    START=$(cat $file | sed -n '/^20*/p' | sed 's/://' |  head -n1);
    END=$(cat $file | sed -n '/^20*/p' | sed 's/://' |  tail -n1);
    echo "$(basename $file | sed 's/date\.//'), $START, $END, $(dateutils.ddiff --format=%d $START $END)";
 done

Frequency of Subjects

The growth rate of the number of unique subjects got me thinking about the distribution of usage frequency. Given a subject, how many times does it occur in the index? It turns out that 6000 of the 9000 subjects occur exactly once. The most frequent subject is @22Oakmount, the address that I lived at during the height of the COVID lockdowns. The next highest ranking subjects are as follows:

 64 -- UTSC
 66 -- Rich Furman
 69 -- Sam Chapin
 70 -- Robert Young
 73 -- Meeting
 118 -- DELTA
 142 -- @TTC
 146 -- DIAGRAM
 211 -- @22Oakmount
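
Counts like these can be produced with the usual sort | uniq idiom; a sketch, assuming one subject reference per line in a file called subject.all (a name guessed from the indexer's statistics output):

```shell
# count how many times each subject line occurs, then list the
# most frequent subjects last
sort subject.all | uniq --count | sort --numeric-sort | tail
```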

Every Day of the Calendar is Covered

The following table displays how many journal entries there are for each calendar day of the year. This might be the only good use of mm-dd formats. An entry of the format "mm-dd N" means that the Index contains N entries for the calendar day with month mm and day dd. For example, "02-03 4" means that there are four entries for February 3rd.

01-01 5 | 02-01 5 | 03-02 3 | 04-02 5 | 05-02 6 | 06-02 3 | 07-03 2 | 08-03 6
01-02 5 | 02-02 3 | 03-03 6 | 04-03 2 | 05-03 5 | 06-03 6 | 07-04 4 | 08-04 3
01-03 4 | 02-03 4 | 03-04 5 | 04-04 8 | 05-04 4 | 06-04 6 | 07-05 6 | 08-05 3
01-04 5 | 02-04 7 | 03-05 5 | 04-05 6 | 05-05 3 | 06-05 5 | 07-06 5 | 08-06 4
01-05 5 | 02-05 5 | 03-06 6 | 04-06 4 | 05-06 4 | 06-06 5 | 07-07 7 | 08-07 5
01-06 3 | 02-06 5 | 03-07 7 | 04-07 6 | 05-07 6 | 06-07 4 | 07-08 4 | 08-08 5
01-07 6 | 02-07 4 | 03-08 8 | 04-08 1 | 05-08 4 | 06-08 8 | 07-09 4 | 08-09 4
01-08 4 | 02-08 7 | 03-09 3 | 04-08 1 | 05-09 5 | 06-09 4 | 07-10 8 | 08-10 2
01-09 5 | 02-09 6 | 03-10 5 | 04-09 4 | 05-10 6 | 06-10 5 | 07-11 4 | 08-11 2
01-10 3 | 02-10 5 | 03-11 3 | 04-10 3 | 05-11 4 | 06-11 6 | 07-12 7 | 08-12 3
01-11 3 | 02-11 5 | 03-12 4 | 04-11 2 | 05-12 5 | 06-12 5 | 07-13 3 | 08-13 5
01-12 2 | 02-12 4 | 03-13 4 | 04-12 6 | 05-13 4 | 06-13 3 | 07-14 4 | 08-14 5
01-13 3 | 02-13 5 | 03-14 3 | 04-13 3 | 05-14 5 | 06-14 5 | 07-15 4 | 08-15 5
01-14 3 | 02-14 4 | 03-15 5 | 04-14 6 | 05-15 5 | 06-15 4 | 07-16 3 | 08-16 7
01-15 5 | 02-15 5 | 03-16 6 | 04-15 5 | 05-16 7 | 06-16 6 | 07-17 3 | 08-17 5
01-16 5 | 02-16 1 | 03-17 7 | 04-16 5 | 05-17 6 | 06-17 5 | 07-18 4 | 08-18 7
01-17 5 | 02-16 2 | 03-18 6 | 04-17 6 | 05-18 4 | 06-18 6 | 07-19 2 | 08-19 5
01-18 6 | 02-16 5 | 03-19 3 | 04-18 6 | 05-19 4 | 06-19 3 | 07-20 5 | 08-20 5
01-19 4 | 02-17 7 | 03-20 6 | 04-19 7 | 05-20 2 | 06-20 8 | 07-21 4 | 08-21 5
01-20 7 | 02-18 8 | 03-21 4 | 04-20 6 | 05-21 7 | 06-21 5 | 07-22 3 | 08-22 6
01-21 6 | 02-19 5 | 03-22 4 | 04-21 3 | 05-22 4 | 06-22 3 | 07-23 5 | 08-23 5
01-22 2 | 02-20 4 | 03-23 4 | 04-22 4 | 05-23 4 | 06-23 3 | 07-24 6 | 08-24 4
01-23 4 | 02-21 5 | 03-24 3 | 04-23 8 | 05-24 4 | 06-24 6 | 07-25 5 | 08-25 3
01-24 8 | 02-22 7 | 03-25 6 | 04-24 7 | 05-25 4 | 06-25 2 | 07-26 7 | 08-26 4
01-25 7 | 02-23 6 | 03-26 5 | 04-25 9 | 05-26 9 | 06-26 5 | 07-27 5 | 08-27 6
01-26 4 | 02-24 7 | 03-27 5 | 04-26 5 | 05-27 7 | 06-27 4 | 07-28 4 | 08-28 5
01-27 2 | 02-25 1 | 03-28 3 | 04-27 6 | 05-28 8 | 06-28 1 | 07-29 2 | 08-29 3
01-28 3 | 02-26 3 | 03-29 7 | 04-28 3 | 05-29 8 | 06-29 5 | 07-30 7 | 08-30 4
01-29 4 | 02-27 4 | 03-30 3 | 04-29 3 | 05-30 3 | 06-30 2 | 07-31 3 | 08-31 3
01-30 7 | 02-28 6 | 03-31 5 | 04-30 5 | 05-31 7 | 07-01 4 | 08-01 6 | 09-01 4
01-31 6 | 03-01 5 | 04-01 4 | 05-01 6 | 06-01 4 | 07-02 1 | 08-02 4 | 09-02 4
09-03 2 | 09-18 3 | 10-03 7 | 10-18 5 | 11-02 9 | 11-17 8 | 12-02 5 | 12-17 6
09-04 6 | 09-19 1 | 10-04 4 | 10-19 5 | 11-03 7 | 11-18 2 | 12-03 5 | 12-18 7
09-05 4 | 09-20 5 | 10-05 6 | 10-20 7 | 11-04 5 | 11-19 6 | 12-04 6 | 12-19 4
09-06 6 | 09-21 6 | 10-06 4 | 10-21 3 | 11-05 4 | 11-20 4 | 12-05 4 | 12-20 7
09-07 7 | 09-22 5 | 10-07 3 | 10-22 7 | 11-06 6 | 11-21 6 | 12-06 7 | 12-21 4
09-08 7 | 09-23 6 | 10-08 4 | 10-23 6 | 11-07 5 | 11-22 4 | 12-07 4 | 12-22 5
09-09 6 | 09-24 6 | 10-09 4 | 10-24 5 | 11-08 4 | 11-23 6 | 12-08 5 | 12-23 7
09-10 4 | 09-25 5 | 10-10 2 | 10-25 6 | 11-09 6 | 11-24 6 | 12-09 5 | 12-24 6
09-11 6 | 09-26 5 | 10-11 4 | 10-26 4 | 11-10 8 | 11-25 5 | 12-10 4 | 12-25 3
09-12 5 | 09-27 6 | 10-12 6 | 10-27 7 | 11-11 8 | 11-26 7 | 12-11 4 | 12-26 3
09-13 4 | 09-28 5 | 10-13 4 | 10-28 3 | 11-12 3 | 11-27 5 | 12-12 6 | 12-27 6
09-14 5 | 09-29 6 | 10-14 4 | 10-29 5 | 11-13 1 | 11-28 5 | 12-13 3 | 12-28 7
09-15 3 | 09-30 4 | 10-15 6 | 10-30 6 | 11-14 7 | 11-29 5 | 12-14 7 | 12-29 7
09-16 4 | 10-01 8 | 10-16 6 | 10-31 3 | 11-15 9 | 11-30 7 | 12-15 3 | 12-30 2
09-17 3 | 10-02 6 | 10-17 4 | 11-01 6 | 11-16 6 | 12-01 7 | 12-16 2 | 12-31 1

The bash pipeline for generating this table

cat date.all | sed -n '/^20*/p' | tr --delete ':' | cut --delimiter=- --fields=2,3 | sort | uniq --count | cut --character=7- | cut -d ' ' --fields=1,2 | pr --columns=8 --omit-header --separator=" | "

Extremely detail-oriented people will notice that there are eight columns and there is an entry in every position. This is weird, because there are 365 days in a year. (There are 366 days if you count leap years.) Neither of these numbers is divisible by eight! It seems that a couple of "imaginary" dates snuck in to the record keeping, and so there are 368 entries in the table. If you hunt down the "imaginary" dates, please let me know.
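
If you'd like to hunt for them yourself, date(1) makes a decent validator; a sketch, assuming the mm-dd keys sit in the first column of a file called table.txt (a hypothetical name for the table above):

```shell
# print every mm-dd key that date(1) refuses to parse as a real
# calendar day in a leap year
cut --delimiter=' ' --fields=1 table.txt | while read -r mmdd; do
    date --date="2024-$mmdd" >/dev/null 2>&1 || echo "imaginary: $mmdd"
done
```

This catches impossible dates like 02-30, but duplicated keys would need a separate sort | uniq --repeated pass.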

On This Day

In my Journal Update from 2021-01-02, I wrote:

Full disclosure: The computer index is of questionable utility. I do not refer to things in the index very often in my day-to-day life. The most common use of the index is looking up all the references to someone who has recently died. I'm a member of a church with an aging congregation, and so it happens two or three times a year that I need to look someone up. Usually, the most active engagement with the index is during the winter holidays when I re-read the year's journals. That holiday tradition, in itself, justifies keeping an index.

Now that each day of the calendar is covered, I can ask my computer to look back and form an "on this day" view of the index. As of March 2024, I've written this functionality and found it very enlightening. It is striking to see how much I've grown and changed through time. I think that having a way to look back, on a day-by-day basis, to the index will dramatically change how I use it. There is a curious analogy to the Sapir-Whorf hypothesis here: the availability of a tool changes the sorts of things that we find salient.

         Journal: Re-reading old volumes (Vol 35 + 36) to get a sense of how we felt pre/post-Mira
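
The view itself can be as simple as a grep; a sketch, assuming a concatenated date.all file whose lines begin with an ISO date like 2021-12-20:

```shell
# show every index line whose month and day match today, across
# all years of the journal
grep --no-filename -- "^....-$(date +%m-%d)" date.all
```
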
short shell scripts

One of the joys of using Linux is writing really short shell scripts that do just one thing. They're so short that they barely count as "programming". I think that people are hesitant to share them because they're often so terse and fragile. However, I always love seeing when other people share their little hacks. One nice reference for this sort of stuff is datagubbe's page best of .bashrc. In hopes of encouraging more sharing of this sort of stuff, I'll write up a couple of them.

Here are some of my hacks, and how they came to be. For the past few months, I have had to do daily physio exercises. They're boring and tedious, and each of them needs to be done for a certain duration each day. To help with the timing, I use ffplay to ring a bell for me. It took a little while to sort out the options to run it optimally, but I wound up with the following "physio timer".

sleep 120; 
ffplay -nodisp -autoexit bell.wav >/dev/null 2>&1; 

Recently, I found myself with a handful of papers to grade quickly. I wanted to spend at most five minutes per paper, so that I could get them done in time. And so, I asked my computer: "Please ring a bell every five minutes, and tell me to move on." Adding a single loop to the "physio timer" made a "paper grading timer".

while true; do
        ffplay -nodisp -autoexit bell.wav >/dev/null 2>&1;
        echo "Keep moving!";
        sleep 300;
done

Certainly, someone somewhere has written something really nice for this kind of task. But, I wrote something quick and dirty and that put a smile on my face.

Here is another one. Recently, I came across a hacker through the merveilles webring who listed their e-mail as a base64 encoded string together with a one-liner to decode it. (I'm really sorry, but I've forgotten their name and e-mail. If anyone knows this person, or can track them down, please let me know. I could not find them through Lieu.) I thought that was a really nice idea. It was like they were saying "If you're willing to run this one-liner on your computer, then I trust you and we should chat." This got me thinking how I would produce my own such decode-this-on-your-machine one-liner. Want to encode some text in base64 and then give people a Unix one-liner to decode it? Look no further!

echo "echo \"$(echo "INSERT YOUR TEXT HERE" | base64)\" | base64 --decode"  
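
Running the encoder on a short string shows the shape of what you'd publish (echo's trailing newline gets encoded too, which is harmless):

```shell
echo "echo \"$(echo "hello" | base64)\" | base64 --decode"
# prints: echo "aGVsbG8K" | base64 --decode
# anyone who runs that printed line gets "hello" back
```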

So, there is an encoder that produces decoders. There is a Lewis Carroll / Alice in Wonderland vibe about this hack that I enjoy. Sometimes, these little shell scripts are so small that we just make aliases in .bashrc. There is a humour to this last one. Every time I use it, I smile.

alias goodnight="sudo shutdown now"

And so, with this goodnight, I end this little article. If you've got any hacks that you enjoy using, please share them. A great way to contact me is via my e-mail. Alternatively, you could write up your little scripting hacks for the zine. I would love to read your article.

Happy Hacking!

writing on the alphasmart 3000

A close-up of an AlphaSmart 3000 displaying the words: "The AlphaSmart 3000 is a single-purpose word processing computer from the early 2000s. In this note, I'll describe how I use my AlphaSmart to write code"

The AlphaSmart 3000 is a single-purpose word processing computer from the early 2000s. In this note, I'll describe how I use my AlphaSmart to write code effectively, and how I use my headless server to upload content from the AlphaSmart to

tl;dr: The AlphaSmart 3000 is neat. If you like retro hardware and writing, then they're well worth the ~$50 CAD it costs to buy one off eBay. They're surprisingly versatile and lots of fun.

The AlphaSmart 3000 has a four row dot matrix LCD display and 200kb of memory spread across eight files. It takes three AA batteries, which I'm told last about four hundred hours to a charge. (I've never had to replace them in the three years that I've used my AlphaSmart.) The way that the AlphaSmart communicates with a computer is by emulating a USB keyboard. One plugs in the AlphaSmart, hits a Send button, and it manually "types" the contents of a file in to the computer as though it were a keyboard. This functionality lends itself to a nice hack that I'll describe below.

Writing Effectively

There are three hacks that I've found helpful on the AlphaSmart: keeping a table of contents, copying common code blocks, and manually generating raw TTY input. The AlphaSmart has eight "files" for storing text. One can copy and paste between the files freely. The search functionality searches all the files in numerical order.

I noticed that when I use the AlphaSmart after a long pause, I tend to forget which files had which projects or content in them. This led me to keep a "Table of Contents" in the first file. Whenever I turn on the AlphaSmart, I switch to File 1 and look at where everything is.

File 1 is also the first file to get searched when looking for text. This means that I keep all my re-usable code snippets in there. I tend to write a lot of lecture notes using LaTeX for my work. This requires lots of repetitive code blocks to make frames.

 %% QFRAME %%
 \begin{frame}{TITLE} % (fold)

 \end{frame} % (end)

I store these snippets of code in File 1, and access them using the search function. If I need to add a "question frame" to my lecture notes, I can search for QFRAME and pull up the required code in a few seconds. Some other things that I store in File 1 include: headers for my Hugo site, and a bit of raw TTY input to upload the contents of a file to

Transferring Content to

It is nice to write offline on the AlphaSmart 3000, but we have all come to expect our devices to have the ability to upload written material to the cloud. I usually write on the AlphaSmart in the basement, which happens to have a headless server in it. One day, it occurred to me that I could use the headless server to upload material from the AlphaSmart to

Set up the Headless Computer to Start without an X Server

This is the setup that I used on Ubuntu to make my headless server boot to login prompt. Edit /etc/default/grub with your favourite editor, e.g. nano:

sudo nano /etc/default/grub

Find this line:


Change it to:


Update GRUB:

sudo update-grub

Send The Text to the Headless Computer

In File 1, I have the following bunch of raw TTY input. It creates a file, opens it in ed, and dumps a bunch of raw text, writes the file, quits, and uploads it to The LOCAL-USERNAME is my username on my headless server, and USERNAME is my username on (In my case, these happen to be identical.)

ALPHASMART="alphasmart-$(date --iso=second).txt"
This is some text from the AlphaSmart
You can include all sorts of stuff here.
Except, of course, a line containing a single period.
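
Away from the AlphaSmart, you can simulate the same keystrokes by handing ed a here-doc; a sketch of the idea (a enters append mode, a lone . ends it, w writes, q quits):

```shell
ALPHASMART="alphasmart-$(date --iso=second).txt"
touch "$ALPHASMART"     # make sure the file exists before ed opens it
ed -s "$ALPHASMART" <<'EOF'
a
This is some text from the AlphaSmart
You can include all sorts of stuff here.
.
w
q
EOF
cat "$ALPHASMART"
```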

One could really go nuts with this idea. I've thought of adding bells and whistles to notify me that everything was a success. If you play with these hacks, or even if you don't, please let me know! Thanks for reading.


An AlphaSmart 3000 sitting on a standing desk in a boiler room

Journal Update
2022-01-02-0 at 15h

This note is an update on the note that I wrote back on 2016-08-22.

In 2013, I started a system for augmenting my paper journal with a computer-searchable index. In this note, I'll describe how my system works, what tools I use to interact with it, some of the unexpected emergent complexity of the indexing system, and what I'd like to do with it next.

Full disclosure: The computer index is of questionable utility. I do not refer to things in the index very often in my day-to-day life. The most common use of the index is looking up all the references to someone who has recently died. I'm a member of a church with an aging congregation, and so it happens two or three times a year that I need to look someone up. Usually, the most active engagement with the index is during the winter holidays when I re-read the year's journals. That holiday tradition, in itself, justifies keeping an index.

I started to keep a running index of my journal in graduate school. At the time, I was interested in board games and admired the work of Sid Sackson. In a magazine article about Sid, I found out that he kept a game development journal where he logged all the game-related activity in his life. Anything that seemed important was written in upper-case letters. At the end of each year, he manually compiled an index to that year's volume of the journal and would add a dot beside each upper-case entry in the journal that made it into the index. Sid Sackson's diary has been scanned and put online by the Museum of Play. It is definitely worth checking out!

The Indexer

My method of indexing is similar to Sid Sackson's method, except that I use a computer to handle the compilation. The indexer is about fifty lines of Perl code. It takes a directory full of plaintext files, one per volume, and returns an index of the whole journal. For each volume, I make a plaintext file (called a date file) which lists the subjects mentioned on each date using the following bare-bones format:

        Shoulder problem
        Fisherman's Friend
        Dagmar Rajagopal
        Dagmar Rajagopal's memorial Meeting

The indexer then converts this information in to an index which lists where each subject appears in the journal. For example, the entry for @HartHouse reads:

    @HartHouse - 2017-09-21, 2017-09-27, 2017-10-02, 2017-10-04, 2017-10-13,
    2017-10-18, 2017-10-30, 2018-02-09, 2018-04-04, 2018-04-06, 2018-09-04

The indexer also prints some statistics about the index:

 Indexing all volumes.
 number of volumes
 days with entries
 number of references
 17021 ./date.all
 distinct subjects referenced
 7237 ./subject.all

Emergent Structure

The plaintext format of the index has almost no structure. It does not contain any kind of markup to denote what each line means. This is a weakness and a strength of the system. The indexer doesn't know anything about what each line means, but I am free to create any kind of structures that I want with plaintext.

The very first date file begins:

        Algebraic Geometry
        Bill Taber
        Flat Surfaces
        PGG Seminar
        Marcin Kotowski
        Matt Sourisseau
        Design a computer from scratch
        Sam Chapin
        Tyler Holden

These entries contain a mix of things: some math topics, a religion, an author, some more math topics, some friends, a project name, and some more friends. At this early stage, the index did not have any clear guidelines for formatting subjects, or what to include and exclude.

        MAT 246
        Tyler Holden
        Sam Chapin
        Mathematics in Canada

About two years later, on 2015-11-06, the first entry with a timestamp appears. The timestamps are written in the date files as [HH:MM]. The left bracket sets them apart in the index, and they all sort to a contiguous block of entries.

        Elizabeth Block
        Sylvia Grady
        Mark Ebden
        Camp NeeKauNis

On 2016-06-04, another structure emerged. Entries after this point typically include both an [HH:MM] timestamp and a @Location tag. Each entry in the (physical) journal typically begins with a line like:

 @HeronPark 2021 XII 20 II [21:35]

This is very far from the One True Date Format (ISO 8601). Allow me to explain. This date format is a mash-up of idiosyncrasies. I learned about representing months using Roman numerals from Marcin and Michał Kotowski when they participated in the Probability, Geometry, and Groups Seminar. According to Wikipedia, this usage of Roman numerals for months is still common in Poland. Prior to learning about this convention, I wrote dates as 2021/12/20, which seems visually busy and homogeneous. The Roman numerals break things up a bit. Also, it seems fitting to use Roman numerals for months.

The second Roman numeral in the date stamp is the day of the week. In the Quaker calendar, Sunday is the first day of the week. Sometimes, I have heard contemporary Quakers using ordinals for day names, but it is quite rare to hear anything other than "First Day".

 DIAGRAM - 2015-04-13, 2015-05-29, ...,  2020-12-28, 2021-02-02

There are some indexing conventions that I adopted early on and that I don't like very much. The worst is probably the convention for writing DIAGRAM when a diagram appears in an entry. There are ~150 entries with diagrams, and I have no idea what any of the diagrams represent. Someday, I might go back and track them all down, but it will be a lot of work. (At least I know where to look!)

The upper case convention for tagging information is not great. The intent was to have a mechanism for indexing structural things such as DIAGRAM, LIST, CALCULATION. Each of these labels should have some more description added to it. The issue is that they don't sort to any particular place in the final index and so they are hard to track down. One needs to know, in advance, all the possible upper case subjects to find anything.

Brokhos - 2018-07-26
Brokhos (SF) - 2019-06-06
Brokhos (sf) - 2018-06-24, 2018-02-03, 2018-03-04, 2018-07-31

There are also things where the correct convention is slowly emerging. The three subjects above are supposed to refer to the string figure called Brokhos. I'm not sure if appending things like (sf) or (SF) is helpful. It is easy enough to look for all the string figures in the index by looking for (sf) or (SF). Perhaps a more useful convention would be "String Figure: Brokhos". Whenever I have looked up a string figure from the index, I just used its name and so have not needed the tag in brackets.

This project continues to grow and evolve. Some conventions have stabilized and are very helpful. Every year, new conventions crop up. I have not (yet) gone back and revised the index for consistency, so there is a hodge-podge of competing conventions. This is not a project for publication, but an on-going exploration of writing and journaling.

Managing these files with vim

The workflow surrounding the index is vim based. I run vim ~/Work/Journal/date/date.* to open up all the date files in different buffers. This makes all the material in the index available when I want to start indexing a new volume.

The format for date files essentially uses one line per subject. Vim supports whole line completion using CTRL-X CTRL-L, which pulls possible line completions from all buffers. (For details, see: :help compl-whole-line.) This makes completing complicated names like "Ivan Khatchatourian" straightforward.

As I'm writing a date file, I keep the current date in the register "d (for date). While putting together the date file for the volume containing December 2021, I'll keep 2021-12-01: in the "d register.

So, the vim workflow looks like:

  • Paste in a date.
  • Modify it appropriately.
  • Use whole line completion to add new subjects.
  • Repeat.

Pen and Paper

This write-up wouldn't be complete without saying a little bit about the physical side of the journals too. I've used hard bound 4x6" sketchbooks since the index started. They are absolutely indestructible, neither too big nor too small, and quite cheap. Another notable feature is that you can find them at any art supply shop.

I write with a Kaweco Sport Brass fountain pen using J. Herbin Lierre Sauvage ink. The brass pen has a nice weight to it. It's a pen that's hard to misplace and no one has walked away with it. I use it for all my writing because ball point pens severely aggravate my tennis elbow.

Originally, I got all my books from Toose Art Supplies because they were across the street from the math department. Now, I tend to get things from Midoco.

Closing Note

    "Your diary is an on-going and growing project. You are free to alter, change and experiment with it as you wish. Whoever receives it will make of it whatever they make of it. Strike out and explore. Or, dream the same old dreams. This is your place; enjoy it!"

Contact Me About This

There do not seem to be many people doing this sort of thing. The only examples that I know of are Soren Bjornstad and Dave Gauer. If you're using computers to index your personal journal, or are interested in doing so, I would love to get in contact with you.


I have kept a hand written journal since I was a kid. The old lady we went to the theater with told me to keep a journal. Her advice stuck with me. That moment, in Aunt Kay's hallway, was a life changing experience for me.

Today was another milestone in journalling for me. Today, I carefully re-read the last several volumes of my handwritten journal looking for underlined passages which represent subject headings. These underlined key words serve to provide a series of "hyperlinks" within the hand written journal. They make a paper book about as useful as an online tool. This is the solution that I've adopted to the computer geek's dilemma:

Should I keep a blog or a hand written journal?

My answer is: Keep a hand written journal with a thorough index. You can consult your notes using the index, and this will allow you to "grep dead trees". Most journal entries that I write are personal, semi-private, matters. Writing with a pen on paper allows me to "keep part of my life offline".

Hand writing notes allows a flexibility of description and illustration that I find impossible to get with a computer. It is too difficult, for me at least, to make drawings or type math quickly on computers. The interface of the computer gets in the way. To put it plainly -- writing on paper is relaxing compared to writing on a computer.

Computers were made for tabulating indices.

Paper, pen, and notebook work well together.

Pen, paper, notebook, and computer generated index work perfectly together.

Today, I hunted down the underlined subject keywords and carefully stowed them away in plain text files. Once everything was typed in, I had the following epic computing experience:

   #look at the third volume
   $ cat ./date/date.03
        Morse code
        Sponge Problem

   #look at the local file structure
   $ tree ./
       ├── date
       │   ├── date.01
       │   ├── date.02
       │   ├── date.03
       │   ├── date.04
       │   ├── date.05
       │   ├── date.06
       │   ├── date.07
       │   ├── date.08
       │   ├── date.09
       │   ├── date.10
       │   ├── date.11
       │   └── date.12
       ├── date.all
       ├── subject.all

      1 directory, 16 files

   #run the indexer
    Indexing all volumes.
    number of volumes
    days with entries
    number of references
    5050 ./date.all
    distinct subjects referenced
    2267 ./subject.all

"When I was a kid ..."

This current software setup is a long way from the early journals that I wrote in high school. Roy MacDonald really helped me get started in journaling. He raised, with his own life, journaling to the level of a vocation. He was called to journal. Allow me to show you a broadsheet poster that Roy wrote:

 Journals Are ...

... an important way of confronting the confusions of our world and the complexities of life. They are an assertion of our personal worth and individuality.

... open and available to everyone who can write a few words on paper and to everyone who wishes to consider this experience of living.

... often written in the heat of the moment, at the scene, and without reflection. They are the record of immediate experience and original feeling. 

... natural resources which writers may store away for future use in prose or poetry.

... recordings of developing concepts, attitudes, ideas. They help to review our own progressions, changes, and patterns of behaviour.

... a source of stimulation for writers and are helpful in overcoming writing blocks. Often the basic recording of specific time and place details can generate other thoughts and recollections which encourage writing.

... useful in reviewing and reinforcing things we have learned and wish to remember.

... helpful in keeping us in touch with our ancestors and in projecting something of ourselves onward to future generations.

... miscellanies of things we find meaningful: a series of lines, verses, and quotations encountered in our daily life.

... private worlds and secret places of our own where we are free to be exactly who we are and to say exactly what we want to say.

 Roy N. MacDonald, 1981 

 To Parker, in friendship Roy, London Oct 28, 2010
 I wish you good writing and a wonderful life.

I agree with everything Roy wrote, and more. He was the model journaller for me. I think that the importance of a private journal for research was first taught to me by Roy.

On the other hand, Derek Krickhan models perfectly the private computer journaller. He has a 'fancy typewriter' that he writes all his entries into. I warn him, every chance I get, to back them up. No one knows if they will ever come out of the fancy typewriter.

Heru Sharpe got me started on rather "experimental" journalling. He is a hardcore Kabbalist, and takes notes about all sorts of things. I'm sure that there is a lot of fascinating poetry, reflection, and alchemy in his journal. He got me writing about my own "investigations".

My interests in recreational reading, computer programming, naturalism, indoor gardening, astronomy, foreign languages, and low complexity art all show up under various guises in my journal. There are a lot of low level tricks built into how I mark my entries. By selecting subject keyphrases carefully, one can emulate tags, categories, and timing. Using a pen one can handle multiple written languages, various fonts, math, figures, etc. You can glue in interesting bits of paper.

The sky is the limit with hand written, well structured, notebooks.


Today I put some photos of the setup on Imgur and posted about it on Reddit.

The photos are here for local reference.

Screencasting With Gimp
2017-05-30-2 at 19h

The current setup for screencasting with GIMP:

  • Open up GIMP
  • Open up gtk-recordmydesktop
  • Wacom Tablet (Intuous Pro 5 Small)


  • GIMP cannot cycle through pen colours: use the plugin (local mirror)

To install, place it in /usr/lib/gimp/2.0/plug-ins/ and make it executable. Once it is loaded by GIMP, set a key-binding using Edit > Keyboard Shortcuts.

Current Key-Bindings

Based on a great post by Bart Van Audenhove, I wrote a script to configure my Wacom tablet using xsetwacom. The script sets up the Wacom tablet to interact with X11.

  • ctrl-` : cycle through foreground colours

  • Wacom Left #1 (Top-top) -- zoom in

  • Wacom Left #3 (Top-bottom) -- zoom out

  • Wacom Left #4 (Bottom-top) -- Colour cycle

  • Wacom Left #5 (Bottom-mid) -- Undo
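The script's bindings boil down to calls like the sketch below. Everything here is illustrative: the pad's device name and the key names are my assumptions, and real values come from xsetwacom --list devices. The bind helper only echoes the command so it can be inspected before being applied.

```shell
# Illustrative sketch of the Wacom button-mapping script; device name and
# key names are assumptions -- check `xsetwacom --list devices` for yours.
PAD="Wacom Intuos5 S Pad pad"

# bind prints the xsetwacom command it would run; drop `echo` to apply it
bind() {
    echo xsetwacom --set "$PAD" Button "$1" "key $2"
}

bind 1 plus    # Wacom Left #1 (Top-top)    -- zoom in
bind 3 minus   # Wacom Left #3 (Top-bottom) -- zoom out
```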


stow + git = version controlled dot files

Today I set up my system to use version controlled dotfiles via stow and git.

What are 'dotfiles'? They are configuration files that allow one to customize the behaviour of programs on Linux. They specify key-bindings, colour themes, etc. in plain text files. Often they are very personal. It is a little dizzying to use a familiar piece of software without the usual dotfiles in place.

What is GNU stow? It is a symlink manager that allows you to deploy and remove collections of symlinks conveniently. One creates several "packages" in directories, and then stow manages the task of creating or removing symlinks to the various files in these packages.

What is git? Git is Linus Torvalds's other wunderkind. It is a version control system that tracks how files have been modified. Presently, it is the industry standard.

Example stow Usage

stow manages packages of files in the following way: A package is just a directory of files. When you stow a package, it will create symlinks to all the files in the package together with the appropriate file hierarchy.

For example, suppose you have the following file structure:

 dotfiles
 ├── foo
 │   ├── .foorc
 │   └── .config
 │       └── foo-config
 └── bar
     ├── .bar.ini
     └── .config
         └── bar
             ├── bar-config
             └── bar.theme

That is, you've got a dotfiles directory containing two packages foo and bar. Notice that both packages contain directories called .config. These directories allow you to separate out the parts of the package foo that go into ~/.config from the parts of bar that go into ~/.config.

If you enter the directory ~/dotfiles/ and run stow foo it will create symlinks at ~/.foorc and ~/.config/foo-config which point to the corresponding files in your dotfiles directory.

If you enter the directory ~/dotfiles/ and run stow bar it will create symlinks at ~/.bar.ini and ~/.config/bar/bar-config and ~/.config/bar/bar.theme which point to the corresponding files in your dotfiles directory.

(By default, stow installs the package into the directory containing the current working directory, i.e. its parent. One can change this using stow -t TARGET.)

Managing Dotfiles with Stow and Git

I followed the advice these people put up:

To get started:

  • Make a ~/dotfiles/ directory.
  • Initialize a git repo in that directory using git init.
  • Make a sub-directory for each package of configuration files you want to track. E.g: vim, screen, mc.
  • Copy the configuration files you want to track in to each ~/dotfiles/package/ directory.
  • Run stow --adopt package to adopt current dotfiles for each package.
  • Run git add . to add all the new files to the git repo.
  • Run git commit to commit the new files.
  • Run stow package to stow each package.

To maintain your repo:

  • Every time you change a dotfile, update the git repo using git add .changed-dotfile.
  • Commit when you feel like it.

Other useful things:

  • stow -D package removes the symlinks to a package
  • stow -R package reloads a package by removing and then re-stowing it.

ssh and vim

Two things whose dotfiles I'd like to track are ssh and vim. Unfortunately, ~/.ssh/ and ~/.vim/ both contain sensitive data. One has my private keys, and the other has temporary files related to potentially sensitive documents.

Thus, I only track the relevant non-sensitive data in ~/dotfiles/.

Here is the full sequence for a hypothetical package foo:

 $ mkdir ~/dotfiles/

 # setup the package of dotfiles for foo
 $ mkdir ~/dotfiles/foo/
 $ cp ~/.foorc ~/dotfiles/foo/
 $ mkdir ~/dotfiles/foo/.config/
 $ cp ~/.config/foo-config ~/dotfiles/foo/.config/foo-config

 # adopt the currently existing files for the package foo
 $ cd ~/dotfiles/
 $ stow --adopt foo

 # create symlinks for the package foo
 $ stow foo
2016-10-22-6 at 16h


Generate a quiz for each TA


Description of the course and quiz to be generated
    A31,6,Quiz \#6 on Limits

    The first line contains, in CSV: the course code, the quiz number, and the quiz title.

    All other lines contain text which will be used to fill TUTORIAL

A folder of files:


A bunch of quizzes


Very simple example of this set up is available here:

$ cat ./

A31,6,Quiz \#6 on Limits

$ cat ./template-head.tex

\title{\QuizBotCourse -- Quiz \QuizBotNumber -- \QuizBotTitle}



$ cat ./template-foot.tex

This is never printed


generating : A31-Quiz-6-TUT001-Alice
generating : A31-Quiz-6-TUT002-Bob

$ cat ./quizzes/A31-Quiz-6-TUT001-Alice.tex

\newcommand{\QuizBotTitle}{Quiz \#6 on Limits}
\title{\QuizBotCourse -- Quiz \QuizBotNumber -- \QuizBotTitle}


%% begin:  ./question-1/variant-1.tex
What is $\pi$
%% end:  ./question-1/variant-1.tex 

%% begin:  ./question-2/variant-a.tex
\[ \cos(a+b) \]
%% end:  ./question-2/variant-a.tex 

%% begin:  ./question-3/bar.tex
%% end:  ./question-3/bar.tex 

This is never printed
office cam

This is the current implementation. It takes a photo of the whiteboard, uploads it, and then processes it on cloudbox. The processing is pretty minor: it sharpens the image and makes a thumbnail.


 # assumes $localStore, $remoteStore, and $server are defined elsewhere
 fileName=`date +%F+%T`.jpeg

 streamer -o $localStore/$fileName -j 100 -s 1280x720
 scp $localStore/$fileName $server:$remoteStore/$fileName

 ssh $server "mogrify -verbose -sharpen 0x1.5 $remoteStore/$fileName"
 ssh $server "mogrify -verbose -thumbnail 127x72 -path $remoteStore/thumbs/ $remoteStore/$fileName"

It would be nice to add some more functionality:

  • An argument for comments about the shot
  • An argument for naming the shot
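A sketch of how the naming argument might work, folded into the date-based filename the script already generates (the shot_name helper is hypothetical, not part of the current script):

```shell
# sketch: optional shot label folded into the existing date-based filename
# (shot_name is a hypothetical helper, not part of the current script)
shot_name() {
    # $1 is an optional label; ${1:+-$1} appends "-label" only when given
    printf '%s%s.jpeg' "$(date +%F+%T)" "${1:+-$1}"
}

# shot_name whiteboard  ->  2021-12-20+21:35:12-whiteboard.jpeg (for example)
```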

The photos are available here:



meta-notebook manager

  • definitions

    • a page is a pdf of a single page of paper (US-letter)
    • a commentary is a set of human readable plaintext files
    • a note is a directory containing pages and a commentary
  • what is a commentary?

    • the human readable part of the note
    • it at least has a title and some kind of a type: is this note a chapter? a course?
    • some tags
    • a bunch of tiny text files
      • project details
      • progress report stuff
      • code
      • details about pages
      • it might even have some latex to compile
      • markdown blog entry / discussion
  • to assemble a note (e.g. foo) do several things:

    • assemble the commentary
      • if there is no commentary: make a minimal one with date of assembly
      • if there is a commentary:
        • log changes to commentary using git
        • put together the html version (use bake)
    • assemble the pages in pdf format
      • create a pdf of all the pages: foo.pdf
      • create a compressed pdf of all the pages: foo-tiny.pdf
    • assemble a simple web gallery:
      • create png versions of the pages
      • create png-thumbnail versions of the pages
      • create an html gallery of the png images: include the commentary
  • a sub-note is a directory inside a note

    • RECURSE!
    • the commentary of a note can describe how to assemble its sub-notes
    • thread-view of notes

whiteboard photos

  • single press button which takes a photo of whiteboard
  • puts the photo in a gallery together with minimal commentary (date)
  • the commentary allows photos to be tagged
  • name files (yyyy-mm-dd-N.png ?)
  • generate a very simple "permalink" to photo (md5 the photo or passphrase style?)
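The md5 flavour of that permalink idea is nearly a one-liner; truncating to eight hex characters is my assumption about what is unique enough at personal-gallery scale:

```shell
# sketch: a short photo "permalink" slug from the file's md5
# (truncating to 8 hex chars is an assumption, fine at personal scale)
slug() {
    md5sum "$1" | cut -c1-8
}
```

The slug stays stable as long as the photo's bytes don't change, which is exactly what a permalink wants.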

potential alternative use: simple use document scanner, book cataloguer

timelapse garden

  • it suffices to consider a single plant
  • automagically photograph a plant using a webcam every N seconds (timelapse)
  • allow for special frames or snapshots which illustrate something
  • stores the photos in a reasonable place related to the plant
  • name files (yyyy-mm-dd-hh:mm:ss.png -- O Time Thy Pyramids!)
  • allow for a commentary about the plant

  • to assemble a plant timelapse:

    • make a variety of scales of timelapse: 100x 10x 1x
    • make several different formats
    • generate a simple web page for the plant
    • specific hours of activity?
  • potential alternative use: weather station

poisson pings (time usage monitor)

  • pings according to a low intensity markov process
  • audible/visible chime
  • asks for tags describing current mental head space
  • what tasks are being done in the office, guests, students, etc
  • timelapse photo with face camera
  • timelapse photo with window manager screenshot
  • may give pleasant fortune?
  • specific hours of activity?
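The "low intensity" pinger reduces to sampling exponential inter-arrival times, which is what makes the pings a Poisson process. A sketch, assuming awk's rand() is random enough for a chime (MEAN and the loop body are illustrative):

```shell
# sketch: exponential inter-arrival time (in seconds) for Poisson pings;
# mean seconds between pings is passed as $1, awk's rand() assumed OK
delay() {
    awk -v mean="$1" 'BEGIN { srand(); print int(-mean * log(1 - rand())) }'
}

# the main loop would be something like:
#   while true; do sleep "$(delay 3600)"; chime-and-ask-for-tags; done
```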

noise generator


  • lending library
  • real weather station
  • morse code whistling droid

    Perhaps plants, courses, articles, studies, etc. are all notes. A meta-notebook note has unique plaintext named images with individual commentaries, and a comment on the note as a whole.

Computers and Classrooms
2016-08-17-3 at 01h
  • Ursula Franklin

    • Holistic/Prescriptive
    • The Sense of the Classroom
    • Handbooks and Textbooks
    • Computers in Classrooms
    • Classrooms in Computers
  • Effective Scales:

    • 1 -- Tutoring
    • 2 -- Conversation
    • 15 -- Small Class
    • 100 -- Class
    • 100,000,000 -- Proper Class
  • MAT 237

    • Cryptographic game-ification of "content delivery"
    • Large and provably true promises to students (50% Big List)
    • Slides, Problems, Slides
    • Statistics of responses (Shannon entropy)
    • The Good:
      • The asymptotic analysis of information transfer argument
        • Chalk-and-Talk: one-to-n communication: O(n) information
        • Multiple-Choice-Split-and-Chat: (1/2 n)(1/2 n) = O(n^2) information
      • Core group of students who accepted the method.
      • Students worked very hard on the Big List.
      • Great office hours.
      • The prep work is much more fun.
    • The Bad:
      • Voter apathy.
      • Please vote. Voter apathy.
      • Students give up on parts of the gamification they do not enjoy.
      • Voter apathy. Do -- um -- something please.
      • Lectures are even more boring to deliver.
      • Epsilon attendance in morning lectures.
      • Students attempted, and failed, to meta-game the Big List.
    • The Ugly:
      • We pass over the Ugly.
    • Room for Improvement:
      • Enforced voting.
      • Enforced group discussion and completion of worksheets.
      • Statistics of responses (flashcard scheduling, Hermann Ebbinghaus 1885)
        • ``We need more data!''
        • ``Wir müssen wissen, wir werden wissen!'' ("We must know, we will know!") -- Hilbert
meta bake

I've been using bake to maintain this collection of notes.

To make things a little bit better, I've written a little script that I call meta-bake to bake all the subdirectories of my home folder that should be baked. This means that every instance folder in my notes directory gets hit.

This might be useful for other people using multiple bake blogs on

find . -name 'bakefile' -execdir bake {} +
find . -name 'bakefile' -execdir git commit -m meta-bake {} +
Plant Cam

I have a webcam that I use to make timelapse videos of plants growing. There is also a lamp on a timer to keep the plant well lit through the night.

This is a local link to the cam.


  1. RaspberryPi (Thanks, Nick!)
  2. Lamp and timer
  3. Microsoft LifeCam HD-3000 (Res: 1280 x 720)

The first shot (2016-04-22):

The most recent shot (2016-07-01):

The data dump from the recent look at the hard drive.


 pgadey@raspberrypi /media/backup/webcam $ time du -hc | tail
 27G     .
 27G     total

 real    44m8.222s
 user    0m13.440s
 sys     1m13.520s
 pgadey@raspberrypi /media/backup/webcam $ ls | tail

Backup System
2016-04-22-5 at 23h

I am trying to set up a home backup system that I like. The goal is to have a place for storing my pdf library and pictures safely. Additionally, it would be nice to have music and videos conveniently accessible.

Presently, the plan is to have a central repository of stuff on an external hard drive attached to a RPi. I'm going to use sshfs to connect it all up.

Backup stuff:

  1. RaspberryPi (Thanks, Nick!)
  2. Seagate 1.5 TB Expansion Portable (Model: SRD0NF1)
  3. Kingston 16Gb DataTraveler SE9
2016-02-10-3 at 23h

Automagic uploading with rsync and sshfs

This page documents the brief script that I use for working with various remote servers using sshfs. The magic of sshfs makes working with remote file systems feel exactly like working with the local file system. For instance, it makes working with and my server, cloudbox, almost effortless.

The script below assumes the following set up: A local directory ~/sshfs/ with one folder per remote location, named after the host alias for that remote location as specified by ~/.ssh/config. My ~/sshfs/ directory looks like this:


and my ~/.ssh/config looks like:

    Host cloudbox

Furthermore, it assumes that every remote location has the same setup: one has the same username pgadey on each server and content is kept in remote:/home/pgadey/. Here is the script that I use to push new material to these hosts:


    # use $sudo umount ~/sshfs/*
    # to unmount everything

    LIST=" cloudbox"

    for d in $LIST; do
        read -p "Do you want to mount $d (Y/[N])?" answer
        case $answer in
        [Yy]* ) sshfs $d:/home/pgadey/ /home/pgadey/sshfs/$d/;;
        * ) echo "Not mounting $d.";;
        esac
    done
Textual Machines
2016-02-10-3 at 23h

Computers are the machines which manipulate text. Other machines might manipulate the physical world for humans' benefit, but computers manipulate symbols for our benefit. Most people fault computers for not being 'fast' or 'smart', but computers only seem so because we've asked them to attempt the impossible: we wish them to simulate reality in a way that is pleasing to us.

If we lowered the bar on what we'll accept for a satisfying computer experience, then we'd all be rich beyond measure in terms of computing resources. As a means of transmitting, displaying, and re-arranging bits we know for a certainty that computers are great. We can have a great deal of fun with a simple network of tiny computers. The trick is to stick to what textual machines are good at doing for us. So, please remember that your computer is really quite good; you're just asking it to do the impossible.

2016-02-10-3 at 23h

Some tips on tools for communication / socializing

Learn to use screen.

screen lets you use multiple terminals through one ssh session; screen is very handy for multi-tasking. The most basic functionality is as follows: Run screen, then use the following key combinations. ctrl-a x means press ctrl and a simultaneously, then release and type x.

  1. ctrl-a c will open a new terminal.
  2. ctrl-a " will list all open terminals.
  3. ctrl-a A will prompt to rename the current terminal.
  4. ctrl-a K will prompt to kill an active terminal.
  5. ctrl-a ? will bring up a useful cheat sheet.

As usual, check out man screen for all the details. screen is very helpful for keeping documentation up.


  1. In a new terminal, run watch who to see if anyone is currently on the server. This information will update every two seconds. Each line lists a person who is logged on, what terminal they are connected to, and when they logged on.
  2. To check how long someone has been idle, try finger USERNAME. This will tell you how long their terminal has been idle for.
  3. If someone is online, you can try to communicate with them through write. Try write USERNAME, type out your message, and then finish it by pressing ctrl-d (ctrl-d is interpreted as end of file). If they don't respond you can try write USERNAME TTY where the username and tty are taken from the output of who.
  4. To send all currently logged on users a message, type wall and proceed as with write. Be careful, since this is pretty noisy. Use sparingly.
  5. If you want to disable / control how you get the contents of write and wall commands read man mesg.

Listing info about yourself

  1. To find out about another user, type finger USERNAME.
  2. The last part of the output of finger is the contents of the user's .plan file.

    The .plan file is a free form text document. You can use it as a place to say a bit about yourself, your plans on, or anything else that you feel like. Go nuts.

  3. To edit your plan file, type nano ~/.plan.

William Shotts Quote
2016-02-10-3 at 23h
Graphical user interfaces (GUIs) are helpful for many tasks, but they are not good for all tasks. I have long felt that most computers today are not powered by electricity. They instead seem to be powered by the "pumping" motion of the mouse! Computers were supposed to free us from manual labor, but how many times have you performed some task you felt sure the computer should be able to do but you ended up doing the work yourself by tediously working the mouse? Pointing and clicking, pointing and clicking.

I once heard an author say that when you are a child you use a computer by looking at the pictures. When you grow up, you learn to read and write. Welcome to Computer Literacy 101. Now let's get to work.

William Shotts --

generated by bake