notes about computers by ~pgadey
soc
2024-03-01-5 at 19h

tk: add code for
* add new entry
* edit last entry
* re-build

This is a re-write of the soc.sh script that I've used to generate my micro-blog since December 22nd 2014. Why fix something that isn't broken? I'm re-writing it to generate RSS, and to fix up some weirdness in the original soc.sh that I've been using forever. Also, I want to write something in Dave Gauer's RubyLit framework for literate programming.

A major constraint of this re-write is that I don't want to change the current soc file format at all. I don't want to have to go back and parse the ~300 currently existing entries and munge everything into some new format.

It was nice to read the RSS Spec to learn enough RSS to make this.

The plan is to re-write the whole thing as a new bash script.

<<Bash Headers>>

<<Posting Logic>>

<<Create the HTML Header>>
<<Create the RSS Header>>

<<The Main Loop>>

<<Create the RSS Footer>>
<<Create the HTML Footer>>

Bash Headers

This version of soc.sh is going to be another bash script. It'll be a hot mess of code, and we need to initialize some stuff.

#!/bin/bash

Extension="soc"
InputDir="/home/pgadey/Work/Programming/soc-rewrite/input"
OutputDir="/home/pgadey/Work/Programming/soc-rewrite/output"

IndexWebPath="https://ctrl-c.club/~pgadey/soc/index.html"

DATE=$(date +%F)
TIME=$(date +%R)

Posting Logic

#echo $#; # the number of arguments?

case "$1" in

-e|--edit) vim $InputDir/$DATE.$Extension;;

-b|--build) echo "You requested build." ;;

*) echo "<li><a id=\"$DATE-$TIME\" href=\"#$DATE-$TIME\">[$TIME]</a> $* </li>" >> $InputDir/$DATE.$Extension ;;

    esac

    The Main Loop

    The generator loops through all the soc files in the input directory. For each one, it will create a bit of HTML for the webpage and RSS for the feed.

    To loop over them, I am going to use a bad bash programming style.

    for file in $(ls -r $InputDir/*.$Extension); do
    

    This is not great, because the output of ls is fragile. If there are spaces or weird characters in the file names, everything could blow up. The way that you're supposed to do things is:

    #for file in $InputDir/*.$Extension; do
    

    However, we know the format of the $InputDir. It's a bunch of files with names like: 2024-03-01.soc.

    BASENAME=$(basename --suffix=.$Extension $file);
    <<Create HTML for Each Date>>
    <<Create RSS for Each Date>>
    done
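For the record, here is a sketch (not a chunk of soc.sh) of what the more defensive glob-based loop could look like, wrapped in a function so it is easy to poke at. The function name is my own invention; nullglob makes an empty directory yield zero iterations instead of the literal pattern.

```shell
# Sketch only: the glob-based loop, as a function.
# Note: the glob sorts ascending, while the ls -r loop above is newest-first;
# pipe the output through sort -r to recover that order.
list_soc_files () {
    local dir="$1" ext="$2" f
    shopt -s nullglob            # empty dir => no iterations, not "*.soc"
    for f in "$dir"/*."$ext"; do
        printf '%s\n' "${f##*/}" # strip the directory, like basename
    done
    shopt -u nullglob
}
```

This handles spaces and odd characters in file names without any word-splitting surprises.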
    

    Create the HTML Header

    This whole static micro-blog generator is a hot mess of here-docs. We're just going to dump the headers into an index.html file.

    cat >$OutputDir/index.html <<EOF
    

    Notice that the preceding line cat >$OutputDir/index.html <<EOF will overwrite index.html. This is intentional. It means that every time soc.sh is run, it will create a fresh index.html.

    <!DOCTYPE html>
    
    <html>
    <head>
    <title>~pgadey's micro-blog</title>
    <meta charset="UTF-8" />
    <link rel="stylesheet" type="text/css" href="screen.css">
    </head>
    <body>
    <h1>~pgadey's &micro;-blog</h1>
    <p>
    <a href="http://ctrl-c.club/~pgadey/">home</a>
    <a href="#about">about</a>
    <!--<a href="index.xml">rss 📑</a>-->
    <a href="index.xml">rss <img src="rss.svg" alt="rss feed icon" width="15" style="width: 15px; top: 3px; position: relative;"></a>
    </p>
    <hr>
    

    This completes the header, so we close out the here-doc.

    EOF
    

    Create HTML for Each Date

    The naming convention for soc files is that all the posts for a date get munged together in a single file, one per line. For example, all the entries for March 1st 2024 will get put in 2024-03-01.soc. And so, we can figure out the date of a post by looking at its $BASENAME.

    PostDATE=$BASENAME;
    echo "<h2><a id=\"$PostDATE\" href=\"#$PostDATE\">$PostDATE</a></h2>" >> $OutputDir/index.html
    

    The entries in a soc file are made by appending them one-by-one. And so, if we print them as they appear in the soc file then they'll appear in chronological order within each date. This would create a jumpy reading experience. For example:

    • 2024-03-02
      • Morning
      • Afternoon
      • Evening
    • 2024-03-01
      • Morning
      • Afternoon
      • Evening

    And so, we reverse the order of the entries in each soc file using sort -r and get something like this:

    • 2024-03-02
      • Evening
      • Afternoon
      • Morning
    • 2024-03-01
      • Evening
      • Afternoon
      • Morning

    This puts them in reverse chronological order with the most recent entry appearing at the top, and everything following monotonically back to the first post.

    echo "<ul>" >> $OutputDir/index.html
    sort -r $file >> $OutputDir/index.html
    echo "</ul>" >> $OutputDir/index.html
    

    (To be honest: I'm not sure what is "chronological order", and what is "reverse chronological order".)

    Create RSS for Each Date

    Now we make an <item> in the RSS feed.

    echo "<item>" >> $OutputDir/index.xml
    

    According to the RSS 2.0 Spec, every <item> must have at least one of: title or description. One of the annoying things about the soc format is that there is no good way to extract a title, as each entry is just a snippet of HTML. As such, I'm choosing (and this is lame) to title everything "micro-blog post". I think that this is just a bit better than leaving them blank.

    echo "<title>micro-blog post</title>" >> $OutputDir/index.xml
    

    My RSS reader of choice (newsboat) defaults to using a snippet of the link as a title for posts that lack titles. This doesn't read especially well as you get stuff like Index.html#2024 02 25 for a title... Not great. I'll stick with "micro-blog post".

    According to the RSS 2.0 Spec, all dates must be in RFC822 format. The date command's --rfc-email option outputs the date and time in RFC5322 format, which supersedes RFC822.

    echo "<pubDate>$(date --date="$PostDATE" --rfc-email)</pubDate>" >> $OutputDir/index.xml
    

    We put a link for the item. (We could also put a <guid> which specifies the globally unique id for the post, but I think that that is not needed in this use case.)

    echo "<link>" >> $OutputDir/index.xml
    echo "$IndexWebPath#$PostDATE" >> $OutputDir/index.xml
    echo "</link>" >> $OutputDir/index.xml
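If I ever change my mind about <guid>, the permalink itself could double as one. Here is a sketch (not part of the script; the variable values are stand-ins matching the ones at the top of soc.sh) using the spec's isPermaLink attribute, which tells readers that the guid is a fetchable URL:

```shell
# Sketch: a permalink-as-guid line. Variable values are stand-ins.
OutputDir="./output"
IndexWebPath="https://ctrl-c.club/~pgadey/soc/index.html"
PostDATE="2024-03-01"
mkdir -p "$OutputDir"
echo "<guid isPermaLink=\"true\">$IndexWebPath#$PostDATE</guid>" >> "$OutputDir/index.xml"
```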
    

    We want to generate a somewhat useful description by stripping all the HTML tags from the file. We do this in a somewhat brutal way using sed: remove anything between matching angle brackets and hope for the best.

    echo "<description>" >> $OutputDir/index.xml
    sed 's/<[^>]*>//g' $file >> $OutputDir/index.xml
    echo  "</description>" >> $OutputDir/index.xml
    
    echo "</item>" >> $OutputDir/index.xml
    

    Create the HTML Footer

    cat >>$OutputDir/index.html <<EOF
    <hr>
    <p id="about">
    Stats:
    EOF
    
    echo $(ls -1 $InputDir/*.$Extension | wc -l) "days with records." >> $OutputDir/index.html
    echo $(wc -c $InputDir/*.$Extension | tail -n1) " characters." >> $OutputDir/index.html
    
    cat >>$OutputDir/index.html <<EOF
    This page was generated by a modified version of <a href="https://tilde.center/~papa/soc.html">soc</a> written by papa@tilde.center.
    For details about the modification, check out <a href="https://ctrl-c.club/~pgadey/notes/computers/#soc">my write-up</a>.
    </p>
    <div id="sitelink" style="width:100%;text-align:center;">
    <a href="https://ctrl-c.club/"><em>ctrl-c.club</em></a><br>
    </div>
    </body>
    </html>
    EOF
    

    Create the RSS Header

    As we did with the HTML header, we're going to overwrite any existing index.xml.

    cat >$OutputDir/index.xml <<EOF
    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
    <title>~pgadey's micro-blog</title>
    <link>https://ctrl-c.club/~pgadey/</link>
    <description>Recent content on ~pgadey's micro-blog</description>
    <generator>the updated soc.sh script</generator>
    <language>en-us</language>
    <atom:link href="https://ctrl-c.club/~pgadey/index.xml" rel="self" type="application/rss+xml"/>
    EOF
    

    One annoying bit of the RSS 2.0 specification is that it requires dates in RFC822 format. One can hack them together using date via: date +%a,\ %d\ %b\ %Y\ %T\ %z. (Later on, I abandon this idea and use RFC5322.)

    # RSS requires dates in RFC822 format.
    echo "<lastBuildDate>$(date +%a,\ %d\ %b\ %Y\ %T\ %z)</lastBuildDate>" >> $OutputDir/index.xml
    

    However, it seems that this is handled by date --rfc-email.
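A quick sanity check (pinning the timestamp, locale, and timezone so the comparison is stable) shows the hand-rolled format string and --rfc-email agree:

```shell
# Both lines should print: Fri, 01 Mar 2024 12:00:00 +0000
export LC_ALL=C TZ=UTC
stamp="2024-03-01 12:00:00 UTC"
date --date="$stamp" "+%a, %d %b %Y %T %z"
date --date="$stamp" --rfc-email
```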

    Create the RSS Footer

    cat >>$OutputDir/index.xml <<EOF
    </channel>
    </rss>
    EOF
    

    Shell Script

    #!/bin/bash

    rubylit.rb soc soc.sh

    sed --in-place 's/^ //g' soc.sh # get rid of line initial spaces (needed for EOF to work)

    rubylit.rb soc make.sh "Shell Script"
    rubylit.rb soc ./output/screen.css "The Stylesheet"
    rubylit.rb soc ./output/rss.svg "Dave's RSS Icon"
    rubylit.rb soc OLD-soc.sh "The Old SOC Micro-blog Generator"

    scp soc.md ctrl-c.club:/home/pgadey/public_html/notes/computers/soc.markdown
    ssh ctrl-c.club "meta-bake"

    ./soc.sh --build
    scp ./output/* ctrl-c.club:/home/pgadey/public_html/soc-test/

    The Old SOC Micro-blog Generator

    #!/bin/sh
    
    # soc - Stream Of Consciousness mini-logger
    # Copyright 2014 David Meyer <papa@sdf.org> +JMJ
    # grabbed from : https://tilde.center/~papa/soc.html
    
    # Modified by pgadey to include:
    #   -- total entry count
    #   -- anchors to all dates
    
    # to do:
    #   -- split in to static blog generator and post creator
    
    socdir=/home/pgadey/public_html/soc
    socext=soc
    outfile=/home/pgadey/public_html/soc/index.html
    
    BASENAME=/usr/bin/basename
    CAT=/bin/cat
    DATE=/bin/date
    ECHO=echo
    FORTUNE=/usr/games/fortune
    LS=ls
    SORT=/usr/bin/sort
    
    date=$($DATE +%F)
    time=$($DATE +%R)
    
    $ECHO "<li><a id=\"$date-$time\" href=\"https://ctrl-c.club/~pgadey/soc/index.html#$date-$time\">[$time]</a> $* </li>" >>$socdir/$date.$socext
    
    $CAT >$outfile <<EOF
    <!DOCTYPE html>
    
    <html>
    <head>
    <title>~pgadey's micro-blog</title>
    <meta charset="UTF-8" />
    <link rel="stylesheet" type="text/css" href="./screen.css">
    </head>
    <body>
    <h1>~pgadey's &micro;-blog</h1>
    <p>
    <a href="http://ctrl-c.club/~pgadey/">home</a>
    <a href="#about">about</a>
    </p>
    <hr>
    EOF
    
    for f in $($LS -r $socdir/*.$socext); do
    bn=${f##*/}
    fd=${bn%.*}
    $ECHO "<h2><a id=\"$fd\" href=\"https://ctrl-c.club/~pgadey/soc/index.html#$fd\">$fd</a></h2>" >> $outfile  # add headers with anchors for each date
    $ECHO "<ul>" >>$outfile
    $SORT -r $f >>$outfile
    $ECHO "</ul>" >>$outfile
    done
    
    $CAT >>$outfile <<EOF
    <hr>
    <pre id="stats">
    EOF
    
    #$FORTUNE >>$outfile
    
    echo $(ls -1 $socdir/*.$socext | wc -l) "days with records" >> $outfile
    
    $CAT >>$outfile <<eof
    </pre>
    <div id="sitelink" style="width=100%;text-align:center;">
    <a href="/"><em>ctrl-c.club</em></a><br>
    <p><small>this page was generated by a modified version of <a href="https://tilde.center/~papa/soc.html">soc</a> written by <a href="https://tilde.center/~papa/">~papa</a>@<a href="http://tilde.center">tilde.center</a>.</small></p>
    
    
    
    </div>
    
    </body>
    </html>
    eof
    
    cp /home/pgadey/public_html/soc/index.html /home/pgadey/public_html/stream-of-consciousness.html 
    

    Create a New Entry

    tk: this needs some work! Write up a bit more logic.

    # dump all the positional arguments of soc.sh in to a new list item.
    echo "<li><a id=\"$DATE-$TIME\" href=\"#$DATE-$TIME\">[$TIME]</a> $* </li>" >> $InputDir/$DATE.$Extension
    

    Dave's RSS Icon

    Molly's RSS Icon

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>

    The Stylesheet

    body {
    width: 50em;
    margin: 0 auto;
    font-family: Courier New, courier;
    background: black;
    color: #FF7E00;
    }
    
    h1 {
    text-align: left;
    font-size: 1em;
    color: black;
    background: #FF7E00;
    padding: 0 10px;
    }
    
    h2 {
    text-align: left;
    font-size: 1em;
    font-weight: bold;
    padding: 0 10px;
    }
    
    p {
    padding: 0 15px;
    text-indent: 2em;
    }
    
    ol {
    margin-left: 25px;
    }
    
    ol.indented {
    margin-left: 100px;
    }
    
    a {
    color: #FFBF80;
    font-weight: bold;
    text-decoration: none;
    }
    
    
    /* unvisited link */
    a:link {
    color: red;
    }
    
    /* visited link */
    a:visited {
    color: green;
    }
    
    /* mouse over link */
    a:hover {
    color: hotpink;
    }
    
    /* selected link */
    a:active {
    color: blue;
    } 
    
    .footer {
    font-size: smaller;
    font-style: italic;
    }
    
    short shell scripts
    2023-09-26

    One of the joys of using Linux is writing really short shell scripts that do just one thing. They're so short that they barely count as "programming". I think that people are hesitant to share them because they're often so terse and fragile. However, I always love seeing other people share their little hacks. One nice reference for this sort of thing is datagubbe's page best of .bashrc. In hopes of encouraging more sharing of this sort, I'll write up a couple of mine.

    Here are some of my hacks, and how they came to be. For the past few months, I have had to do daily physio exercises. They're boring and tedious, and each of them needs to be done for a certain duration each day. To help with the timing, I use ffplay to ring a bell for me. It took a little while to sort out the options to run it optimally, but I wound up with the following "physio timer".

    sleep 120; 
    ffplay -nodisp -autoexit bell.wav >/dev/null 2>&1; 
    

    Recently, I found myself with a handful of papers to grade quickly. I wanted to spend at most five minutes per paper, so that I could get them done in time. And so, I asked my computer: "Please ring a bell every five minutes, and tell me to move on." Adding a single loop to the "physio timer" made a "paper grading timer".

    while true; 
        do 
            ffplay -nodisp -autoexit bell.wav >/dev/null 2>&1; 
            echo "Keep moving!"; 
            sleep 300; 
        done 
    

    Certainly, someone somewhere has written something really nice for this kind of task. But, I wrote something quick and dirty and that put a smile on my face.

    Here is another one. Recently, I came across a hacker through the merveilles webring who listed their e-mail as a base64 encoded string together with a one-liner to decode it. (I'm really sorry, but I've forgotten their name and e-mail. If anyone knows this person, or can track them down, please let me know. I could not find them through Lieu.) I thought that was a really nice idea. It was like they were saying "If you're willing to run this one-liner on your computer, then I trust you and we should chat." This got me thinking about how I would produce my own such decode-this-on-your-machine one-liner. Want to encode some text in base64 and then give people a Unix one-liner to decode it? Look no further!

    echo "echo \"$(echo "INSERT YOUR TEXT HERE" | base64)\" | base64 --decode"  
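To see the encoder and its decoder in action (the address below is made up for illustration):

```shell
# Generate a decoder one-liner for a made-up address, then run it.
decoder=$(echo "echo \"$(echo "you@example.org" | base64)\" | base64 --decode")
echo "$decoder"    # the one-liner you'd publish
eval "$decoder"    # prints: you@example.org
```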
    

    So, there is an encoder that produces decoders. There is a Lewis Carroll / Alice in Wonderland vibe about this hack that I enjoy. Sometimes, these little shell scripts are so small that we just make aliases in .bashrc. There is a humour to this last one. Every time I use it, I smile.

    alias goodnight="sudo shutdown now"
    

    And so, with this goodnight, I end this little article. If you've got any hacks that you enjoy using, please share them. A great way to contact me is via my ctrl-c.club e-mail. Alternatively, you could write up your little scripting hacks for the zine. I would love to read your article.

    Happy Hacking!

    writing on the alphasmart 3000
    2023-07-27

    A close-up of an AlphaSmart 3000 displaying the words: "The AlphaSmart 3000 is a single purpose word processing computer from the early 2000s. In this note, I'll describe how I use my AlphaSmart to write code"

    The AlphaSmart 3000 is a single purpose word processing computer from the early 2000s. In this note, I'll describe how I use my AlphaSmart to write code effectively, and how I use my headless server to upload content from the AlphaSmart to ctrl-c.club.

    tl;dr: The AlphaSmart 3000 is neat. If you like retro hardware and writing, then they're well worth the ~$50 CAD it costs to buy one off eBay. They're surprisingly versatile and lots of fun.

    The AlphaSmart 3000 has a four row dot matrix LCD display and 200kb of memory spread across eight files. It takes three AA batteries, which I'm told last about four hundred hours to a charge. (I've never had to replace them in the three years that I've used my AlphaSmart.) The way that the AlphaSmart communicates with a computer is by emulating a USB keyboard. One plugs in the AlphaSmart, hits a Send button, and it manually "types" the contents of a file into the computer as though it were a keyboard. This functionality lends itself to a nice hack that I'll describe below.

    Writing Effectively

    There are three hacks that I've found helpful on the AlphaSmart: keeping a table of contents, copying common code blocks, and manually generating raw TTY input. The AlphaSmart has eight "files" for storing text. One can copy and paste between the files freely. The search functionality searches all the files in numerical order.

    I noticed that when I use the AlphaSmart after a long pause, I tend to forget which files had which projects or content in them. This led me to keep a "Table of Contents" in the first file. Whenever I turn on the AlphaSmart, I switch to File 1 and look at where everything is.

    File 1 is also the first file to get searched when looking for text. This means that I keep all my re-usable code snippets in there. I tend to write a lot of lecture notes using LaTeX for my work. This requires lots of repetitive code blocks to make frames.

     %% QFRAME %%
     \begin{frame}{TITLE} % (fold)
        \begin{question}
            QUESTION
        \end{question}
    
        \vspace{\stretch{100}}
        %<*solutions>
            \fbox{\parbox{\textwidth}{ 
                SOLUTION    
            }}
        %</solutions>
     \end{frame} % (end)
    

    I store these snippets of code in File 1, and access them using the search function. If I need to add a "question frame" to my lecture notes, I can search for QFRAME and pull up the required code in a few seconds. Some other things that I store in File 1 include: headers for my Hugo site, and a bit of raw TTY input to upload the contents of a file to ctrl-c.club.

    Transferring Content to ctrl-c.club

    It is nice to write offline on the AlphaSmart 3000, but we have all come to expect our devices to have the ability to upload written material to the cloud. I usually write on the AlphaSmart in the basement, which happens to have a headless server in it. One day, it occurred to me that I could use the headless server to upload material from the AlphaSmart to ctrl-c.club.

    Set up the Headless Computer to Start without an X Server

    This is the setup that I used on Ubuntu to make my headless server boot to a login prompt. Edit /etc/default/grub with your favourite editor, e.g. nano:

    sudo nano /etc/default/grub

    Find this line:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    Change it to:

    GRUB_CMDLINE_LINUX_DEFAULT="text"

    Update GRUB:

    sudo update-grub

    Send The Text to the Headless Computer

    In File 1, I have the following bunch of raw TTY input. It creates a file, opens it in ed, dumps in a bunch of raw text, writes the file, quits, and uploads it to ctrl-c.club. The LOCAL-USERNAME is my username on my headless server, and USERNAME is my username on ctrl-c.club. (In my case, these happen to be identical.)

    LOCAL-USERNAME
    LOCAL-PASSWORD
    ALPHASMART="alphasmart-$(date --iso=second).txt"
    touch ~/$ALPHASMART
    ed ~/$ALPHASMART
    a
    This is some text from the AlphaSmart
    You can include all sorts of stuff here.
    Except, of course, a line containing a single period.
    .
    w
    q
    ⏎
    ⏎
    scp $ALPHASMART ctrl-c.club:/home/USERNAME/
    exit
    ⏎
    

    One could really go nuts with this idea. I've thought of adding bells and whistles to notify me that everything was a success. If you play with these hacks, or even if you don't, please let me know! Thanks for reading.
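For instance, the final scp could ring a bell on success and complain on failure. This is just a sketch (upload_and_notify is a name I made up here), re-using the ffplay bell from my timers above:

```shell
# Sketch: ring the bell if the upload worked, complain otherwise.
upload_and_notify () {
    if scp "$1" ctrl-c.club:/home/USERNAME/; then
        ffplay -nodisp -autoexit bell.wav >/dev/null 2>&1
        echo "uploaded $1"
    else
        echo "upload of $1 FAILED"
    fi
}
```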

    ~pgadey

    An AlphaSmart 3000 sitting on a standing desk in a boiler room

    Journal Update
    2022-01-02-0 at 15h

    This note is an update on the note that I wrote back in 2016-08-22.

    In 2013, I started a system for augmenting my paper journal with a computer-searchable index. In this note, I'll describe how my system works, what tools I use to interact with it, some of the unexpected emergent complexity of the indexing system, and what I'd like to do with it next.

    Full disclosure: The computer index is of questionable utility. I do not refer to things in the index very often in my day-to-day life. The most common use of the index is looking up all the references to someone who has recently died. I'm a member of a church with an aging congregation, and so it happens two or three times a year that I need to look someone up. Usually, the most active engagement with the index is during the winter holidays when I re-read the year's journals. That holiday tradition, in itself, justifies keeping an index.

    I started to keep a running index of my journal in graduate school. At the time, I was interested in board games and admired the work of Sid Sackson. In a magazine article about Sid, I found out that he kept a game development journal where he logged all the game-related activity in his life. Anything that seemed important was written in upper-case letters. At the end of each year, he manually compiled an index to that year's volume of the journal and would add a dot beside each upper-case word in the journal that made it into the index. Sid Sackson's diary has been scanned and put online by the Museum of Play. It is definitely worth checking out!

    The Indexer

    My method of indexing is similar to Sid Sackson's method, except that I use a computer to handle the compilation. The indexer is about fifty lines of Perl code. It takes a directory full of plaintext files, one per volume, and returns an index of the whole journal. For each volume, I make a plaintext file (called a date file) which lists the subjects mentioned on each date using the following bare-bones format:

        2017-09-21:
            [07:45]
            @HartHouse
            Shoulder problem
            Fisherman's Friend
        2017-09-23:
            @TaddleCreekPark
            [16:15]
            Dagmar Rajagopal
            Dagmar Rajagopal's memorial Meeting
    

    The indexer then converts this information into an index which lists where each subject appears in the journal. For example, the entry for @HartHouse reads:

        @HartHouse - 2017-09-21, 2017-09-27, 2017-10-02, 2017-10-04, 2017-10-13,
        2017-10-18, 2017-10-30, 2018-02-09, 2018-04-04, 2018-04-06, 2018-09-04
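The real indexer is Perl, and I'm not reproducing it here. But the core idea fits in a few lines of awk. This is a sketch under my own assumptions about the format (a line ending in a colon starts a new date; every other non-blank line is a subject seen on that date), not the actual indexer:

```shell
# Sketch of the indexing idea (not the actual ~50 line Perl indexer).
index_dates () {
    awk '
        /:[ \t]*$/ {              # a line like "2017-09-21:" starts a date
            gsub(/[ \t:]/, ""); date = $0; next
        }
        NF {                      # any other non-blank line is a subject
            sub(/^[ \t]+/, "")
            seen[$0] = seen[$0] (seen[$0] ? ", " : "") date
        }
        END { for (s in seen) print s " - " seen[s] }
    ' "$@" | sort
}
```

Run over the date files, this produces lines in the same shape as the @HartHouse entry above.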
    

    The indexer also prints some statistics about the index:

     Indexing all volumes.
     number of volumes
     40
     days with entries
     1602
     number of references
     17021 ./date.all
     distinct subjects referenced
     7237 ./subject.all
    

    Emergent Structure

    The plaintext format of the index has almost no structure. It does not contain any kind of markup to denote what each line means. This is a weakness and a strength of the system. The indexer doesn't know anything about what each line means, but I am free to create any kind of structures that I want with plaintext.

    The very first date file begins:

        2013-10-24:
            FRM
            FillRad
            FillVol
            Algebraic Geometry
        2013-10-25:
            Quaker
            Bill Taber
            Flat Surfaces
            PGG Seminar
            Marcin Kotowski
            Matt Sourisseau
            Design a computer from scratch
            Sam Chapin
            Tyler Holden
    

    These entries contain a mix of things: some math topics, a religion, an author, some more math topics, some friends, a project name, and some more friends. At this early stage, the index did not have any clear guidelines for formatting subjects, or what to include and exclude.

        2015-11-06:
            MAT 246
            Tyler Holden
            UoT
            Sam Chapin
            Linux
            Canada
            Mathematics in Canada
            [22:00]
    

    About two years later, on 2015-11-06, the first entry with a timestamp appears. The timestamps are written in the date files as [HH:MM]. The left bracket sets them apart in the index, and they all sort to a contiguous block of entries.

        2016-06-04:
            Elizabeth Block
            Sylvia Grady
            Mark Ebden
            Camp NeeKauNis
            @NeeKauNis
    

    On 2016-06-04, another structure emerged. Entries after this point typically include both an [HH:MM] timestamp and a @Location tag. Each entry in the (physical) journal typically begins with a line like:

     @HeronPark 2021 XII 20 II [21:35]
    

    This is very far from the One True Date Format (ISO 8601). Allow me to explain. This date format is a mash-up of idiosyncrasies. I learned about representing months using Roman numerals from Marcin and Michał Kotowski when they participated in the Probability, Geometry, and Groups Seminar. According to Wikipedia, this usage of Roman numerals for months is still common in Poland. Prior to learning about this convention, I wrote dates as 2021/12/20, which seems visually busy and homogeneous. The Roman numerals break things up a bit. Also, it seems fitting to use Roman numerals for months.

    The second Roman numeral in the date stamp is the day of the week. In the Quaker calendar, Sunday is the first day of the week. Sometimes, I have heard contemporary Quakers using ordinals for day names, but it is quite rare to hear anything other than "First Day".

     DIAGRAM - 2015-04-13, 2015-05-29, ...,  2020-12-28, 2021-02-02
    

    There are some indexing conventions that I adopted early on and that I don't like very much. The worst is probably the convention for writing DIAGRAM when a diagram appears in an entry. There are ~150 entries with diagrams, and I have no idea what any of the diagrams represent. Someday, I might go back and track them all down, but it will be a lot of work. (At least I know where to look!)

    The upper case convention for tagging information is not great. The intent was to have a mechanism for indexing structural things such as DIAGRAM, LIST, CALCULATION. Each of these labels should have some more description added to it. The issue is that they don't sort to any particular place in the final index and so they are hard to track down. One needs to know, in advance, all the possible upper case subjects to find anything.

    Brokhos - 2018-07-26
    Brokhos (SF) - 2019-06-06
    Brokhos (sf) - 2018-06-24, 2018-02-03, 2018-03-04, 2018-07-31
    

    There are also things where the correct convention is slowly emerging. The three subjects above are supposed to refer to the string figure called Brokhos. I'm not sure if appending things like (sf) or (SF) is helpful. It is easy enough to look for all the string figures in the index by looking for (sf) or (SF). Perhaps a more useful convention would be "String Figure: Brokhos". Whenever I have looked up a string figure from the index, I just used its name and so have not needed the tag in brackets.

    This project continues to grow and evolve. Some conventions have stabilized and are very helpful. Every year, new conventions crop up. I have not (yet) gone back and revised the index for consistency, so there is a hodge-podge of competing conventions. This is not a project for publication, but an on-going exploration of writing and journaling.

    Managing these files with vim

    The workflow surrounding the index is vim-based. I run vim ~/Work/Journal/date/date.* to open up all the date files in different buffers. This makes all the material in the index available when I want to start indexing a new volume.

    The format for date files essentially uses one line per subject. Vim supports whole line completion using CTRL-X CTRL-L, which pulls possible line completions from all buffers. (For details, see: :help compl-whole-line.) This makes completing complicated names like "Ivan Khatchatourian" straightforward.

    As I'm writing a date file, I keep the current date in the register "d (for date). While putting together the date file for the volume containing December 2021, I'll keep 2021-12-01: in the "d register.

    So, the vim workflow looks like:

    • Paste in a date.
    • Modify it appropriately.
    • Use whole line completion to add new subjects
    • Repeat.

    Pen and Paper

    This write-up wouldn't be complete without saying a little bit about the physical side of the journals too. I've used hard bound 4x6" sketchbooks since the index started. They are absolutely indestructible, neither too big nor too small, and quite cheap. Another notable feature is that you can find them at any art supply shop.

    I write with a Kaweco Sport Brass fountain pen using J. Herbin Lierre Sauvage ink. The brass pen has a nice weight to it. It's a pen that's hard to misplace, and no one has walked away with it. I use it for all my writing because ball point pens severely aggravate my tennis elbow.

    Originally, I got all my books from Toose Art Supplies because they were across the street from the math department. Now, I tend to get things from Midoco.

    Closing Note

     2021-06-11:
        @HeronPark
        [15:30]
        "Your diary is an on-going and growing project. You are free to alter, change and experiment with it as you wish. Whoever receives it will make of it whatever they make of it. Strike out and explore. Or, dream the same old dreams. This is your place; enjoy it!"
    

    Contact Me About This

    There do not seem to be many people doing this sort of thing. The only examples that I know are Soren Bjornstad and Dave Gauer. If you're using computers to index your personal journal, or are interested in doing so, I would love to get in contact with you.

    Journal
    2016-08-22

    I have kept a hand written journal since I was a kid. The old lady we went to the theater with told me to keep a journal. Her advice stuck with me. That moment, in Aunt Kay's hallway, was a life changing experience for me.

    Today was another milestone in journalling for me. Today, I carefully re-read the last several volumes of my handwritten journal looking for underlined passages which represent subject headings. These underlined key words serve to provide a series of "hyper links" within the hand written journal. They make a paper book about as useful as an online tool. This is the solution that I've adopted to the computer geek's dilemma:

    Should I keep a blog or a hand written journal?

    My answer is: Keep a hand written journal with a thorough index. You can consult your notes using the index, and this will allow you to "grep dead trees". Most journal entries that I write are personal, semi-private, matters. Writing with a pen on paper allows me to "keep part of my life offline".

    Hand writing notes allows a flexibility of description and illustration that I find impossible to get with a computer. It is too difficult, for me at least, to make drawings or type math quickly on computers. The interface of the computer gets in the way. To put it plainly -- writing on paper is relaxing compared to writing on a computer.

    Computers were made for tabulating indices.

    Paper, pen, and notebook work well together.

    Pen, paper, notebook, and computer generated index work perfectly together.

    Today, I hunted down the underlined subject keywords and carefully stowed them away in plain text files. Once everything was typed in, I had the following epic computing experience:

       #look at the third volume
       $ cat ./date/date.03
         2014-10-21:
            Morse code
            Python
         2014-10-22:
            Sponge Problem

       #look at the local file structure
       $ tree ./
           ./
           ├── date
           │   ├── date.01
           │   ├── date.02
           │   ├── date.03
           │   ├── date.04
           │   ├── date.05
           │   ├── date.06
           │   ├── date.07
           │   ├── date.08
           │   ├── date.09
           │   ├── date.10
           │   ├── date.11
           │   └── date.12
           ├── date.all
           ├── journal.sh
           ├── subject.all
           └── subject-date.pl

          1 directory, 16 files
    
       #run the indexer
       $ journal.sh
        Indexing all volumes.
        number of volumes
        12
        days with entries
        513
        number of references
        5050 ./date.all
        distinct subjects referenced
        2267 ./subject.all
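
    The internals of journal.sh aren't shown on this page, so here is only a minimal sketch of what its counting step might look like, assuming the date.NN format in the example above (dates flush left ending in a colon, indented subject lines beneath each date). The /tmp demo directory and the exact grep/sed patterns are illustrative, not the real script:

```shell
#!/bin/bash
# Illustrative sketch of a journal.sh-style counting step (not the real script).
# Assumes volumes in ./date/ with dates flush left ending in ':' and
# indented subject lines below each date, as in the example above.
mkdir -p /tmp/journal-demo/date && cd /tmp/journal-demo
cat > ./date/date.03 <<'EOF'
2014-10-21:
   Morse code
   Python
2014-10-22:
   Sponge Problem
EOF
cat ./date/date.* > ./date.all
echo "days with entries"
grep -c ':$' ./date.all                              # date lines
echo "number of references"
grep -c '^[[:space:]]\{1,\}[^[:space:]]' ./date.all  # indented subject lines
echo "distinct subjects referenced"
sed -n 's/^[[:space:]]\{1,\}//p' ./date.all | sort -u | wc -l
```

    The same pipeline scales to all twelve volumes: concatenate them into date.all once, then every count is a one-liner over that file.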
    

    "When I was a kid ..."

    This current software setup is a long way from the early journals that I wrote in highschool. Roy MacDonald really helped me get started in journaling. He raised, with his own life, journaling to the level of a vocation. He was called to journal. Allow me to show you a broadsheet poster that Roy wrote:

     Journals Are ...
    
    ... an important way of confronting the confusions of our world and the complexities of life. They are an assertion of our personal worth and individuality.
    
    ... open and available to everyone who can write a few words on paper and to everyone who wishes to consider this experience of living.
    
    ... often written in the heat of the moment, at the scene, and without reflection. They are the record of immediate experience and original feeling. 
    
    ... natural resources which writers may store away for future use in prose or poetry.
    
    ... recordings of developing concepts, attitudes, ideas. They help to review our own progressions, changes, and patterns of behaviour.
    
    ... a source of stimulation for writers and are helpful in overcoming writing blocks. Often the basic recording of specific time and place details can generate other thoughts and recollections which encourage writing.
    
    ... useful in reviewing and reinforcing things we have learned and wish to remember.
    
    ... helpful in keeping us in touch with our ancestors and in projecting something of ourselves onward to future generations.
    
    ... miscellanies of things we find meaningful: a series of lines, verses, and quotations encountered in our daily life.
    
    ... private worlds and secret places of our own where we are free to be exactly who we are and to say exactly what we want to say.
    
     Roy N. MacDonald, 1981 
    
     To Parker, in friendship Roy, London Oct 28, 2010
     I wish you good writing and a wonderful life.
    

    I agree with everything Roy wrote, and more. He was the model journaller for me. I think that the importance of a private journal for research was first taught to me by Roy.

    On the other hand, Derek Krickhan models perfectly the private computer journaller. He has a 'fancy typewriter' that he writes all his entries into. I warn him, every chance I get, to back them up. No one knows if they will ever come out of the fancy typewriter.

    Heru Sharpe got me started on rather "experimental" journalling. He is a hardcore Kabbalist, and takes notes about all sorts of things. I'm sure that there is a lot of fascinating poetry, reflection, and alchemy in his journal. He got me writing about my own "investigations".

    My interests in recreational reading, computer programming, naturalism, indoor gardening, astronomy, foreign languages, and low complexity art all show up under various guises in my journal. There are a lot of low level tricks built in to how I mark my entries. By selecting subject keyphrases carefully, one can emulate tags, categories, timing. Using a pen one can handle multiple written languages, various fonts, math, figures, etc. You can glue in interesting bits of paper.

    The sky is the limit with hand written, well structured, notebooks.

    [2017-12-29]

    Today I put some photos of the setup on Imgur here and posted about it on Reddit here.

    The photos are here for local reference.

    Screencasting With Gimp
    2017-05-30-2 at 19h

    Screencasting from GIMP

    The current setup for screencasting with GIMP:

    • Open up GIMP
    • Open up gtk-recordmydesktop
    • Wacom Tablet (Intuous Pro 5 Small)

    Issues

    • GIMP cannot cycle through pen colours: use the plugin color.py (local mirror)

    To install color.py place it in: /usr/lib/gimp/2.0/plug-ins/color.py and make it executable. Once it is loaded by GIMP, set a key-binding using: Edit>Keyboard Shortcuts
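
    Sketched as shell commands, the install step looks like the following. The demo prefix under /tmp is a stand-in so the commands are safe to run as-is; for a real install, drop the prefix and copy straight into /usr/lib/gimp/2.0/plug-ins/ (which may require sudo, and the path may differ for your GIMP version):

```shell
#!/bin/bash
# Sketch of installing color.py; $prefix is a demo stand-in for /.
prefix=/tmp/gimp-demo
mkdir -p "$prefix/usr/lib/gimp/2.0/plug-ins"
echo '# stand-in plugin' > /tmp/color.py   # placeholder for the downloaded file
cp /tmp/color.py "$prefix/usr/lib/gimp/2.0/plug-ins/color.py"
chmod +x "$prefix/usr/lib/gimp/2.0/plug-ins/color.py"
```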

    Current Key-Bindings

    Based on a great post by Bart Van Audenhove, I wrote a script wacom.sh to configure my Wacom tablet using xsetwacom. The script sets up the Wacom tablet to interact with X11.

    • ctrl-` : cycle through foreground colours

    • Wacom Left #1 (Top-top) -- zoom in

    • Wacom Left #3 (Top-bottom) -- zoom out

    • Wacom Left #4 (Bottom-top) -- Colour cycle

    • Wacom Left #5 (Bottom-mid) -- Undo
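
    wacom.sh itself isn't reproduced here, but a script like it is essentially a handful of xsetwacom calls. Below is a dry-run sketch: the device name, button numbers, and key actions are all illustrative (list your own with xsetwacom --list devices), and since xsetwacom needs a live X session the commands are only echoed:

```shell
#!/bin/bash
# Dry-run sketch of a wacom.sh-style configuration script.
# Device name and bindings below are illustrative, not the real ones.
run() { echo "would run: $*"; }
pad="Wacom Intuos5 S Pad pad"
run xsetwacom set "$pad" Button 1 "key plus"    # Top-top: zoom in
run xsetwacom set "$pad" Button 3 "key minus"   # Top-bottom: zoom out
run xsetwacom set "$pad" Button 8 "key ctrl z"  # Bottom-mid: undo
```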

    dotfiles
    2017-03-01

    stow + git = version controlled dot files

    Today I set up my system to use version controlled dotfiles via stow and git.

    What are 'dotfiles'? They are configuration files that allow one to customize the behaviour of programs on Linux. They specify key-bindings, colour themes, etc. in plain text files. Often they are very personal. It is a little dizzying to use a familiar piece of software without the usual dotfiles in place.

    What is GNU stow? It is a symlink manager that allows you to deploy and remove collections of symlinks conveniently. One creates several "packages" in directories, and then stow manages the task of creating or removing symlinks to the various files in these packages.

    What is git? Git is Linus Torvalds's other wunderkind. It is a version control system that tracks how files have been modified. Presently, it is the industry standard.

    Example stow Usage

    stow manages packages of files in the following way: A package is just a directory of files. When you stow a package, it will create symlinks to all the files in the package together with the appropriate file hierarchy.

    For example, suppose you have the following file structure:

     ~/dotfiles/foo
     ├── .foorc
     └── .config
         └── foo-config
    
     ~/dotfiles/bar/
     ├── .bar.ini
     └── .config
         └── bar
             ├── bar-config
             └── bar.theme
    

    That is, you've got a dotfiles directory containing two packages foo and bar. Notice that both packages contain directories called .config. These directories allow you to separate out the parts of the package foo that go into ~/.config and the parts of bar that go into ~/.config

    If you enter the directory ~/dotfiles/ and run stow foo it will create symlinks at ~/.foorc and ~/.config/foo-config which point to the corresponding files in your dotfiles directory.

    If you enter the directory ~/dotfiles/ and run stow bar it will create symlinks at ~/.bar.ini and ~/.config/bar/bar-config and ~/.config/bar/bar.theme which point to the corresponding files in your dotfiles directory.

    (By default, stow installs the package into the directory containing the current working directory. One can change the target using stow -t TARGET.)
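
    For concreteness, here is what stow foo from the example above ends up creating, simulated with plain ln -s under a /tmp demo directory. (Real stow also detects conflicts and can "fold" a whole directory into a single link; this sketch only shows the per-file result.)

```shell
#!/bin/bash
# Simulating `stow foo` for the example package above with ln -s.
# Paths live under /tmp/stow-demo; real stow targets $HOME.
set -e
mkdir -p /tmp/stow-demo/dotfiles/foo/.config /tmp/stow-demo/home/.config
touch /tmp/stow-demo/dotfiles/foo/.foorc
touch /tmp/stow-demo/dotfiles/foo/.config/foo-config
# stow makes relative links from the target back into the package:
ln -sf ../dotfiles/foo/.foorc /tmp/stow-demo/home/.foorc
ln -sf ../../dotfiles/foo/.config/foo-config /tmp/stow-demo/home/.config/foo-config
ls -l /tmp/stow-demo/home/.foorc
```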

    Managing Dotfiles with Stow and Git

    I followed the advice these people put up:

    To get started:

    • Make a ~/dotfiles/ directory.
    • Initialize a git repo in that directory using git init.
    • Make a sub-directory for each package of configuration files you want to track. E.g: vim, screen, mc.
    • Copy the configuration files you want to track in to each ~/dotfiles/package/ directory.
    • Run stow --adopt package to adopt current dotfiles for each package.
    • Run git add . to add all the new files to the git repo.
    • Run git commit to commit the new files.
    • Run stow package to stow each package.

    To maintain your repo:

    • Every time you change a dotfile, update the git repo using git add .changed-dotfile
    • Commit when you feel like it.

    Other useful things:

    • stow -D package removes the symlinks to a package
    • stow -R package to reload package. It removes, and then re-stows it.

    ssh and vim

    Two things whose dotfiles I'd like to track are ssh and vim. Unfortunately, ~/.ssh/ and ~/.vim/ both contain sensitive data. One has my private keys, and the other has temporary files related to potentially sensitive documents.

    Thus, I only track the relevant non-sensitive data in ~/dotfiles/

     /home/pgadey/dotfiles/ssh/:
        .ssh
    
     /home/pgadey/dotfiles/ssh/.ssh:
        config
        known_hosts
    
     /home/pgadey/dotfiles/vim/:
        .vim
        .vimrc
        snippets
        plugin
    
     /home/pgadey/dotfiles/vim/.vim/plugin:
        emmet.vim
        matchit.vim
        snipMate.vim
        surround.vim
    
     /home/pgadey/dotfiles/vim/.vim/snippets:
        tex.snippets
        vim.snippets
    

    tl;dr

     $ mkdir ~/dotfiles/
    
     # setup the package of dotfiles for foo
     $ mkdir ~/dotfiles/foo/
     $ cp ~/.foorc ~/dotfiles/foo/
     $ mkdir ~/dotfiles/foo/.config/
     $ cp ~/.config/foo-config ~/dotfiles/foo/.config/foo-config
    
     # adopt the currently existing files for the package foo
     $ cd ~/dotfiles/
     $ stow --adopt foo
    
     # create symlinks for the package foo
     $ stow foo
    
    quizbot
    2016-10-22-6 at 16h

    GOAL:

    Generate a quiz for each TA
    

    INPUT:

    Description of the course and quiz to be generated
        A31,6,Quiz \#6 on Limits
        TUT001-Alice
        TUT002-Bob
    
        The first line contains, in CSV:
        COURSE,QUIZ NUMBER,QUIZ TITLE
    
        All other lines contain text which will be used to fill TUTORIAL
    
    A folder of files:
        ./course.csv
        ./template-head.tex
        ./template-foot.tex
        ./question-1/variant-1.tex
        ./question-1/variant-2.tex
        ./question-1/variant-3.tex
        ./question-2/variant-a.tex
        ./question-2/variant-b.tex
        ./question-3/foo.tex
        ./question-3/bar.tex
    

    OUTPUT:

    A bunch of quizzes
        ./quizzes/A31-Quiz-6-TUT001-Alice.tex
        ./quizzes/A31-Quiz-6-TUT002-Bob.tex
        ./quizzes/COURSE-Quiz-N-TUTORIAL.tex
    

    Example:

    Very simple example of this set up is available here: quizbot-example.zip

    $ cat ./course.csv

    A31,6,Quiz \#6 on Limits
    TUT001-Alice
    TUT002-Bob
    

    $ cat ./template-head.tex

    \title{\QuizBotCourse -- Quiz \QuizBotNumber -- \QuizBotTitle}
    
    \documentclass[12pt]{article}
    
    \begin{document}
    \maketitle
    

    $ cat ./template-foot.tex

    \end{document}
    This is never printed
    

    $ quizbot.sh

    generating : A31-Quiz-6-TUT001-Alice
    generating : A31-Quiz-6-TUT002-Bob
    

    $ cat ./quizzes/A31-Quiz-6-TUT001-Alice.tex

    \newcommand{\QuizBotTitle}{Quiz \#6 on Limits}
    \newcommand{\QuizBotNumber}{6}
    \newcommand{\QuizBotCourse}{A31}
    \newcommand{\QuizBotTutorial}{TUT001-Alice}
    \title{\QuizBotCourse -- Quiz \QuizBotNumber -- \QuizBotTitle}
    
    \documentclass[12pt]{article}
    
    \begin{document}
    \maketitle
    %% begin:  ./question-1/variant-1.tex
    What is $\pi$
    %% end:  ./question-1/variant-1.tex 
    
    %% begin:  ./question-2/variant-a.tex
    \[ \cos(a+b) \]
    %% end:  ./question-2/variant-a.tex 
    
    %% begin:  ./question-3/bar.tex
    Aloha
    %% end:  ./question-3/bar.tex 
    
    \end{document}
    This is never printed
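
    quizbot.sh itself isn't listed on this page, so here is only a minimal sketch of the generation loop, written against a throwaway demo course under /tmp. It follows the description above (one output per tutorial line, one randomly chosen variant per question-* directory), but the demo file contents, shuf-based variant choice, and every path are illustrative, not the real script:

```shell
#!/bin/bash
# Hypothetical sketch of a quizbot.sh-style generator (not the real script).
set -e
demo=/tmp/quizbot-demo
rm -rf "$demo"; mkdir -p "$demo/question-1" "$demo/question-2"
printf 'A31,6,Quiz 6 on Limits\nTUT001-Alice\nTUT002-Bob\n' > "$demo/course.csv"
printf '\\begin{document}\n' > "$demo/template-head.tex"
printf '\\end{document}\n'   > "$demo/template-foot.tex"
echo 'What is $\pi$?'  > "$demo/question-1/variant-1.tex"
echo '\[ \cos(a+b) \]' > "$demo/question-2/variant-a.tex"
cd "$demo"
# first CSV line: COURSE,NUMBER,TITLE
IFS=, read -r Course Number Title < ./course.csv
mkdir -p ./quizzes
# remaining lines: one tutorial per quiz
tail -n +2 ./course.csv | while read -r Tutorial; do
    [ -n "$Tutorial" ] || continue
    Out="./quizzes/$Course-Quiz-$Number-$Tutorial.tex"
    echo "generating : $Course-Quiz-$Number-$Tutorial"
    {
        printf '\\newcommand{\\QuizBotTitle}{%s}\n'    "$Title"
        printf '\\newcommand{\\QuizBotNumber}{%s}\n'   "$Number"
        printf '\\newcommand{\\QuizBotCourse}{%s}\n'   "$Course"
        printf '\\newcommand{\\QuizBotTutorial}{%s}\n' "$Tutorial"
        cat ./template-head.tex
        for q in ./question-*/; do
            variant=$(ls "$q" | shuf -n 1)   # pick one random variant
            echo "%% begin:  $q$variant"
            cat "$q$variant"
            echo "%% end:  $q$variant"
        done
        cat ./template-foot.tex
    } > "$Out"
done
```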
    
    office cam
    2016-09-14

    This is the current implementation of cam.sh. It takes a photo of the whiteboard, uploads it, and then processes it on cloudbox. The processing is pretty minor: it sharpens the image and makes a thumbnail.

     #!/bin/bash
    
     server='cloudbox'
     localStore='/home/pgadey/cam'
     remoteStore='/home/pgadey/public_html/office/cam'
     fileName=`date +%F+%T`.jpeg
    
    
     streamer -o $localStore/$fileName -j 100 -s 1280x720
     scp $localStore/$fileName $server:$remoteStore/$fileName
    
     ssh $server "mogrify -verbose -sharpen 0x1.5 $remoteStore/$fileName"
     ssh $server "mogrify -verbose -thumbnail 127x72 -path $remoteStore/thumbs/ $remoteStore/$fileName"
    

    It would be nice to add some more functionality to cam.sh:

    • An argument for comments about the shot
    • An argument for naming the shot
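
    Those two arguments could be bolted on roughly like this. This is only a sketch: /tmp/cam-demo stands in for the real photo directory, the streamer call is echoed rather than run, and the set -- line fakes the arguments for demonstration:

```shell
#!/bin/bash
# Hypothetical extension of cam.sh: optional shot name and comment.
# Usage: cam.sh [name] [comment]
localStore=/tmp/cam-demo            # stand-in for /home/pgadey/cam
mkdir -p "$localStore"
set -- whiteboard "limits lecture"  # demo arguments; drop this line for real use
name=${1:-$(date +%F+%T)}           # fall back to the timestamp scheme above
comment=$2
fileName="$name.jpeg"
echo "would run: streamer -o $localStore/$fileName -j 100 -s 1280x720"
[ -n "$comment" ] && echo "$fileName: $comment" >> "$localStore/comments.txt"
```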

    The photos are available here: http://pgadey.ca/office/cam

    office
    2016-09-10

    dotfiles

    meta-notebook manager

    • definitions

      • a page is a pdf of a single page of paper (US-letter)
      • a commentary is a set of human readable plaintext files
      • a note is a directory containing pages and a commentary
    • what is a commentary?

      • the human readable part of the note
      • it at least has a title and some kind of a type: is this note a chapter? a course?
      • some tags
      • a bunch of tiny text files
        • project details
        • progress report stuff
        • code
        • details about pages
        • it might even have some latex to compile
        • markdown blog entry / discussion
    • to assemble a note (e.g. foo) do several things:

      • assemble the commentary
        • if there is no commentary: make a minimal one with date of assembly
        • if there is a commentary:
          • log changes to commentary using git
          • put together the html version (use bake)
      • assemble the pages in pdf format
        • create a pdf of all the pages: foo.pdf
        • create a compressed pdf of all the pages: foo-tiny.pdf
      • assemble a simple web gallery:
        • create png versions of the pages
        • create png-thumbnail versions of the pages
        • create an html gallery of the png images: include the commentary
    • a sub-note is a directory inside a note

      • RECURSE!
      • the commentary of a note can describe how to assemble its sub-notes
      • thread-view of notes

    whiteboard photos

    • single press button which takes a photo of whiteboard
    • puts the photo in a gallery together with minimal commentary (date)
    • the commentary allows photos to be tagged
    • name files (yyyy-mm-dd-N.png ?)
    • generate a very simple "permalink" to photo (md5 the photo or passphrase style?)

    potential alternative uses: simple document scanner, book cataloguer
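
    The yyyy-mm-dd-N.png naming idea only needs a small loop to find the next free N for today. A sketch against a demo directory (the two touch-ed files just pretend that two shots already exist):

```shell
#!/bin/bash
# Sketch: pick the next free yyyy-mm-dd-N.png name for today's photos.
dir=/tmp/whiteboard-demo
mkdir -p "$dir"
today=$(date +%F)
touch "$dir/$today-1.png" "$dir/$today-2.png"   # pretend two shots exist
n=1
while [ -e "$dir/$today-$n.png" ]; do n=$((n+1)); done
echo "next photo: $dir/$today-$n.png"
```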

    timelapse garden

    • it suffices to consider a single plant
    • automagically photograph a plant using a webcam every N seconds (timelapse)
    • allow for special frames or snapshots which illustrate something
    • stores the photos in a reasonable place related to the plant
    • name files (yyyy-mm-dd-hh:mm:ss.png -- O Time Thy Pyramids!)
    • allow for a commentary about the plant

    • to assemble a plant timelapse:

      • make a variety of scales of timelapse: 100x 10x 1x
      • make several different formats
      • generate a simple web page for the plant
      • specific hours of activity?
    • potential alternative use: weather station
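
    For the "every N seconds" part, cron gets you down to one frame per minute with no daemon of your own. A hypothetical crontab line (streamer as in cam.sh above; the path is illustrative, and % must be escaped because crontab treats it specially):

```
* * * * * streamer -o /home/pgadey/plantcam/$(date +\%F-\%T).png -j 100 -s 1280x720
```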

    poisson pings (time usage monitor)

    • pings according to a low intensity markov process
    • audible/visible chime
    • asks for tags describing current mental head space
    • what tasks are being done in the office, guests, students, etc
    • timelapse photo with face camera
    • timelapse photo with window manager screenshot
    • may give pleasant fortune?
    • specific hours of activity?

    noise generator

    future

    • lending library
    • real weather station
    • morse code whistling droid

      Perhaps plants, courses, articles, studies, etc. are all notes. A meta-notebook note has unique plaintext named images with individual commentaries, and a comment on the note as a whole.

    Computers and Classrooms
    2016-08-17-3 at 01h
    • Ursula Franklin

      • Holistic/Prescriptive
      • The Sense of the Classroom
      • Handbooks and Textbooks
      • Computers in Classrooms
      • Classrooms in Computers
    • Effective Scales:

      • 1 -- Tutoring
      • 2 -- Conversation
      • 15 -- Small Class
      • 100 -- Class
      • 100,000,000 -- Proper Class
    • MAT 237

      • Cryptographic game-ification of "content delivery"
      • Large and provably true promises to students (50% Big List)
      • Slides, Problems, Slides
      • Statistics of responses (Shannon entropy)
      • The Good:
        • The asymptotic analysis of information transfer argument
          • Chalk-and-Talk: one-to-n communication: O(n) information
          • Multiple-Choice-Split-and-Chat: (1/2 n)(1/2 n) = O(n^2) information
        • Core group of students who accepted the method.
        • Students worked very hard on the Big List.
        • Great office hours.
        • The prep work is much more fun.
      • The Bad:
        • Voter apathy.
        • Please vote. Voter apathy.
        • Students give up on parts of the gamification they do not enjoy.
        • Voter apathy. Do -- um -- something please.
        • Lectures are even more boring to deliver.
        • Epsilon attendance in morning lectures.
        • Students attempted, and failed, to meta-game the Big List.
      • The Ugly:
        • We pass over the Ugly.
      • Room for Improvement:
        • Enforced voting.
        • Enforced group discussion and completion of worksheets.
        • Statistics of responses (flashcard scheduling, Hermann Ebbinghaus 1885)
          • ``We need more data!''
          • ``Wir müssen wissen, wir werden wissen!'' (``We must know, we will know!'') -- Hilbert
    meta bake
    2016-07-01

    I've been using bake to maintain this collection of notes.

    To make things a little bit better, I've written a little script that I call meta-bake to bake all the subdirectories of my home folder that should be baked. This means that every instance folder in my notes directory gets hit.

    This might be useful for other people using multiple bake blogs on ctrl-c.club

    #!/bin/bash
    find . -name 'bakefile' -execdir bake {} +;
    find . -name 'bakefile' -execdir git commit -m meta-bake {} +
    
    Plant Cam
    2016-07-01

    I have a webcam that I use to make timelapse videos of plants growing. There is also a lamp on a timer to keep the plant well lit through the night.

    This is a local link to the cam.

    Hardware:

    1. RaspberryPi (Thanks, Nick!)
    2. Lamp and timer
    3. Microsoft LifeCam HD-3000 (Res: 1280 x 720)

    The first shot (2016-04-22):

    The most recent shot (2016-07-01):

    The data dump from the recent look at the hard drive.

     2016-07-01
    
     pgadey@raspberrypi /media/backup/webcam $ time du -hc | tail
     27G     .
     27G     total
    
     real    44m8.222s
     user    0m13.440s
     sys     1m13.520s
     pgadey@raspberrypi /media/backup/webcam $ ls | tail
    
     998-20160626111303-00.jpg
     998-20160626111304-00.jpg
     998-20160626111305-00.jpg
     998-20160626111306-00.jpg
     998-20160626111400-snapshot.jpg
     999-20160626111446-00.jpg
     999-20160626111446-01.jpg
     999-20160626111446.swf
     999-20160626111500-snapshot.jpg
     lastsnap.jpg
    
    Backup System
    2016-04-22-5 at 23h

    I am trying to set up a home back up system that I like. The goal is to have a place for storing my pdf library and pictures safely. Additionally, it would be nice to have music and videos conveniently accessible.

    Presently, the plan is to have a central repository of stuff on an external hard drive attached to a RPi. I'm going to use sshfs to connect it all up.

    Backup stuff:

    1. RaspberryPi (Thanks, Nick!)
    2. Seagate 1.5 Tb Expansion Portable (Model: SRD0NF1)
    3. Kingston 16Gb DataTraveler SE9
    sync
    2016-02-10-3 at 23h

    Automagic uploading with ssh and sshfs

    This page documents the brief script that I use for working with various remote servers using sshfs. The magic of sshfs makes working with remote file systems feel exactly like working with the local file system. For instance, it makes working with ctrl-c.club and my server, cloudbox, almost effortless.

    The script below assumes the following set up: A local directory ~/sshfs/ with one folder per remote location, named after the host alias for that remote location as specified by ~/.ssh/config. My ~/sshfs/ directory looks like this:

        ~/sshfs/:
            ~/sshfs/cloudbox/
            ~/sshfs/tilde.works/
            ~/sshfs/ctrl-c.club/
    

    and my ~/.ssh/config looks like:

        Host cloudbox
            Hostname pgadey.com
        Host tilde.works
            Hostname tilde.works
        Host ctrl-c.club
            Hostname ctrl-c.club
    

    Furthermore, it assumes that every remote location has the same set up: one has the same username pgadey on each server, and content is kept in remote:/home/pgadey/. Here is the script that I use to mount these hosts so that I can push new material to them: (sync.sh)

        #!/bin/bash
    
        # use $sudo umount ~/sshfs/*
        # to unmount everything
    
        LIST="ctrl-c.club cloudbox tilde.works"
    
        for d in $LIST; do
            read -p "Do you want to mount $d (Y/[N])?" answer
            case $answer in
            [Yy]* ) sshfs $d:/home/pgadey/ /home/pgadey/sshfs/$d/;;
            * ) echo "Not mounting $d.";;
            esac
        done
    
    Textual Machines
    2016-02-10-3 at 23h

    Computers are the machines which manipulate text. Other machines might manipulate the physical world for humans' benefit, but computers manipulate symbols for our benefit. Most people fault computers for not being 'fast' or 'smart', but they only seem slow or dumb because we've asked them to attempt the impossible: we wish them to simulate reality in a way that is pleasing to us.

    If we lowered the bar on what we'll accept for a satisfying computer experience, then we'd all be rich beyond measure in terms of computing resources. As a means of transmitting, displaying, and re-arranging bits we know for a certainty that computers are great. We can have a great deal of fun with a simple network of tiny computers. The trick is to stick to what textual machines are good at doing for us. So, please remember that your computer is really quite good; you're just asking it to do the impossible.

    comm
    2016-02-10-3 at 23h

    Some tips on tools for communication / socializing

    Learn to use screen.

    screen lets you use multiple terminals through one ssh session; screen is very handy for multi-tasking. The most basic functionality is as follows: run screen, then use the following key combinations. ctrl-a x means press ctrl and a simultaneously, then release and type x.

    1. ctrl-a c will open a new terminal.
    2. ctrl-a " will list all open terminals.
    3. ctrl-a A will prompt to rename the current terminal.
    4. ctrl-a K will prompt to kill an active terminal.
    5. ctrl-a ? will bring up a useful cheat sheet.

    As usual, check out man screen for all the details. screen is very helpful for keeping documentation up.

    Chatting

    1. In a new terminal, run watch who to see if anyone is currently on the server. This information will update every two seconds. Each line lists a person who is logged on, what terminal they are connected to, and when they logged on.
    2. To check how long someone has been idle, try finger USERNAME. This will tell you how long their terminal has been idle for.
    3. If someone is online, you can try to communicate with them through write. Try write USERNAME, type out your message, and then finish it by pressing ctrl-d (ctrl-d is interpreted as end of file). If they don't respond you can try write USERNAME TTY where the username and tty are taken from the output of who.
    4. To send all currently logged on users a message, type wall and proceed as with write. Be careful, since this is pretty noisy. Use sparingly.
    5. If you want to disable / control how you get the contents of write and wall commands read man mesg.

    Listing info about yourself

    1. To find out about another user, type finger USERNAME.
    2. The last part of the output of finger is the contents of the user's .plan file.

      The .plan file is a free form text document. You can use it as a place to say a bit about yourself, your plans on ctrl-c.club, or anything else that you feel like. Go nuts.

    3. To edit your plan file, type nano ~/.plan.

    William Shotts Quote
    2016-02-10-3 at 23h
    Graphical user interfaces (GUIs) are helpful for many tasks, but they are not good for all tasks. I have long felt that most computers today are not powered by electricity. They instead seem to be powered by the "pumping" motion of the mouse! Computers were supposed to free us from manual labor, but how many times have you performed some task you felt sure the computer should be able to do but you ended up doing the work yourself by tediously working the mouse? Pointing and clicking, pointing and clicking.
    
    I once heard an author say that when you are a child you use a computer by looking at the pictures. When you grow up, you learn to read and write. Welcome to Computer Literacy 101. Now let's get to work.
    

    William Shotts -- LinuxCommand.org

    generated by bake