Automating my publishing system

Posted on Wed 23 August 2017 in Articles • 5 min read

Week before last, two things happened that broke my publishing system. Firstly, I exhausted my Zapier task quota, which suspended my ability to cross-post automatically between my website and my social media accounts. The second, more serious, breakage occurred when my Mac mini's motherboard died. Up until that point, the Mac mini had been acting as my build machine. It stored the raw Markdown files of my blog in Dropbox and used scripts to build the site with Pelican and push it to my web server.

This worked after a fashion but it had limitations. My principal bone to pick was relying on my home broadband connection, which is flaky and has appalling upload speeds[1]. With every new article, pushing the site to my server took longer and longer. If my connection dropped (it happens) or if we had a blackout, the system broke.

In my last post on this issue, I speculated on a number of possible solutions. I mulled over the idea of developing a bot and running it either on a local machine or in the cloud. In the end, I decided on a much simpler approach.

I've moved Pelican and the website source files to my server. I wrote a simple shell script to build the site and copy the files to my website's directory. Because it's running on the same machine, the build and transfer take seconds rather than minutes. I've scheduled the process to run every two hours using a cron job; I've found cron is much more reliable on Linux than it is on macOS, partly because Apple wants you to schedule tasks using launchd scripts instead.
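For illustration, a build-and-deploy script of this kind can be as small as the following sketch; the paths, file names and rsync destination here are placeholders, not my actual configuration:

```bash
#!/usr/bin/env bash
# Build the site with Pelican and copy the output into the web root.
# SITE_SRC and WEB_ROOT are placeholder paths.
set -euo pipefail

SITE_SRC="$HOME/site"                 # Pelican project: content/ plus config files
WEB_ROOT="/var/www/chrisrosser.net"   # directory served by the web server

cd "$SITE_SRC"
pelican content -o output -s publishconf.py   # generate the static site
rsync -a --delete output/ "$WEB_ROOT/"        # copy the build into place
```

A crontab entry along the lines of `0 */2 * * * /path/to/build-site.sh` then runs it every two hours.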

That takes care of build and deployment, but it's only the second half of my publishing process. I built the back-end first because my site was broken and I needed to fix it pronto.

So, what about the front-end? What do I do about getting new content to my server in the first place?

That required a bit more thought.

Initially, I planned to install Dropbox on my server and continue to use that to keep my source files in sync across devices. However, my server has only 512 MB of RAM and the official Dropbox client is a bit of a memory hog. I thought selective sync might reduce the load, but (1) it didn't and (2) I don't think it's possible to selectively sync sub-folders. Alternatively, I could have called the Dropbox API from a Python script, but... meh.

Instead, I've opted for a manual push (trust me, there's automation involved). This push is initiated from my personal devices when I'm ready to publish an article. In my old system, articles were synced and built even if I was still drafting them (albeit as hidden drafts), so I think this is better, at least from an editorial standpoint.

The challenge was to create a push system that is secure, private (i.e. not a public web service) and that works from both a Mac and an iOS device. I also wanted a system whereby I could push different types of content (blog posts, pages, static files, images) and have them copied to the correct location in Pelican's content folder, ready for the build process.

My solution was inspired by a post by Moving Electrons that I stumbled across when searching for ways to use Ulysses with Pelican[2]. I don't currently use Ulysses (I'll address why in another post), but there were several things I liked about his approach that I've adopted, namely:

  1. Using SSH to transfer the file and run a remote script
  2. The ability to push single files and zipped archives

Point 1 addresses all my requirements. It's secure (an encrypted connection over SSH), private (the target script is not exposed to the web), and it's possible to initiate the process from a Mac (using Bash + Automator) and an iPhone/iPad (using Workflow).

Point 2 covers my content needs. Most of the time I'm just pushing a single Markdown post, but many of my posts include images, and packing the post with its associated images into a zip file is awesome.

So, permit me to dive into the details.

Remote scripts

As with Moving Electrons, I created two scripts on my server to handle the uploads. Most of the work is done by a Python script, which works as follows (there's a sketch of it after the list):

  1. Check whether the passed file is supported (currently: a Markdown, zip, image, PDF or PHP file).
  2. If it's a plain markdown file, read the file metadata and copy the file to the appropriate place in Pelican's content folder.
  3. If it's a zip file, unpack the archive, extract the metadata, create a new folder in content/images/posts/ named after the article slug and then copy the files appropriately.
  4. If it's an image, PDF or PHP file, simply copy it to the correct static directory.
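To give a flavour of that logic, here's a heavily simplified sketch. The paths, folder names and slug handling are placeholders rather than my production code:

```python
#!/usr/bin/env python3
"""Minimal sketch of the remote upload handler. The paths, folder names and
slug handling are placeholders, not the script I actually run."""
import shutil
import sys
import tempfile
import zipfile
from pathlib import Path

CONTENT = Path.home() / "site" / "content"     # Pelican content folder (assumed location)
MARKDOWN = {".md", ".markdown"}
STATIC = {".png", ".jpg", ".jpeg", ".gif", ".pdf", ".php"}

def place_markdown(md_file: Path) -> None:
    # A fuller version reads the metadata block here to tell posts from pages (see below).
    shutil.copy(md_file, CONTENT / "articles" / md_file.name)

def place_archive(archive: Path) -> None:
    # Unpack the zip, copy the post, and file its images under content/images/posts/<slug>/
    image_dir = CONTENT / "images" / "posts" / archive.stem   # archive name stands in for the slug
    image_dir.mkdir(parents=True, exist_ok=True)
    with tempfile.TemporaryDirectory() as tmp, zipfile.ZipFile(archive) as zf:
        zf.extractall(tmp)
        for item in Path(tmp).rglob("*"):
            if not item.is_file():
                continue
            if item.suffix.lower() in MARKDOWN:
                place_markdown(item)
            else:
                shutil.copy(item, image_dir / item.name)

def main(path: Path) -> None:
    suffix = path.suffix.lower()
    if suffix in MARKDOWN:
        place_markdown(path)
    elif suffix == ".zip":
        place_archive(path)
    elif suffix in STATIC:
        shutil.copy(path, CONTENT / "static" / path.name)
    else:
        sys.exit(f"Unsupported file type: {suffix}")

if __name__ == "__main__":
    main(Path(sys.argv[1]))
```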

To read the Markdown file's metadata block, I use the Meta-Data extension in Python's Markdown module. I read the slug and check whether the file has a date stamp. If it doesn't have a date stamp, it's a page rather than a post.
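In code, that check only takes a few lines; something along these lines (the function name and return shape are illustrative):

```python
from pathlib import Path
import markdown  # the 'meta' extension ships with the Markdown package

def read_meta(md_path: Path):
    """Return the slug and whether the file is a post (has a date) or a page."""
    md = markdown.Markdown(extensions=["meta"])
    md.convert(md_path.read_text())
    meta = md.Meta                       # keys are lower-cased; values are lists of strings
    slug = meta.get("slug", [""])[0]
    return slug, "date" in meta          # no date stamp means it's a page, not a post
```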

The Python script is called from a Bash script, and it's the Bash script I invoke over the remote connection. Its job is basically to handball the incoming file to the Python script and do some clean-up afterwards.
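The wrapper doesn't amount to much more than this kind of thing (file locations and script names are assumptions):

```bash
#!/usr/bin/env bash
# Wrapper invoked over SSH: hand the uploaded file to the Python script, then tidy up.
set -euo pipefail

INCOMING="$1"                                  # path of the file that was just uploaded
python3 "$HOME/bin/process_upload.py" "$INCOMING"
rm -f "$INCOMING"                              # clean up afterwards
```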

If anyone wants the code, let me know and I'll publish it as a GitHub Gist.

Local scripts

As noted, the process is kicked off locally, on the device, when I decide to publish. My draft files are stored in Dropbox as plain Markdown and are thus accessible across my devices. I simply select the file and run the script.

With a Mac, this is easy because it has Bash and a user-accessible file system. I've wrapped the script in an Automator action so it's easy to run from Finder, rather than dropping down to the command line.
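The local side boils down to an scp followed by a remote ssh invocation, roughly like this (the host, user and remote paths are placeholders):

```bash
#!/usr/bin/env bash
# Local push: copy the chosen file to the server, then run the remote wrapper on it.
set -euo pipefail

FILE="$1"
REMOTE="chris@example.com"        # placeholder user@host

scp "$FILE" "$REMOTE:incoming/"
ssh "$REMOTE" "bin/publish.sh incoming/$(basename "$FILE")"
```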

With iOS, you have to use Workflow, which has an action that lets you execute a script remotely over SSH. This is fricking cool. My Workflow starts with a Dropbox file picker, extracts some basic information about the file and pushes it to the server. It works with any file, including zipped archives, and it works from my iPhone and iPad in exactly the same way.

Here's what the Workflow looks like:

[Screenshot: the "Publish to chrisrosser.net" Workflow]

Cross-posting to social media

Once a post is published, I cross-post it to social media (mainly Twitter and Facebook). Previously, I used Zapier to do this, but when I changed my site's structure to use clean, date-based URLs I exhausted my monthly quota. As a temporary replacement, I've resorted to using IFTTT, which is a similar service, only free to use.

Since it's working, I'm in no rush to change; however, I may explore the idea of doing the cross-posting myself with a Python script. Not only would this give me more control, but I'd be able to more easily automate re-posting, schedule posts, or randomly post articles from my back catalogue.
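If I go down that road, the Twitter half could be as small as a few lines with a library like tweepy; a rough sketch, with the credentials and the post URL obviously stand-ins:

```python
import tweepy  # pip install tweepy

# Placeholder credentials; real values would come from a config file or environment variables.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Announce a freshly published article (title and URL are placeholders).
api.update_status("New post: Automating my publishing system https://example.com/new-post/")
```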

That'll be in a later post however.


  1. Thanks for nothing, Malcolm Turnbull.

  2. I'll come back to this notion when I release my forthcoming Ulysses review.
