Recently I purchased an iClever DisplayLink docking station (model 131-00001-35) to use with my laptop.

Next, start the DisplayLink service and list the available video providers. The xrandr --setprovideroutputsource option activates the second display.
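Something along these lines, assuming the driver install provides a displaylink.service unit and that the dock shows up as provider 1 (check `xrandr --listproviders` on your machine):

```sh
# Start the DisplayLink driver service
sudo systemctl start displaylink.service

# List video providers; the dock typically appears as a modesetting provider
xrandr --listproviders

# Route provider 1 (the dock) through provider 0 (the main GPU)
xrandr --setprovideroutputsource 1 0
```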

With the display running, xrandr will list available resolutions and refresh rates.

Screen 0 “eDP1” is my laptop screen with a resolution of 1600x900 and a refresh rate of 60 Hz.
Screen 1 “DVI-I-1” is my external display with a resolution of 1280x960.
Resolution can be set or changed with the xrandr command.
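For example, using the output name reported above (the mode and rate are illustrative):

```sh
xrandr --output DVI-I-1 --mode 1280x960 --rate 60
```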

Upon reboot, the changes are lost. To activate the dock automatically at boot, displaylink.service must be enabled via systemctl, which requires root privileges.
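For example, assuming the unit is named displaylink.service as above:

```sh
sudo systemctl enable displaylink.service
```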

The xrandr command needs to be executed after the X system is initialized during the boot process. Create an executable file with the required commands:
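A sketch of what displaylink.sh might contain; the provider numbers and output name are from my setup and may differ:

```sh
#!/bin/sh
# displaylink.sh - activate the DisplayLink output once X is up
xrandr --setprovideroutputsource 1 0
xrandr --output DVI-I-1 --mode 1280x960
```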

Make the file executable:
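Assuming the script is saved as displaylink.sh:

```sh
chmod +x displaylink.sh
```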

Go to Settings > Settings Manager > Session and Startup > Application Autostart > Add, and point the new entry at displaylink.sh.

With this configuration I need to be docked during boot so the second display can be recognized. If I dock after boot, I can manually execute ./displaylink.sh to activate the monitor. Without additional configuration I can close the lid of the laptop, deactivating the built-in screen but retaining signal to the external display.


# Sentiment analysis - DT matrix

As I work with various packages related to text manipulation, I am beginning to realize what a mess the R package ecosystem can turn into: a variety of packages written by different contributors with no coordination amongst them, overlapping functionality, colliding nomenclature, and many “convenience” functions where base R could do the job. I also noticed this with packages like dplyr. I have commenced learning dplyr on multiple occasions only to find I don’t need it - I can do everything with base R without loading an extra package and learning new terminology. The problem I now encounter is that as these packages gain in popularity, code snippets and examples use them, and I need to learn and understand the packages to make sense of the examples.

In my previous post on text manipulation I discussed the process of creating a corpus object. In this post I will investigate what can be done with a document term matrix. Starting with the previous post’s corpus:
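A minimal sketch, assuming corp is the corpus object built in the previous post:

```r
library(tm)

# Build a document-term matrix from the corpus
dtm <- DocumentTermMatrix(corp)
```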

There are a variety of methods available to inspect the document-term matrix:
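For example, using tm's accessors:

```r
inspect(dtm)        # summary plus a sample of the matrix
dim(dtm)            # documents x terms
head(Terms(dtm))    # first few terms
```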

Note the sparsity is 81%. Remove sparse terms and inspect:
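Something like the following, where the second argument is the maximum allowed fraction of documents a term may be absent from (the exact threshold here is illustrative):

```r
dtms <- removeSparseTerms(dtm, 0.2)
inspect(dtms)
```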

Now sparsity is down to 4%. Calculate word frequencies and plot as a histogram.
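A base-R sketch:

```r
freq <- sort(colSums(as.matrix(dtms)), decreasing = TRUE)
hist(freq, breaks = 50, main = "Word frequencies", xlab = "occurrences")
```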

We can use hierarchical clustering to group related words. I wouldn’t read much meaning into this for Picnic, but it is comforting to see the xml/html terms clustering together in the third group - a sort of positive control.
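One way to do this, clustering terms by their distance in document space (the distance and linkage choices are assumptions):

```r
d   <- dist(t(as.matrix(dtms)))        # distances between terms
fit <- hclust(d, method = "ward.D2")
plot(fit)
```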

{% asset_img clustr1.png %}

We can also use K-means clustering:
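For example, with an illustrative choice of three clusters:

```r
set.seed(1)
km <- kmeans(t(as.matrix(dtms)), centers = 3)
split(names(km$cluster), km$cluster)   # terms grouped by cluster
```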

Back here I didn’t mention that, when creating the epub, it would display fine on my computer but would not display on my Nook. A solution was to pass the file through Calibre. I diffed the files coming out of Calibre against my originals but was not able to determine the minimum set of changes required for Nook compatibility. You can download the Calibre-modified epub here, and the original here. If you determine what those Nook requirements are, please inform me.


# Sentiment analysis - Corpus

In a previous post on text manipulation I discussed text mining manipulations that could be performed with a data frame. In this post I will explore what can be done with a corpus. Start by importing the text manipulation package tm. tm has many useful methods for creating a corpus from various sources. My texts are in a directory as xhtml files, one per chapter. I will use VCorpus(DirSource()) to read the files into a corpus data object:
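A sketch, assuming the chapter files live in a directory named chapters:

```r
library(tm)

corp <- VCorpus(DirSource("chapters", pattern = "\\.xhtml$"))
```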

The variable “corp” is a 17 member list, each member containing a chapter. tm provides many useful methods for word munging, referred to as “transformations”. Transformations are applied with the tm_map() function. Below I remove white space, remove stop words, stem (i.e. remove common endings like “ing”, “es”, “s”), etc.:
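Roughly, using tm's built-in transformations:

```r
corp <- tm_map(corp, stripWhitespace)
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removeWords, stopwords("english"))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, stemDocument)   # strip common endings
```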

A corpus object allows for the addition of meta data. I will add two events per chapter, which may be useful as overlays during graphing:
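A sketch with placeholder labels (the real events are chapter-specific):

```r
for (i in seq_along(corp))
  meta(corp[[i]], "events") <- c("event one", "event two")
```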

The corpus object is a list of lists. The main object has 17 elements, one for each chapter, but each chapter element is also a list. The “content” variable of a chapter element is a list of the original xml file contents, with each element being either xml notation, a blank line, or a paragraph of text. Looking at the second chapter’s contents, corp[[2]]$content is a list of 18 elements. The first paragraph of the chapter begins with element 6:
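For example:

```r
length(corp[[2]]$content)   # 18 elements in chapter 2
corp[[2]]$content[[6]]      # the chapter's first paragraph
```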

This corpus is the end of the preprocessing stage and will be the input for the document term matrix discussed in the next post.


# PAHR Sentiment Network

In my previous post on sentiment analysis I used a dataframe to plot the trajectory of sentiment across the novel Picnic at Hanging Rock. In this post I will use the same dataframe of non-unique, non-stop, greater than 3 character words (red line from an earlier post) to create a network of associated words. Words can be grouped by sentence, paragraph, or chapter. I have already removed stop words and punctuation, so I will use my previous grouping of every 15 words in the order they appear in the novel. Looking at my dataframe rows 10 to 20:

You can see the column “group” has grouped every 15 words. First I create a table of word co-occurrences using the pair_count function, then I use ggraph to create the network graph. The number of co-occurrences is reflected in edge opacity and width. At the time of this writing, ggraph was still in beta and had to be downloaded from GitHub and built locally. The igraph package provides the graph_from_data_frame function.
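A sketch of the idea. pair_count has since moved from tidytext to the widyr package as pairwise_count, so that is what I show here; the data frame name and its word/group columns are assumptions:

```r
library(widyr)
library(igraph)
library(ggraph)

# Count co-occurrences of words within the same 15-word group
pairs <- pairwise_count(words.df, word, group, sort = TRUE)

g <- graph_from_data_frame(pairs)
ggraph(g, layout = "fr") +
  geom_edge_link(aes(edge_alpha = n, edge_width = n)) +
  geom_node_text(aes(label = name), repel = TRUE)
```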

Let’s regroup every 25 words:
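Assuming the words sit in novel order, regrouping is one line:

```r
words.df$group <- (seq_len(nrow(words.df)) - 1) %/% 25 + 1
```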

And now include only words with 5 occurrences or more:
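For example:

```r
counts <- table(words.df$word)
words5 <- words.df[words.df$word %in% names(counts)[counts >= 5], ]
```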

Share

# PAHR Sentiment Trajectory

In my previous post on sentiment I discussed the process of building data frames of chapter metrics and word lists. I will use the word data frame to monitor sentiment across the book. I am working with non-unique, non-stop, greater than 3 character words (red line from the previous post). Looking at the word list and comparing to the text, I can see that the words are in the order in which they appear in the novel. I will use the Bing sentiment determinations from the tidytext package to annotate each word as either positive or negative. I will then group by 15 words and calculate the average sentiment.
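A base-R sketch, assuming words.df (an assumed name) holds the words in novel order in a word column:

```r
library(tidytext)

bing <- get_sentiments("bing")
pos  <- bing$word[bing$sentiment == "positive"]
neg  <- bing$word[bing$sentiment == "negative"]

words.df$score <- ifelse(words.df$word %in% pos,  1,
                  ifelse(words.df$word %in% neg, -1, NA))
words.df$group <- (seq_len(nrow(words.df)) - 1) %/% 15 + 1
avg <- tapply(words.df$score, words.df$group, mean, na.rm = TRUE)
```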

Plot as a line graph, with odd chapters colored black and even chapters colored grey. I also annotate a few moments of trauma within the narrative.
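The basic line plot is one call (the chapter coloring and annotations are omitted from this sketch):

```r
plot(seq_along(avg), avg, type = "l",
     xlab = "15-word group", ylab = "mean sentiment")
```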

We can see that the novel starts with a positive sentiment - “Beautiful day for a picnic…” - which gradually moves into negative territory and remains there for the majority of the book.

Does sentiment analysis really work? That depends on how accurately word sentiment is characterized. Consider the word “drag”:
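The lexicon lookup is simple:

```r
subset(get_sentiments("bing"), word == "drag")
```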

There are many instances of the word drag annotated as negative. Consider the sentence “It’s a drag that sentiment analysis isn’t reliable.” That is drag in a negative context. In Picnic, a drag is a buggy pulled by horses, mentioned many times, imparting lots of undeserved negative sentiment to the novel. Drag in Picnic is neutral and should have been discarded. Inspecting the sentiment-annotated word list, many other examples like drag can be found, some contributing negative and some positive sentiment, probably cancelling each other out on average. Even more abundant are properly annotated words, which on balance may convey the proper sentiment. I would be skeptical, though, of any sentiment analysis performed without a properly curated word list.

In the next post I will look at what can be done with a corpus.


# Sentiment analysis

In my previous post on text manipulation I discussed the process of OCR and text munging to create a list of chapter contents. In this post I will investigate what can be done with a data frame; future posts will discuss using a corpus and a document term matrix.

Each chapter is an XML file so read those into a variable and inspect:
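A sketch, assuming the chapter files live in a chapters directory:

```r
files <- list.files("chapters", pattern = "\\.xhtml$", full.names = TRUE)
length(files)             # one file per chapter
head(readLines(files[1]))
```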

Create a dataframe that will provide the work list through which I will process the chapters, as well as hold data about each chapter; a sketch of the skeleton follows the list below. The dataframe will contain a row for each chapter and tally information such as:

• bname: base name of the chapter XML file
• chpt: chapter number
• paragraphs: number of paragraphs
• total: total number of words
• nosmall: number of small (<4 characters) words
• uniques: number of unique words
• nonstop: number of non-stop words
• unnstop: number of unique non-stop words
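A sketch of the skeleton, to be filled in as each metric is computed:

```r
d <- data.frame(bname = basename(files),
                chpt  = seq_along(files),
                paragraphs = NA, total = NA, nosmall = NA,
                uniques = NA, nonstop = NA, unnstop = NA)
```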

I will read the chapter XML files into a list and at the same time count the number of paragraphs per chapter:
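A sketch using the XML package (the //p XPath assumes one paragraph per p tag):

```r
library(XML)

chapters <- lapply(files, function(f)
  xpathSApply(htmlParse(f), "//p", xmlValue))
d$paragraphs <- sapply(chapters, length)
```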

Each quote from a character is given its own paragraph, so a high paragraph count is indicative of lots of conversation.

Next create a list for each parameter I would like to extract. Stop words are provided by the tidytext package:
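Roughly, in base R plus tidytext's stop word list:

```r
library(tidytext)
data(stop_words)

words   <- lapply(chapters, function(ch)
             unlist(strsplit(tolower(ch), "[^a-z']+")))
nosmall <- lapply(words,   function(w) w[nchar(w) > 3])
nonstop <- lapply(nosmall, function(w) w[!w %in% stop_words$word])

d$total   <- sapply(words,   length)
d$nosmall <- sapply(nosmall, length)
d$nonstop <- sapply(nonstop, length)
d$uniques <- sapply(words,   function(w) length(unique(w)))
d$unnstop <- sapply(nonstop, function(w) length(unique(w)))
```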

The word count trends are the same for all categories, which is expected. I am interested in the “Non-stop (Big words)” category, the red line, as I don’t want to normalize word dosage; i.e., if the word “happy” is present 20 times, I want the 20x dose of the happiness sentiment that I wouldn’t get using unique words. To visually inspect the word list I will simply pull out the first 50 words from each category for chapter 2:
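For example:

```r
lapply(list(nosmall = nosmall[[2]], nonstop = nonstop[[2]]), head, 50)
```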

Comparing nosmall to nonstop, the first two words eliminated are words 9 and 24, “several” and “through”, two words I would agree don’t contribute to sentiment or content.

Next I will make a wordcloud of the entire book. To do so I must get the words into a dataframe.
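A sketch with the wordcloud package:

```r
library(wordcloud)

all.words <- unlist(nonstop)
freq <- sort(table(all.words), decreasing = TRUE)
wordcloud(names(freq), as.numeric(freq), max.words = 100)
```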

Appropriately “rock” is the most frequent word. The word cloud contains many proper nouns. I will make a vector of these nouns, remove them from the collection of words and re-plot:
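For example (an illustrative subset of the proper nouns):

```r
proper <- c("miranda", "appleyard", "irma", "mademoiselle")
filtered <- all.words[!all.words %in% proper]
```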

In the next post I will look at what can be done with a corpus.


# ebook text manipulation

In my first post on creating an ebook I discussed the physical manipulation required to convert a paperback book into images and ultimately text files. Now I want to convert the text files into an ebook. Here is the sequence of events:

1. Organize text in chapter/page order
2. Read into a list, combining pages into chapters
3. Remove ligatures, common misspellings, combine hyphenated word fragments
4. Annotate with ebook XML tags
5. Generate the ebook

## Organize text

I start with my dataframe listing all files and their page numbers and read each individual page text file into an R list.
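A sketch, assuming the data frame d2 from the previous post, with file names in fname and the text files in a text directory:

```r
pages <- lapply(file.path("text", d2$fname), readLines)
```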

By counting the number of rows associated with each chapter in the dataframe, I determine the number of pages per chapter, then combine those pages into a list by chapter, 17 chapters total for Picnic. I will not annotate individual pages with page numbers, but will combine all pages into a chapter and let the epub format handle the flow.
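For example:

```r
chapters <- lapply(split(pages, d2$chptr), unlist)
length(chapters)   # 17
```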

## Remove ligatures, correct misspellings

OCR will have introduced many misspellings, some of which can be corrected in bulk. I also want to remove ligatures, as they will interfere with word recognition when I am performing spell checking. Finally, the typesetting process introduces many hyphenated words at the ends of lines to preserve readability. I want to remove these and let the epub flow the text instead.

I create the function replaceforeignchars, which will replace ligatures and common misspellings. The replacements to be executed are tabulated in the table “fromto”:
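A sketch; the real fromto table is longer:

```r
fromto <- data.frame(from = c("ﬁ", "ﬂ", "teh"),   # fi/fl ligatures, a common typo
                     to   = c("fi", "fl", "the"),
                     stringsAsFactors = FALSE)

replaceforeignchars <- function(x) {
  for (i in seq_len(nrow(fromto)))
    x <- gsub(fromto$from[i], fromto$to[i], x, fixed = TRUE)
  x
}

chapters <- lapply(chapters, replaceforeignchars)
```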

With pages combined into chapters, remove hyphenated words at the ends of lines.
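One simple regex approach (a sketch, not necessarily the route taken here):

```r
# Rejoin words split across lines: "frag-\nment" -> "fragment"
chapters <- lapply(chapters, function(ch)
  strsplit(gsub("-\n", "", paste(ch, collapse = "\n")), "\n")[[1]])
```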

In practice this didn’t work so well. Tokenizing a sentence removes capitalization, which then has to be manually corrected. There were also occasions where a line was duplicated, and this had to be manually corrected. I decided to remove hyphens manually while editing the text.

Next I print out each chapter as a page with xhtml annotation:
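A minimal sketch of the xhtml wrapper:

```r
for (i in seq_along(chapters)) {
  out <- c('<?xml version="1.0" encoding="utf-8"?>',
           '<html xmlns="http://www.w3.org/1999/xhtml"><body>',
           paste0("<p>", chapters[[i]], "</p>"),
           "</body></html>")
  writeLines(out, sprintf("chapter%02d.xhtml", i))
}
```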

A useful command is saveRDS which allows for the saving of R objects. Here I save my list, which I can read back into an object, modify, and resave.
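For example:

```r
saveRDS(chapters, "chapters.rds")
chapters <- readRDS("chapters.rds")   # read back, modify, resave
```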

The package qdap provides an interactive method, check_spelling_interactive, for spell checking. A dialog box will pop up for each unrecognized word in turn, providing you with a pick list of potential corrections or the opportunity to type in a correction manually.
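Invocation is roughly:

```r
library(qdap)
chapters[[2]] <- check_spelling_interactive(chapters[[2]])
```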

I found that the pick list often did not provide the appropriate choice, capitalization is not preserved, and Picnic has many slang words that forced interaction with qdap too frequently. I decided to read through the text and correct manually.

Here is qdap flagging the French ‘alors’. There are settings for qdap that may improve the word choices available, but I did not spend the time investigating.

Once the chapters have been edited and proofread, it is time to create the epub. An epub is a zip file with the extension “.epub”. It also has a well-defined directory layout and required files that define chapters, images, flow control, etc. ePub specifications and tutorials are readily available online. Here I will show examples of some of the epub contents.

File toc.ncx:

File container.xml:
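container.xml is small and largely boilerplate; the full-path below assumes the package document is OEBPS/content.opf:

```xml
<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```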

File - an example chapter:

Once the files are in order they are zipped into an epub. Navigate to the directory containing your files and:
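Reconstructing the command from the switches listed below (the epub name is illustrative):

```sh
zip -X -r -9 -D ../picnic.epub . -x .DS_Store
```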

Some of the switches I am using:

-X: Exclude extra file attributes (permissions, ownership, anything that adds extra bytes)

-r: Recurse into directories

-9: Slowest but most optimized compression

-D: Do not create directory entries in the zip archive

-x .DS_Store: Don’t include Mac OS X’s hidden folder-metadata file

The next post in this series discusses sentiment analysis.


# Create an eBook

One of my all-time favorite movies is Picnic at Hanging Rock by Peter Weir. Every scene is a painting, and the atmosphere transports you back to the Australian bush of 1900. The movie is based on a book by Joan Lindsay, who had the genius to leave the plot’s main mystery unresolved. During her lifetime she never discouraged anyone from claiming the book was based on real events. After her death in 1984 a “lost” final chapter was discovered, which purportedly resolved the mystery. Most (including myself) believe the final chapter is a hoax.
Recently on R-bloggers there has been a run on articles discussing sentiment analysis. I thought it would be fun to text mine and sentiment analyze Picnic. I purchased a paperback version of the book years ago, which I read while on vacation.

My book is old and the pages are yellowing. Time to preserve it for posterity.
In this post I will discuss converting a paperback into an ebook. Future posts will discuss the text mining/sentiment analysis. The steps are:

1. Cut off the spine
2. Scan the pages, one image per page
3. Perform OCR (optical character recognition)
4. Assemble the text in page order

As an aside, one of the most impressive crowd sourcing pieces of software I have seen is Project Gutenberg’s Distributed Proofreaders website. Dump in your scanned images and the site will coordinate proofreading and text assembly. Procedures are in place for managing the workflow, resolving discrepancies, motivating volunteers, etc. Picnic doesn’t qualify for this treatment as it is not in the public domain. I will have to do it myself.

## Cut off the spine

I used a single edge razor blade. Cut as smoothly and straight as possible. Keep the pages in numerical order.

## Scan

I have an HP OfficeJet 5610 All-in-One multifunction printer equipped with a document feeder. I am working with Debian Linux, so I use Xsane as the scanning software. Searching the web I found a lot of discussion concerning the optimum resolution, color, and file format for images destined for OCR. I decided on 300 dpi grayscale TIFF, which in retrospect was a good choice. I load one chapter at a time onto the document feeder, positioned so that the smooth edge enters the feeder first. This results in odd pages being rotated 90 degrees counterclockwise and even pages being rotated 90 degrees clockwise. Xsane will auto-number the images, but I supply a prefix following the convention “chptNN[e|o]-NNNN”, where NN is the chapter number, e or o marks even or odd pages, and NNNN is the Xsane-supplied image number. The image number starts at 1 for each (even or odd) set of chapter pages.
Once all images are scanned, I will need to rotate either 90 or 270 degrees to prepare for OCR, using the rotate.image function from the adimpro package. I use the following code, depositing the rotated images in a separate directory:
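A sketch with adimpro; the angle per even/odd page is an assumption to verify against a sample image:

```r
library(adimpro)

files <- list.files("scans", pattern = "\\.tif$", full.names = TRUE)
for (f in files) {
  img <- read.image(f)
  ang <- if (grepl("o-", f)) 90 else 270   # odd pages one way, even the other
  write.image(rotate.image(img, angle = ang),
              file.path("rotated", basename(f)))
}
```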

## OCR

Next perform OCR on each image. I use tesseract from Google which has a Debian package.
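Tesseract takes the image and an output base name:

```sh
for f in rotated/*.tif; do
  tesseract "$f" "text/$(basename "$f" .tif)"   # writes text/<name>.txt
done
```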

Seems to work well. Here is a comparison of image and text:

## Assemble text

I need to create a table of textfile name, page number, words per page etc. to coordinate assembly of the final text and assist with future text mining. Here are the contents of the all.files variable:
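For example:

```r
all.files <- list.files("text", pattern = "\\.txt$")
head(all.files)
```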

Make a data.frame extracting relevant information from the filenames:
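A sketch, assuming the names follow the chptNN[e|o]-NNNN convention:

```r
d2 <- data.frame(fname = all.files,
                 chptr = as.numeric(sub("chpt(\\d+).*", "\\1", all.files)),
                 eo    = sub("chpt\\d+([eo]).*", "\\1", all.files),
                 img   = as.numeric(sub(".*-(\\d+)\\.txt$", "\\1", all.files)),
                 stringsAsFactors = FALSE)
```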

If I look at some random pages, I can see that usually the second to the last line has the page number, when it exists on a page:

Many of the page numbers are corrupt, i.e. the OCR has thrown in random characters by mistake. I make note of these characters and use gsub to get rid of them. Some escape my efforts, but enough are accurate that I can compare the extracted page number to the expected page number, determined by the order in which the pages were fed into the scanner.
I will extract the second to the last line (stll) and include it in my table:
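A sketch:

```r
d2$stll <- sapply(file.path("text", d2$fname), function(f) {
  lines <- readLines(f)
  lines[length(lines) - 1]                       # second-to-last line
})
d2$pnumber <- as.numeric(gsub("[^0-9]", "", d2$stll))  # strip OCR junk
```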

For the expected page number, create a column “chpteo” which is the concatenation of chptr number and e or o for even odd. Sequentially number these by 2.
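Roughly (the true starting page of each set would still need to be added in):

```r
d2$chpteo <- paste0(d2$chptr, d2$eo)
d2$page   <- ave(d2$img, d2$chpteo, FUN = function(x) (x - min(x)) * 2)
```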

Here is what my data.frame “d2” looks like:

“page” is the expected page number based on scanning order.
“pnumber” is the OCR extracted page. Compare them:
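For example:

```r
mean(d2$page == d2$pnumber, na.rm = TRUE)   # fraction that agree
```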

Looks good. There are some OCR errors but enough come through to verify that the order is correct. Now I can sort on page and use that order to assemble the ebook. Read each page file and append to an output file. Since I want to be able to refer to images to correct problems, I also insert the image information between text files:
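A sketch of the assembly loop:

```r
d2 <- d2[order(d2$page), ]
con <- file("book.txt", "w")
for (i in seq_len(nrow(d2))) {
  writeLines(paste0("==== ", d2$fname[i], " ===="), con)  # image marker
  writeLines(readLines(file.path("text", d2$fname[i])), con)
}
close(con)
```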

Here is what a page junction looks like:

You can see the page number when present, which will provide a method to confirm the correct order. The file name is included, which will allow me to go back to the original image during the proofreading process to verify words I may be uncertain of.

It would be nice to have the image and text juxtaposed during the proofreading process. To see what this looks like, take a look at Project Gutenberg’s Distributed Proofreaders website. I will have to read on a device that allows me to refer to the images when needed. Once the proofreading is complete, I will be ready for sentiment analysis.

The next post in this series discusses text manipulation.


# Euler-98

https://projecteuler.net/problem=98

By replacing each of the letters in the word CARE with 1, 2, 9, and 6 respectively, we form a square number: 1296 = 36^2. What is remarkable is that, by using the same digital substitutions, the anagram, RACE, also forms a square number: 9216 = 96^2. We shall call CARE (and RACE) a square anagram word pair and specify further that leading zeroes are not permitted, neither may a different letter have the same digital value as another letter.

Using words.txt, a 16K text file containing nearly two-thousand common English words, find all the square anagram word pairs (a palindromic word is NOT considered to be an anagram of itself).

What is the largest square number formed by any member of such a pair?

NOTE: All anagrams formed must be contained in the given text file.

There are 1786 words; the longest is 14 characters and the shortest is 1 character. Since we have already been given the square anagram word pair CARE/RACE, I will assume the answer is greater than 4 characters and ignore all 1-4 character words.
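Read and filter the word list:

```r
words <- scan("words.txt", what = "character", sep = ",")
length(words)                      # 1786
words <- words[nchar(words) > 4]   # ignore 1-4 character words
```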

Write a function compare.word that will take 2 words, sort the characters and determine if the two words have the same characters.
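A sketch:

```r
compare.word <- function(a, b) {
  if (a == b) return(FALSE)   # a word is not an anagram of itself
  identical(sort(strsplit(a, "")[[1]]), sort(strsplit(b, "")[[1]]))
}
compare.word("CARE", "RACE")   # TRUE
```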

There are 19 anagrams with lengths of 5, 6, 8, or 9 characters.

Now figure out how many squares of length 5, 6, 8, or 9 exist.

These squares have roots between 100 and 31622, since 100^2 is the smallest 5-digit square and 31623^2 already has 10 digits.
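Enumerate them, keeping only the digit lengths that match an anagram:

```r
squares <- (100:31622)^2
squares <- squares[nchar(squares) %in% c(5, 6, 8, 9)]
length(squares)   # about 29,000
```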

There are about 29,000 such squares. Assign a square to a word, rearrange its digits according to the anagram partner, and determine if the resulting number is also a square.
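A sketch of the check for one anagram pair; chartr applies the letter-to-digit substitution:

```r
square_pairs <- function(w1, w2, squares) {
  best <- c()
  for (s in squares[nchar(squares) == nchar(w1)]) {
    m <- unique(data.frame(l = strsplit(w1, "")[[1]],
                           d = strsplit(as.character(s), "")[[1]],
                           stringsAsFactors = FALSE))
    # mapping must be consistent (one digit per letter) and one-to-one
    if (anyDuplicated(m$l) || anyDuplicated(m$d)) next
    t <- chartr(paste(m$l, collapse = ""), paste(m$d, collapse = ""), w2)
    if (substr(t, 1, 1) != "0" && as.numeric(t) %in% squares)
      best <- c(best, max(s, as.numeric(t)))
  }
  best
}

max(square_pairs("BROAD", "BOARD", squares))   # 18769
```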

CENTRE assigns different digits to its two E’s, violating one of the rules, so BROAD/BOARD is the anagram pair and the largest square is 18769.


# Euler-17

https://projecteuler.net/problem=17

If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.

If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?

NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of “and” when writing out numbers is in compliance with British usage.
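A minimal base-R sketch of one way to count the letters (the spellings concatenate without spaces or hyphens, per the note):

```r
ones <- c("one","two","three","four","five","six","seven","eight","nine",
          "ten","eleven","twelve","thirteen","fourteen","fifteen","sixteen",
          "seventeen","eighteen","nineteen")
tens <- c("twenty","thirty","forty","fifty","sixty","seventy","eighty","ninety")

spell <- function(n) {
  if (n == 1000) return("onethousand")
  h <- n %/% 100; r <- n %% 100
  s <- if (h > 0) paste0(ones[h], "hundred", if (r > 0) "and" else "") else ""
  if (r > 0)
    s <- paste0(s, if (r < 20) ones[r]
                   else paste0(tens[r %/% 10 - 1],
                               if (r %% 10 > 0) ones[r %% 10] else ""))
  s
}

sum(nchar(sapply(1:1000, spell)))   # 21124
```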
