Friday, June 08, 2018

nice historiographical point

From a review of a history book:

"Written more in the tradition of Burckhardt and Braudel than Marx and Hobsbawm, the book is an exercise in seeing rather than storytelling."

Tuesday, May 08, 2018

Photo of Prof. Robert Charles Zaehner

Photos of Prof. Robert Charles Zaehner are rare.  One was published in his obituary in the journal Iran 13 (1975):

Tuesday, May 01, 2018

Etymology is not Lexicography

To find the meaning of a word, one must examine many sentences and then think about the use of that word in many contexts. 
See Franklin Edgerton, "Etymology and Interpretation," Proceedings of the American Philosophical Society, 1938, 705-14.

Wednesday, April 25, 2018

Online issues of Indological journals

I repeatedly find myself searching for back issues of journals, and between searches I forget what I learned the first time.  Here are some notes to remind myself of what can be found online.  I'm writing this in April 2018, so moving walls will have moved in future years.
  • Acta Orientalia (Hungarian)
  • Acta Orientalia (Scandinavian)
    • some issues, but I can't find a full run
  • Adyar Library Bulletin
    misc articles: private
  • Asiatische Studien ETH Zuerich
  • Bulletin de l'École française d'Extrême-Orient 1901-2014 JSTOR
  • Bulletin of SOAS 1917-1940 JSTOR
    1940-2012 JSTOR
  • Indian Historical Quarterly (1925-1963, then NS)
  • Indian Historical Review, published since 1974 by the Indian Council of Historical Research
  • Indo-Iranian Journal 1957-2012 JSTOR
  • Indologica Taurinensia 1984, 1995-2008 DHIPR at Univie
    1973-2014 (all issues)  Asia Institute, Torino
  • Journal asiatique (with gaps):
  • Journal of the Asiatic Society of Bengal
  • Journal of the Asiatic Society of Bombay
  • Journal of the Royal Asiatic Society
    • 1824-1834 JSTOR (Transactions of the Royal Asiatic Society of Great Britain and Ireland)
    • 1834-1990 JSTOR (The Journal of the Royal Asiatic Society of Great Britain and Ireland)
    • 1991-2013 JSTOR (Journal of the Royal Asiatic Society)
  • Philosophy East and West 1951-2014 JSTOR
  • The Pandit many issues identified and linked at Shreevatsa's blog.
  • Wiener Zeitschrift für die Kunde Süd- und Ostasiens
    1957-1983 private
    1991-2013 JSTOR, five-year moving wall.
  • Zeitschrift der Deutschen Morgenländischen Gesellschaft
    1847-2017 JSTOR

Tuesday, April 17, 2018

Improving PDFs

I sometimes process PDFs, if I think they are important or I want to read them more conveniently.  I was trying to explain my techniques to my students recently, and I realized that I use a mixture of tools that are not at all obvious or easy to explain to someone unfamiliar with Unix.
So I'm going to write down here what I do, so that at least the information is available in one place.  I assume a general knowledge of Linux and an ability to work with command-line commands.

If I receive a PDF that is a scanned book, with 1 PDF page = 1 book opening (i.e., two facing pages), I want to chop it up so that 1 PDF page = 1 book page.
  • make a working directory
  • use pdftk to unpack the PDF into one file per page:
    > pdftk foobar.pdf burst
  • I now have a directory full of one-page PDFs.  Nice.
  • convert them into jpegs using pdf2jpegs, a shell script that I wrote that contains this text:
    #!/bin/sh
    # convert a directory full of one-page pdfs into jpegs, at 400 dpi
    for i in *.pdf; do pdftoppm -jpeg -r 400 "$i" > "$i.jpg"; done
  • I now have a directory full of jpegs, one jpeg per page.
  • Start the utility scan-tailor and use it to
    • separate left and right pages into separate files
    • straighten the pages
    • select the text area of each page
    • create a margin around the text
    • finally, write out the resulting new pages
  • I now have a directory (../out) full of TIFF files, one page per file, smart.
  • Combine the TIFFs into a single PDF using my shell script tiffs2pdf:
    #!/bin/sh
    # Create a PDF file from all the .tif files in the current directory.
    # The single argument gives the name of the output file (".pdf" is appended).
    echo "Create a PDF from a directory full of .tif files"
    echo "Single argument - the filename of the output PDF (no .pdf extension)"
    tiffcp *.tif "/tmp/${1}.tiff"
    tiff2pdf "/tmp/${1}.tiff" > "${1}.pdf"
    echo "Created ${1}.pdf"
    rm "/tmp/${1}.tiff"
    echo "Removed temporary file /tmp/${1}.tiff"

    # thanks to
  • I now have a nice PDF that has one smart page per PDF page. 
  • If I want it OCRed, then I usually use Adobe Acrobat, a commercial program. But if I'm uploading to, that isn't necessary, because the site does the OCR work itself, using Abbyy.
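The steps above, up to the interactive scan-tailor stage, can be sketched as a single shell function.  This is only a sketch of my workflow, assuming pdftk and pdftoppm are installed; the function name and directory layout are my own invention here.

```shell
# Sketch: unpack a scanned book into one jpeg per page, ready for
# scan-tailor.  Assumes pdftk and pdftoppm are on the PATH.
book2pages() {
    pdf="$1"
    dir="${pdf%.pdf}-work"
    mkdir -p "$dir" && cp "$pdf" "$dir" && cd "$dir" || return 1
    pdftk "$pdf" burst                      # one PDF file per page
    for i in pg_*.pdf; do                   # one jpeg per page, 400 dpi
        pdftoppm -jpeg -r 400 "$i" > "${i%.pdf}.jpg"
    done
    echo "Now run scantailor here, then tiffs2pdf in its out/ directory."
}
```

The scan-tailor and tiffs2pdf stages remain manual, since they need human judgement about page content and margins.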

That's all, folks!

Sunday, February 04, 2018

My personal protest withdrawal from USA academic conferences

I regret to say that I am cancelling my visit to this year's USA conferences.  Several USA conferences are among my favourite academic gatherings, and I will sorely miss the occasions.  I have made this difficult decision for political and personal reasons.

Like many, I am deeply dismayed by the statements and policies of the current USA government.  The statements and decisions issued by the USA government over the last year, including the disgracefully vulgar, racist statements just last month, have been increasingly repellent.

I have been torn about whether to travel to the USA to work with and support all my liberal, educated, humanitarian friends there, or whether to take a moral stand and treat the USA as a pariah state.  I still do not feel certain about this matter.  Perhaps it is better to go to the USA and support my embattled friends and colleagues?  Last year, after soul-searching, I went to the American Oriental Society conference in LA.  But after a year of thinking about these issues, I have decided that I will take a different stand at this time, and stay away.

My thinking on these issues was nudged forward decisively by a recent report that I heard on the BBC World Service, from an Indian journalist who described the surprise, compulsory biometric facial scanning that she was subjected to at Dulles when attempting to leave the USA.  She was told that it was mandatory for non-USA citizens and that she could be detained in the USA if she did not comply.  The USA's Department of Homeland Security has recently introduced this biometric face-scanning technology at many airports for departing passengers (see here). The technology has been shown to be deeply flawed technically, morally and legally (see the NY Times report of Dec 21, and the Georgetown University Law School report).  Amongst other profound problems, the software over-targets women and people with dark skin, producing false positives in 4% of these cases.  As the Georgetown report finds,

Innocent people may be pulled from the line at the boarding gate and subjected to manual fingerprinting at higher rates as a result of their complexion or gender.
The technology, compulsorily applied to all non-USA citizens, is demeaning and invasive, and violates an individual's right to privacy.
USA Immigration has already assumed the right to require all social media passwords, to review up to five years of past social media postings, and to copy all data from mobile phones or laptops [1, 2, etc. etc.].  This again violates individual privacy rights.  It also requires individuals to violate the terms of their contracts with service providers, who require that login details not be shared.  In the case of university staff such as myself, it also violates the University's terms of use of my laptop, which contains or may refer to private information concerning my students.  I am required by the University of Alberta to keep my laptop and phone encrypted, and I may not share the data with anyone outside the University.

Canada recently agreed, under Bill C-23, that USA immigration authorities can arrest people while they are still in Canada, when they go through the USA immigration process at Canadian ports (1, 2, 3, etc. etc.).

I am a British citizen and a European citizen living as a resident in Canada.  I have no criminal record, nor any specific reason to expect difficulty entering or leaving the USA.  While I am ashamed by the need to assert it, I am a white, Caucasian, male university professor.  From the point of view of USA governance, I am almost as good as a Norwegian.  But I am acutely aware that many good people in the USA, or entering and leaving the USA, including now my Indian friends, do not have the same automatic advantages of skin colour, gender or citizenship.  Families are being split up, and people are being forced to travel to war-torn countries or are otherwise being denied the American promise of safety symbolized since 1886 by the Statue of Liberty.  All international travellers are routinely being subjected to threatening, invasive and demeaning procedures.

For all these reasons, I have decided that I wish to protest against USA governance and USA immigration policies and procedures.  I do this both in solidarity with my friends, and as a matter of personal choice, because I do not wish to expose myself personally to the USA immigration service's demeaning and threatening procedures.

I am extremely uncertain about this decision. Perhaps all the above reasons should rather drive me to visit my colleagues in the USA and offer them friendship and collegiality in difficult times. I don't know what the right answer is. Last year, I had many of the same misgivings, but I decided to visit the USA. On this occasion I'm taking the opposite decision, and staying away.

I apologize to my friends in the USA and I look forward to happier times.  I only hope my protest contributes in a small way to encouraging good people in the USA to vote wisely and to lobby their representatives energetically.

Saturday, August 19, 2017

Mini edition environment for LaTeX

When writing an article or book using the LaTeX document preparation system, Indologists sometimes want to have a śloka or two printed on the page with some text-critical notes.  A famous example of this kind of layout is Wilhelm Rau's 1977 edition of the Vākyapadīya, which edits the whole text this way, each verse having its own mini-critical-edition format.

Here is a simple way to get this kind of layout in LaTeX.  I create a new environment called "miniedition":

    {\addtolength{\textwidth}{-\rightmargin} % width of the quote environment

This puts a minipage environment inside a quote environment, switches on italics and switches off the footnote rule.  It's pretty simple.  The clever bit is done by the minipage environment itself, that makes footnotes inside its own box, not at the bottom of the page.  The footnote numbers are lowercase alphabetical counters, to avoid confusion with footnotes outside the environment.
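A minimal sketch of such an environment, following the description just given, might look like this (the environment name and details here are illustrative, not necessarily the exact definition):

```latex
% Sketch of a "miniedition" environment: a minipage inside a quote,
% in italics, with no footnote rule above the apparatus.
\newenvironment{miniedition}
  {\begin{quote}\itshape
   \begin{minipage}{\linewidth}
   \renewcommand{\footnoterule}{}}  % no rule above the notes
  {\end{minipage}\end{quote}}
% Inside a minipage, \footnote{} automatically uses the mpfootnote
% counter, which LaTeX prints as lowercase letters by default.
```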
Here's how you would use it, and the result:

pāraṃparyād \emph{ṛte ’pi}\footnote{N: \emph{upataṃ}?} svayam 
\emph{anubhavanād}\footnote{My conjecture; both manuscripts are one syllable 
short. K: 
\emph{anubhavad}; N: \emph{anubhavād}.} granthajārthasya samyak\\
pūrṇābdīyaṃ phalaṃ sadgrahagaṇitavidāṃ \emph{aṃhrireṇoḥ}\footnote{N: 
with identical meaning.} \emph{prasādāt}\footnote{N: \emph{prasādaḥ}.}||

Output (with added text before and after):
The miniedition text is indented left and right, and followed immediately by the critical notes.  The footnotes at the bottom of the page are a separate series.  In both the main body text and the miniedition, you just use \footnote{} for your notes; LaTeX does the right thing by itself according to context. 
The miniedition environment does not break across pages; it is meant for short fragments of text, one or two ślokas. 
This example is taken from Gansten 2017.

Monday, July 24, 2017

Biblatex, citations and bibliography sorting

"I want to sort in-text citations by year, but the bibliography by name."  So begins one of the questions on Stackexchange.  That's just what I want too.

I want to put multiple references in a \cite{} command without caring about their sequence, and have them automatically print in year-order.  Then, I want my bibliography to be ordinary, i.e., printed in author-date order.

The discussion at the above site is tricky, but the answer by moewe works.

In a nutshell here's what you actually do:
Hello world!\footcites{ref1,ref3,ref0,ref4}

This will print your footnoted citations in ascending order of year, and your bibliography in ascending order of author.
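I won't reproduce moewe's answer here, but for orientation, here is a sketch of one documented biblatex mechanism (reference contexts) that produces this kind of split sorting.  Treat it as an illustration under my own assumptions, not the exact solution from the linked discussion:

```latex
% Sketch: year-sorted citations, name-sorted bibliography.
% Assumes a reasonably recent biblatex; this is not necessarily
% the mechanism used in moewe's Stackexchange answer.
\usepackage[style=verbose, sorting=ynt, sortcites=true]{biblatex}
% In the body, multiple keys in one cite command print in year order:
%   Hello world!\footcites{ref1,ref3,ref0,ref4}
% At the end, print the bibliography under a name-sorted context:
%   \newrefcontext[sorting=nyt]
%   \printbibliography
```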

Saturday, June 17, 2017

Expanded Devanāgarī font comparison

In 2012 I posted a comparison of some Devanāgarī fonts that were around at the time.

Here's an update, with some more fonts and more concise TeX code:


% set up a font, print its name, and typeset the test text:
\newcommand{\FontTrial}[1]{ %
    \setmainfont[Mapping=RomDev]{#1} % select the font
    % print the font name:
    {\eng #1} \TestText }

\newcommand{\TestText}{ = शक्ति, kārtsnyam ṣaṭtriṃśad;
{\addfontfeatures{Language=Hindi} Hindī =
        शक्ति कार्त्स्न्यम्}\par}


\FontTrial{Sanskrit 2003}

\setmainfont[FakeStretch=1.08,Mapping=RomDev]{Sanskrit 2003}
\newfontfamily\eng[FakeStretch=1.08,Language=English]{Sanskrit 2003}
{\eng Sanskrit 2003+} \TestText

\FontTrial{Murty Hindi}
\FontTrial{Murty Sanskrit}
% ... etcetera



Lessons learned:
  • Only Sanskrit 2003, Murty Sanskrit, Chandas, Uttara, Siddhanta, and Shobhika do a full conjunct consonant in ṣaṭtriṃśad.  The others fake it with a virāma.
  • Akshar Unicode's "prasanna" has a lazy horizontal conjunct.
  • Free Sans and Free Serif are the only fonts that distinguish Sanskrit and Hindi (see kārtsnyam).
  • Nakula, Sahadeva, Murty Hindi, Shobhika, Annapurna, Akshar Unicode, Kalimati, and Santipur do a lazy, horizontal conjunct consonant in kārtsnyam.
  • There's a special issue affecting FreeSans and FreeSerif, which I described in a post in 2012.  The publicly distributed version of the fonts fails to form some important conjunct consonants, like त्रि and प्र, correctly.  Unfortunately this has not changed in the intervening five years. The examples shown here use a fresh compilation of the fonts, based on downloading and compiling the development version at the Savannah repository (June 2017).  (Here's a link to my compiled fonts.)  This Savannah development version works better for Devanagari, but has problems elsewhere, according to their author Stevan White.

Thursday, June 15, 2017

Preserve the Mess

Many years ago, I attended a Digital Humanities conference, Toronto 1989 I think it was, and heard a paper by Jocelyn Small about using digital tools to manage large datasets.  She was talking about images, but her ideas applied to any data.

One of her key slogans was, "Preserve the Mess."  This approach is now completely normalized by Google search, Google Mail, etc., and we all take it for granted.  But it's worth remembering that this was a major conceptual breakthrough.

Before this approach, everyone thought that the way to find stuff was to use subject indexes.  And subject indexing is expensive, difficult, subjective and structurally imperfect.  What subject headings would you use for the Mahābhārata, for example? I think most people would agree that it is difficult to impossible to arrive at a simple statement of the subject matter of the Mbh that is actually worth having.  Of course, we can all play nothing-buttery, "the Mbh is nothing but a family quarrel," but that's not a serious approach to the problem.  If we pervade the epic with our keywords and subject index terms, we are trying to make the text more accurate than it is, and our exercise is culture-bound and subjective.

"Preserving the mess" means that we leave the data alone.  Rather, we put the intelligence and power into our tools for accessing the data.  We use fuzzy-matching, pattern recognition, machine learning, but all applied to the raw data which is not itself manipulated or changed.

A published version of Small's ideas appeared in 1991:

As she says, p. 52,
Thus Principle Number One is Aristotelian: "Do not make your datum more accurate than it is." This principle may be rephrased as, "Preserve the Mess."

Tuesday, June 13, 2017

Dell Latitude xinput settings


Put the following commands in a file, make the file executable (chmod +x), and then run it.

xinput --set-prop "AlpsPS/2 ALPS DualPoint Stick" "Device Accel Constant Deceleration" 8
xinput --set-prop "AlpsPS/2 ALPS DualPoint Stick" "Device Accel Velocity Scaling" .8
xinput --set-prop "AlpsPS/2 ALPS DualPoint Stick" "Device Accel Adaptive Deceleration" 8

You can run this command on startup from the Startup Applications menu.

Friday, June 09, 2017

Lining or Oldstyle numerals in math typesetting?

The classic work,

has the following remarks in paragraph 95, p. 63:
Relative size of numerals in tables.-- André says on this point: "In certain numerical tables, as those of Schrön, all numerals are of the same height. In certain other tables, as those of Lalande, of Callet, of Houël, of Dupuis, they have unequal heights: the 7 and 9 are prolonged downward; 3, 4, 5, 6 and 8 extend upward; while 1 and 2 do not reach above nor below the central body of the writing.... The unequal numerals, by their very inequality, render the long train of numerals easier to read; numerals of uniform height are less legible." (D. André, Des notations mathématiques (Paris, 1909), p. 9).

Thursday, May 18, 2017

YAAC ("yet again about copyright")

Some sensible remarks from the Director of UofA's copyright office.  Importantly, the UofA relinquishes its rights to the copyright of work written by faculty members.  Faculty members own the copyright of their writings.

Monday, May 15, 2017

IBUS bug fix ... again (sigh!)

Further to my earlier post, I found the same bug cropping up in Linux Mint 18.1, with IBUS 1.15.11.

Some applications don't like IBUS + m17n and certain input mim files: for example, LibreOffice and JabRef.  Trying to type "ācārya" gives the result "ācāry a".  And in other strings, some letters are transposed: "is" becomes "si", and so forth.

Here's the fix.

Create a file with the following one-line content:
Copy the file to the directory /etc/profile.d/, like this:
sudo cp /etc/profile.d
Make the file executable, like this:
sudo chmod +x /etc/profile.d/
Logout and login again.


This fixes the behaviour of IBUS + m17n with most applications, including LibreOffice and Java applications like JabRef.  However, some applications compiled with QT5 still have problems.  So, for example, you have to use the version of TeXStudio that is compiled with QT4, not QT5. [Update September 2018: QT5 now works fine with IBUS, so one can use the QT5 version of TeXstudio with no problem.]

Wednesday, March 22, 2017

Dvandva compounds of adjectives

Some discussions supporting the existence of this formation:
  • Speyer, Sanskrit Syntax, para 208
  • Whitney, para 1257
  • Burrow, The Skt Language, p.219

Thursday, September 29, 2016

What's the point of an academic journal?


I find that I often read my academic colleagues' papers at online repositories, or they send me their drafts directly.  I am not always aware of whether the paper has been published or not.  Sometimes I can see that I'm looking at a word-processed document (double spacing, etc.); at other times the paper is so smart that it's impossible to distinguish from a formally published piece of writing (LaTeX etc.).

Reading colleagues' drafts gives me access to the cutting edge of recent research.  Reading in a journal can mean I'm looking at something the author had finished with one, two or even three years ago.  In that sense, reading drafts is like attending a conference.  You find out what's going on, even if the materials are rough at the edges.  You participate in the current conversation.

In many ways, reading colleagues' writings informally like this is closer to the older ways of knowledge-exchange that were dominated by letter-writing.  The most famous example is Mersenne (fl. 1600), who was at the centre of a very important network of letter-writers, and who just preceded the founding of the first academic journal, Henry Oldenburg's Philosophical Transactions of the Royal Society (founded 1665).

What am I missing?

Editorial control

What I don't get by reading private drafts is the curatorial intervention of a board of editors.  A journal's editorial board acts as a gatekeeper for knowledge, making decisions about what is and is not worth propagating.  The board also makes small improvements and changes to submissions, needed because many academic authors are poor writers, and because of the natural processes of error. So a good editorial board makes curatorial decisions about what to display, and improves quality.

Counter-argument: Many editorial boards don't do their work professionally. The extended "advisory board members" are window-dressing; the real editorial activity is often carried out by only one dynamic person, perhaps with secretarial support.  This depends, of course, on the size of the journal and the academic field it serves.  I'm thinking of sub-fields in the humanities.

Archiving and findability

A journal also provides archival storage for the long term.  This is critically important.  An essential process in academic work is to "consult the archive."  The archive has to actually be there in order to be accessed.  A journal - in print or electronically - offers a stable way of finding scholars' work through metadata tagging (aka cataloguing), and through long-term physical or electronic storage.  If I read a colleague's draft, I may not be able to find it again in a year's time.  Is it still at the same repository?  Where?  Did I save a copy on my hard drive?  Is my hard drive well organized and backed up (in which case, is it a journal of sorts)?

Counter-argument:  Are electronic journals archival?  Are they going to be findable in a decade's time?  Some are, some aren't.  The same goes for print, but print is - at the present time - more durable, and more likely to be findable in future years.  An example is the All India Ayurvedic Directory, published in the years around the 1940s.  A very valuable document of social and medical history.  It's unavailable through normal channels.  Only a couple of issues have been microfilmed or are in libraries.  Most of the journal is probably available in Kottayam or Trissur in Kerala, but it would take a journey to find it and a lot of local diplomatic effort to be given permission to see it.  Nevertheless, it probably exists, just.


Reputation and trust

A journal may develop a reputation that facilitates trust in the articles it publishes.  This is primarily of importance for people who don't have time to read for themselves and to engage in the primary scholarly activity of thinking and making judgements based on arguments and evidence.  A journal's prestige may also play a part in embedding it in networks of scholarly trust and tacit, shared knowledge, in the sense developed by Michael Polanyi (Personal Knowledge, The Tacit Dimension, and other writings).


At the moment, I can't think of any other justifications for the existence of journals.  But if editorial functions and long-term storage work properly, they are major factors that are worth having.

Further reading


Tuesday, July 26, 2016

Getting Xindy to work for IAST-encoded text

Xindy is an index-processor for use with TeX and LaTeX.  It is a successor to Makeindex, and is the standard system for formatting and sorting indexes and glossaries that go with LaTeX documents.


The main distribution of Xindy is included in TeXlive and is downloadable at CTAN.  At the time of writing (July 2016), this is version 2.5.1 (2014).  The source code used to be maintained at Sourceforge (Xindy at sourceforge, currently version 2.3 from 2008), but a later version is now available at Github (Xindy at Github).  Since Github has only version 2.5.0, source development and compilation for the TeXlive version must be going on somewhere else, but I don't know where.  The best place to fetch Xindy if you want to tinker with it is the CTAN base directory.

Use and benefits

The development of Xindy is uneven, given the various repositories with different versions.  The documentation is also of limited use to beginners, being technical and out of date (the examples in the tutorial do not work with the current software release).   Nevertheless, it is a very flexible and powerful program, and does a great job when it works.  And for many texts and nearly fifty modern languages, it "just works," which is great.

In LaTeX documents, the packages index, makeidx or imakeidx will normally be used to provide the macros needed for indexes.  Xindy does the rest.
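For orientation, a minimal setup might look like the sketch below.  I use imakeidx here; the @ syntax supplies an explicit sort key for an entry, which is the manual alternative to teaching the sort order to the index processor:

```latex
% Preamble (sketch):
\usepackage{imakeidx}
\makeindex                  % open the .idx file
% In the body, an index entry with an explicit sort key:
%   ... the word \emph{śakti}\index{sakti@śakti} ...
% At the end of the document:
%   \printindex
```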

Sanskrit and IAST

For my writing, I normally use XeTeX with LaTeX and I write using Unicode UTF8 encoding and the IAST transliteration scheme when doing Sanskrit in Roman script.  (For Devanagari I use normal Unicode encoding.)

Xindy by itself doesn't recognize the IAST accented characters like vowels with a macron or consonants with under-dot.  I found that setting Xindy's language to "general" did a pretty good job of nearly all the characters, but not all.  I got words with ā-, ṛ- etc. at the beginning of the index, before "A."


The program for creating a new "alphabet" for Xindy is in Perl and is called make-rules.  I couldn't initially find it at all, because it isn't at the Sourceforge or GitHub repositories (or I couldn't find it).  Later, I found it at CTAN, and I wish I'd seen that earlier.

Finally, I could not get make-rules to work.  The documentation and tutorials simply didn't provide me with enough accurate information to start, as a beginner, and get a workable result from make-rules.

Solution (aka kludge)

I therefore made up a very simple Xindy style file, IAST.xdy, with the following content:
(merge-rule "ā" "a")    (merge-rule "Ā" "a")
(merge-rule "ḍ" "d")    (merge-rule "Ḍ" "d")
(merge-rule "ḥ" "h")    (merge-rule "Ḥ" "h")
(merge-rule "ī" "i")    (merge-rule "Ī" "i")
(merge-rule "ḷ" "l")    (merge-rule "Ḷ" "l")
(merge-rule "ḹ" "l")    (merge-rule "Ḹ" "l")
(merge-rule "ṃ" "m")    (merge-rule "Ṃ" "m")
(merge-rule "ṅ" "n")    (merge-rule "Ṅ" "n")
(merge-rule "ṇ" "n")    (merge-rule "Ṇ" "n")
(merge-rule "ṛ" "r")    (merge-rule "Ṛ" "r")
(merge-rule "ṝ" "r")    (merge-rule "Ṝ" "r")
(merge-rule "ś" "s")    (merge-rule "Ś" "s")
(merge-rule "ṣ" "s")    (merge-rule "Ṣ" "s")
(merge-rule "ṭ" "t")    (merge-rule "Ṭ" "t")
(merge-rule "ū" "u")    (merge-rule "Ū" "u")
  • I place IAST.xdy in my local TeX tree, namely as  ../localtexmf/xindy/modules/IAST.xdy 
  • I run "sudo mktexlsr" to rebuild the TeXlive indexes so that Xindy can find IAST.xdy
  • I then run Xindy from the Linux command line with the following syntax:

    texindy -I xelatex -M IAST.xdy -L general -o foobar.ind foobar.idx
This last point is a Knuthian white lie (TeXbook, vii).  I currently use TeXStudio for actual writing, so the above "command line" is entered into TeXStudio's "options/configure/commands" menu and invoked with a convenient function-key shortcut. 


  • texindy is just xindy with some tweaks for use with LaTeX
  • -I means the input file uses "xelatex" encoding, i.e., UTF8
  • -M means please use this style file
  • -L means please use the pseudo-language "general", which does the right thing with most UTF8-encoded Roman / European text
  • "foobar" is replaced by your TeX filename; in TeXStudio's syntax it's "%", which stands for whatever file you're currently working on.

I'm now getting results that look like this, which is what I was after:

I'm sure that all this could be done more elegantly and completely.  In the longer run, I hope a successor to IAST.xdy might take its place alongside all the other languages formally supported by Xindy.
While working on this, I sent cries for help to the Xindy discussion list.  Zdenek Wagner replied, and shared with me work he has done towards indexing Hindi and Marathi in Devanagari script.
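Incidentally, rather than typing the merge-rules by hand, a file like IAST.xdy can be generated with a few lines of shell.  This is just a sketch; the pair list below is abbreviated and should be extended to cover all the IAST characters:

```shell
# Emit xindy merge-rules mapping IAST accented letters to their
# unaccented base letters, writing them to IAST.xdy.
pairs="ā:a Ā:a ḍ:d Ḍ:d ḥ:h Ḥ:h ī:i Ī:i ḷ:l Ḹ:l ṃ:m ṅ:n ṇ:n ṛ:r ṝ:r ś:s ṣ:s ṭ:t ū:u"
for p in $pairs; do
    # ${p%%:*} is the accented letter, ${p#*:} its base letter
    printf '(merge-rule "%s" "%s")\n' "${p%%:*}" "${p#*:}"
done > IAST.xdy
```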