Friday, March 21, 2014

Notes on the Ukrainian crisis

The Nuland-Pyatt recording

[The following is an extract, authored by me, from the Wikipedia page on this topic.  My text has been deleted twice by someone with a Russian-sounding name, NazariyKaminski, so I am placing the text here in case it is deleted again.]

Multiple versions of the phone recording

There are at least two versions of the audio recording of the Nuland-Pyatt conversation in circulation on YouTube and elsewhere, with quite different meanings. The wording and syntax of one of them, published on 7 February 2014 by GlobalTVz, reveal that Nuland's expletive remark can be understood not as her insulting the EU, but rather as her response to Pyatt's statement that the Russians will try to torpedo the process of increasing EU influence in Ukraine.[Recording and transcription at The Archive.]   In this version, she is agreeing with Pyatt, and her expletive expresses her view of the Russian position on EU influence in Ukraine: the Russians don't care what the EU thinks or does. In other versions of the audio recording, for example the one transcribed on the BBC website, the F*** phrase expresses Nuland's own opinion of the EU. It is not presently known which version corresponds most closely to what Nuland and Pyatt actually said. The fact that various versions of this controversial recording are in circulation has not (as of 28 Feb 2014) been publicly addressed, and is part of the larger "cui bono" question about who recorded and who released this phone conversation.

Facebook posting

[I posted the following text to FB on 19 March]
I feel impotent saying this, but I'll say it nonetheless. I'm horrified and dismayed by the Russian theft of Crimea. I can scarcely believe that it has happened. Russia's aggression and willingness to invade a neighbouring country seem so archaic, so orthogonal to the direction of contemporary world politics. People go to prison for stealing a car, but Putin and his cabinet have stolen a whole region. The method was so sly, so fast, so relentlessly prosecuted with lies and threats. I wonder whether the Russian soldiers felt any shame when their commanders told them to strip off their identifying insignia and patrol the streets without saying they were Russian. I thought this kind of "Russian Bear" behaviour was history. But no, we should still fear Russia, and our governments should prepare for further aggression, bullying and lies. Maybe Crimea will be free again one day, after Putin is dead and a new leader reinstates Khrushchev's transfer of Crimea to Ukraine (Khrushchev was a Ukrainian himself).

The word "Crimea" is a Tatar word, and Tatars ruled the area from the early middle ages until 1783, the first time that Russia annexed the region. Supporting the Ottoman Empire, France and Britain went to war against Russia in 1854 (the Crimean War), and beat Russia. In the Russian Civil War, after the Revolution, the Red Army murdered 50,000 White Russians in Crimea in 1920, and annexed Crimea again to the USSR the following year. Under Russian rule, Crimea experienced two devastating famines. The first was immediately after annexation, 1921-1922, when the Bolsheviks forced the requisitioning to Moscow of grain and foodstuff from the Crimean countryside. 100,000 Crimean Tatars died of starvation. Only ten years later, the man-made disaster of the Great Famine of 1932-33 was the direct result of the Soviet policies of collectivization and industrialization, and Moscow's policy of exporting Crimean grain at below-market prices in order to destabilize world prices. It killed between six and seven million Ukrainians and Crimeans. Russia started handing out Russian passports to Crimeans in 2008, and promoting its international policy of militarily protecting "Russian citizens" in other countries.

Friday, February 28, 2014

The Origins of a Famous Yogic/Tantric Image

[Photographs: Mark Singleton and Ellen Goldberg]
I'm delighted by the arrival from the bookshop of my copy of the excellent new book Gurus of Modern Yoga edited by Mark Singleton and Ellen Goldberg (ISBN 978-0-19-993872-8).

Full details of the book can be had from Peter Wyzlic's indologica.de website.

[Image: Yogini Sunita's Pranayama image]

[Photograph: Suzanne Newcombe]
Suzanne Newcombe's chapter, "The Institutionalization of the Yoga Tradition: 'Gurus' B. K. S. Iyengar and Yogini Sunita in Britain," is an outstanding description and evaluation of the impact of two yoga teachers in the UK.  One of Suzanne's subjects is Yogini Sunita (aka Bernadette Cabral), originally a Catholic from Bombay.  Although she was tragically killed by a car at the early age of 38, her work is kept alive by her son Kenneth Cabral and other yoga teachers (http://www.pranayama-yoga.co.uk).

Yogini Sunita published a book in 1965 called Pranayama, The Art of Relaxation, The Lotus and the Rose (Worldcat).  It contained an illustration that has gone on to become one of the most famous and iconic images in yoga publishing.  The image is a line drawing in black on a white background showing the outline of a seated, cross-legged meditator superimposed on a wild network of lines, with annotations in Devanagari script. The Sanskrit word प्राणायाम (prāṇāyāma) "breath control" labels the image in the top right-hand corner.  The smaller writing, along the lines, is more or less illegible in all the reproductions I have been able to examine.  I can just discern the Devanagari alphabet being spelt out (अ आ इ ई उ ऊ ए ओ ऐ औ अं अः ...) on the right clavicle.  But the rest of the writing is unclear.  I am not confident that it is even real text, although it looks superficially like Devanagari or Gujarati script.  Only an examination of the original artwork or a good reproduction would settle the matter.

[Image: Yogini Sunita's signature]
Yogini Sunita was not a confident Devanagari writer, as is evidenced by her signature in the preface to her book (in Devanagari, "yodinī saunīṭā").  She could not have produced the Pranayama drawing herself, and must have commissioned it from a source or collaborator with a confident knowledge of Sanskrit and the Devanagari script, perhaps in Bombay.

Yogini Sunita's illustration has been reproduced almost endlessly in books and now on the internet, and there are multiple modifications and interpretations.  One of the more common is a negative version, with white lines on a black background.  Others are coloured, simplified, and interpreted in various creative ways.  It appears in various contemporary yoga-themed mashups.  The word प्राणायाम is often masked out.  The image is often shown as a representation not primarily of breath control, but of the nodes and tubes of the spiritual body (cakras and nāḍīs).

Suzanne Newcombe describes how Yogini Sunita's early death meant that her methods and ideas did not spread as widely as those of other 20th-century yoga teachers.  Nevertheless, the Pranayama illustration from her 1965 book has become one of the most widely known images of yoga in the 21st century (Google images).

Friday, January 31, 2014

How To Fix A Non-Bootable Ubuntu System Due To Broken Updates Using A LiveCD And Chroot

----
// Web Upd8 - Ubuntu / Linux blog

If your Ubuntu system doesn't boot because of broken updates, and the bug has since been fixed in the repositories, you can use an Ubuntu Live CD and chroot to update the system and fix it.

1. Create a bootable Ubuntu CD/DVD or USB stick, boot from it and select "Try Ubuntu without installing". Once you get to the Ubuntu desktop, open a terminal.

2. You need to find the root partition of your Ubuntu installation. On a standard Ubuntu installation, the root partition is "/dev/sda1", but it may be different for you. To figure out which one is the root partition, run the following command:

sudo fdisk -l

This will display a list of hard disks and partitions from which you'll have to figure out which one is the root partition.

To make sure a certain partition is the root partition, you can mount it (first command under step 3), browse it using a file manager and make sure it contains folders that you'd normally find in a root partition, such as "sys", "proc", "run" and "dev".
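For a quick check from the terminal (a sketch, assuming the candidate partition is /dev/sda1; substitute whatever fdisk reported for you):

sudo mount /dev/sda1 /mnt    # mount the candidate partition (assumed device name)
ls /mnt                      # a root filesystem shows directories such as etc, usr, var, home and boot
sudo umount /mnt             # unmount again; step 3 below remounts the partition you settle on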

3. Now let's mount the root partition along with the /sys, /proc, /run and /dev partitions and enter chroot:
sudo mount ROOT-PARTITION /mnt
for i in /sys /proc /run /dev; do sudo mount --bind "$i" "/mnt$i"; done
sudo cp /etc/resolv.conf /mnt/etc/
sudo chroot /mnt
Notes:
- ROOT-PARTITION is the root partition, for example /dev/sda1 in my case (see step 2).
- The command that copies resolv.conf gets the network working, at least for me (using DHCP).
- If you get an error about resolv.conf being identical when copying it, just ignore it.
Now you can update the system - in the same terminal, type:
sudo apt-get update
sudo apt-get upgrade

Since you've chrooted into your Ubuntu installation, the changes you make affect it and not the Live CD, obviously.
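When the upgrade has finished, leave the chroot and undo the mounts before rebooting. A minimal sketch, mirroring the mounts made in step 3 (unmounted in reverse order):

exit                                                           # leave the chroot, back to the live session
for i in /dev /run /proc /sys; do sudo umount "/mnt$i"; done   # remove the bind mounts
sudo umount /mnt                                               # unmount the root partition
sudo reboot                                                    # restart into the repaired installation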

If the bug that caused your system not to boot is happening because of some package in the Proposed repositories, the steps above are useful, but you'll also have to know how to downgrade the packages from the proposed repository - for how to do that, see: How To Downgrade Proposed Repository Packages In Ubuntu

References: 1, 2, 3

Originally published at WebUpd8: Daily Ubuntu / Linux news and application reviews.

----
Shared via my feedly reader

Monday, January 20, 2014

Fwd: Work Flows and Wish Lists: Reflections on Juxta as an Editorial Tool

---------- Forwarded message ----------
From: Dominik Wujastyk <wujastyk@gmail.com>
Date: 18 January 2014 21:58
Subject: Work Flows and Wish Lists: Reflections on Juxta as an Editorial Tool
To: Philipp André Maas <Philipp.A.Maas@gmail.com>, Alessandro Graheli <a.graheli@gmail.com>, Karin Preisendanz <karin.preisendanz@univie.ac.at>, Dominik Wujastyk <wujastyk.cikitsa@blogspot.com>


Some interesting reflections on Juxta...
I have had the opportunity to use Juxta Commons for several editorial projects, and while taking a breath between a Juxta-intensive term project last semester and my Juxta-intensive MA thesis this semester, I would like to offer a few thoughts on Juxta as an editorial tool.
For my term project for Jerome McGann's American Historiography class last semester, I conducted a collation of Martin R. Delany's novel, Blake, or, The Huts of America, one of the earliest African American novels published in the United States. Little did I know that my exploration would conduct me into an adventure as much technological as textual, but when Professor McGann recommended I use Juxta for conducting the collation and displaying the results, that is exactly what happened. I input my texts into Juxta Commons, collated them, and produced HTML texts of the individual chapters, each with an apparatus of textual variants, using Juxta's Edition Starter. I linked these HTML files together into an easily navigable website to present the results to Professor McGann. I'll be posting on the intriguing results themselves next week, but in the meantime, they can also be viewed on the website I constructed, hosted by GitHub: Blake Project home.
Juxta helped me enormously in this project. First, it was incredibly useful in helping me clean up my texts. My collation involved an 1859 serialization of the novel, and another serialization in 1861-62. The first, I was able to digitize using OCR; the second, I had to transcribe myself. Anyone who has done OCR work knows that every minute of scanning leads to (in my case) an average of five or ten minutes of cleaning up OCR errors. I also had my own transcription errors to catch and correct. By checking Juxta's highlighted variants, I was able to—relatively quickly—fix the errors and produce reliable texts. Secondly, once collated, I had the results stored in Juxta Commons; I did not have to write down in a collation chart every variant to avoid losing that information, as I would if I were machine- or sight-collating. Juxta's heat-map display allows the editor to see variants in-line, as well, which saves an immense amount of time when it comes to analyzing results: you do not have to reference page and line numbers to see the context of the variants. Lastly, Juxta enabled me to organize a large amount of text in individual collation sets—one for each chapter. I was able to jump between chapters and view their variants easily.
As helpful as Juxta was, however, I caution all those new to digital collation that no tool can perfectly collate or create an apparatus from an imperfect text. In this respect, there is still no replacement for human discretion—which is, ultimately, a good thing. For instance, while the Juxta user can turn off punctuation variants in the display, if the user does want punctuation and the punctuation is not spaced exactly the same in both witnesses, the program highlights this anomalous spacing. Thus, when 59 reads
' Henry, wat…
and 61 reads
'Henry, wat…
Juxta will show that punctuation spacing as a variant, while the human editor knows it is the result of typesetting idiosyncrasies rather than a meaningful variant. Such variants can carry over into the Juxta Edition Builder, as well, resulting in meaningless apparatus entries. For these reasons, you must make your texts perfect if you want a perfect Juxta heat map, especially before using Edition Starter; otherwise, you'll need to fix the spacing in Juxta and output another apparatus, or edit the text or HTML files to remove undesirable entries.
Spacing issues can also result in disjointed apparatus entries, as occurred in my apparatus for Chapter XI in the case of the contraction needn't. Notice how because of the spacing in needn t and need nt, Juxta recognized the two parts of the contraction as two separate variants (lines 130 and 131):
This one variant was broken into two apparatus entries because Juxta recognized it as two words. There is really no way of rectifying this problem except by checking and editing the text and HTML apparatuses after the fact.
I mean simply to caution scholars going into this sort of work so that they can better estimate the time required for digital collation. This being my first major digital collation project, I averaged about two hours per chapter (chapters ranging between 1000 and 4000 words each) to transcribe the 61-62 text and then collate both witnesses in Juxta. I then needed an extra one or two hours per chapter to correct OCR and transcription errors.
While it did take me time to clean up the digital texts so that Juxta could do its job most efficiently, in the end, Juxta certainly saved me time—time I would have spent keeping collation records, constructing an apparatus, and creating the HTML files (as I wanted to do a digital presentation). I would be remiss, however, if I did not recommend a few improvements and future directions.
As useful as Juxta is, it nevertheless has limitations. One difficulty I had while cleaning my texts was that I could not correct them while viewing the collation sets; I had, rather, to open the witnesses in separate windows.
The ability to edit the witnesses in the collation set directly would make correction of digitization errors much easier. This is not a serious impediment, though, and is easily dealt with in the manner I mentioned. The Juxta download does allow this in a limited capacity: the user can open a witness in the "Source" field below the collation visualization, then click "Edit" to enable editing in that screen. However, while the editing capability is turned on for the "Source," you cannot scroll in the visualization—and so cannot navigate to the next error that may need to be corrected.
A more important limitation is the fact that the Edition Starter does not allow for the creation of eclectic texts, texts constructed with readings from multiple witnesses; rather, the user can only select one witness as the "base text," and all readings in the edition are from that base text.
Most scholarly editors, however, likely will need to adopt readings from different witnesses at some point in the preparation of their editions. Juxta's developers need to mastermind a way of selecting which reading to adopt per variant; selected readings would then be adopted in the text in Edition Starter. For the sake of visualizing, I did some screenshot melding in Paint of what this function might look like:
Currently, an editor wishing to use the Edition Starter to construct an edition would need to select either the copy-text or the text with the most adopted readings for the base text. The editor would then need to adopt readings from other witnesses by editing the output DOCX or HTML files. I do not know the intricacies of the code which runs Juxta. I looked at it on GitHub, but, alas! my very elementary coding knowledge was completely inadequate to the task. I intend to delve more as my expertise improves, and in the meantime, I encourage all the truly code-savvy scholars out there to look at the code and consider this problem. In my opinion, this is the one hurdle which, once overcome, would make Juxta the optimal choice as an edition-preparation tool—not just a collation tool.

Another feature which would be fantastic to include eventually would be a way of digitally categorizing variants: accidental versus substantive; printer errors, editor corrections, or author revisions; etc. Then, an option to adopt all substantives from text A, for instance, would—perhaps—leave nothing to be desired by the digitally inclined textual editor.

I am excited about Juxta. I am amazed by what it can do and exhilarated by what it may yet be capable of, and taking its limitations with its vast benefits, I will continue to use it for all future editorial projects.
Stephanie Kingsley is a second-year English MA student specializing in 19th-century American literature, textual studies, and digital humanities. She is one of this year's Praxis Fellows [see Praxis blogs] and Rare Book School Fellows. For more information, visit http://stephanie-kingsley.github.io/, and remember to watch for Ms. Kingsley's post next week on the results of her collation of Delany's Blake.
----
Shared via my feedly reader
Dominik Wujastyk, from Android phone.

Wednesday, January 15, 2014

Zooniverse and Intelligent Machine-assisted Semantic Tagging of Manuscripts

I'm very impressed by the technology being used in the War Diaries Project.  To see what I mean, click on "Get Started" and try the guided tutorial.

Once there's a critical mass of digitized Sanskrit manuscripts available, I think it would be very interesting to contact the people at Zooniverse and discuss the possibility of a Sanskrit MS-tagging project, like the War Diaries.

Tuesday, December 17, 2013

Tools for cataloguing Sanskrit manuscripts, no.1



In the post-office today I saw this piece of board that's used as a size-template to quickly assess which envelope to choose.  This is a formalized version of the same tool that I used for the many years that I spent cataloguing and packing Sanskrit manuscripts at the Wellcome Library in London.  I made a piece of board with three main size-outlines, for MSS of α, β, γ sizes.  Anything larger than γ counted as δ.  Palm-leaf MSS were all ε.

It was nice to see the same tool being used for a similar job, in an Austrian post-office!

Friday, December 13, 2013