GSoC Weekly update #3

This week I’ve been working on generating a baseline glyphs file for 4 fonts: Rachana, Meera, Suruma and Lohith-Malayalam. I selected some Malayalam words from the HarfBuzz tree and from Santhosh Thottingal’s test cases, which I thought would be enough to expose rendering problems. Then I started listing the glyph names of these words, for each font, in separate text files. To get the corresponding Unicode code points of each word, I wrote a small Java program. I ran it on each word, collected all the code points and made 4 text files containing the corresponding glyph names for the four fonts I mentioned earlier.
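The code-point helper itself is trivial; a rough Python sketch of the same idea (my actual helper is in Java, and the file name below is only an example) looks like this:

# Print the Unicode code points of every word in a test data file.
import sys

def codepoints(word):
    # e.g. a Malayalam letter might map to 'U+0D15'
    return ' '.join('U+%04X' % ord(ch) for ch in word)

with open(sys.argv[1], encoding='utf-8') as f:
    for word in f:
        word = word.strip()
        if word:
            print(word, codepoints(word))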

Although my mentor did tell me that it is not possible to generate glyph names automatically, I wasted more than a couple of days on a FontForge script trying to make it output the glyph names automatically. But it gives the glyph name only if we click on each character, which was terribly disappointing. So instead I used it to produce the baseline glyphs file in the structure I want when I click on the necessary characters. This code is trivial as far as rendering testing is concerned, so I will leave it out from now on (just noting it down because it ate a very non-trivial amount of my time 😉 ).

I have modified the main C code so that it asks the tester which font she wants; once she chooses one, it outputs the results for the words I have provided.
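In the upcoming Python rewrite, that part could be as simple as the sketch below (the font paths here are placeholders, not the actual layout of my repository):

# Ask the tester which font to test; the paths are placeholders.
fonts = {
    '1': 'fonts/Rachana.ttf',
    '2': 'fonts/Meera.ttf',
    '3': 'fonts/Suruma.ttf',
    '4': 'fonts/Lohith-Malayalam.ttf',
}
print('Which font do you want to test?')
for key in sorted(fonts):
    print(key, fonts[key])
choice = input('Enter a number: ').strip()
font_file = fonts.get(choice)
if font_file is None:
    raise SystemExit('Unknown choice')
print('Testing', font_file)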

But my mentor pointed out that having code in three different languages for a single framework is quite messy, so I’ll be re-writing my code in Python this week.

You can find my code here: https://github.com/nandajavarma/Automated-Rendering-Testing (although the README is not up-to-date)

GSoC Weekly update #2

The coding period for GSoC started this past week, and I have been working on a very simple implementation of the proposal in C plus two tiny bash scripts. My code is available here: https://github.com/nandajavarma/Automated-Rendering-Testing

The first thing to be done to test using these scripts is to create a file that contains a set of words whose rendering is to be checked. Here I have taken a sample test data file created by SMC a while ago (ml-harfbuzz-testdata.txt). Now pass this file through the script render_test.sh along with the necessary font file. That is:

./render_test.sh ml-harfbuzz-testdata.txt /path/to/fontfile

This will create a file named rendered_glyphs.txt that contains the output of HarfBuzz’s hb-shape tool, i.e. the glyph names followed by some additional numbers (which are ignored for now).
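Internally this step is just a loop over hb-shape; a rough Python sketch of the same thing (assuming hb-shape is on the PATH) would be:

# Run hb-shape on every word and collect its output,
# i.e. the glyph names plus the positioning numbers.
import subprocess
import sys

testdata, font_file = sys.argv[1], sys.argv[2]

with open(testdata, encoding='utf-8') as words, \
     open('rendered_glyphs.txt', 'w', encoding='utf-8') as out:
    for word in words:
        word = word.strip()
        if not word:
            continue
        shaped = subprocess.check_output(['hb-shape', font_file, word])
        out.write(shaped.decode('utf-8'))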

Now create a file that contains the actual glyph names of the words in the test data file. I got the data from FontForge. This has to be created manually and, as of now, it obeys the following structure:

[glyph11,glyph12,glyph13,…,glyph1n]

[glyph21,glyph22,glyph23,…,glyph2n]

…

Also make sure that the glyph names of each word are in the same order as the corresponding words in the test data file. I have named this file orig_glyphs.txt. Once this is done, we can pass the above two files through the executable built from rendering_testing.c, say rendering_testing. That is:

./rendering_testing orig_glyphs.txt rendered_glyphs.txt

This program compares the glyphs in order, and if it finds any pair that doesn’t match, it writes the line numbers at which those words appear in the test data file to a file, result.txt. Otherwise it tells you the renderings are perfect.
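In essence the comparison boils down to the rough Python sketch below (not the actual C code; it just assumes the two file formats described above, one entry per word):

# Compare expected and rendered glyphs word by word and record
# the line numbers of the test data words that differ.
import sys

def expected_glyphs(line):
    # orig_glyphs.txt entries look like [glyph1,glyph2,...]
    return [g.strip() for g in line.strip().strip('[]').split(',') if g.strip()]

def rendered_glyphs(line):
    # hb-shape lines look like [glyph1=0+500|glyph2=1+600]; drop the numbers.
    return [item.split('=')[0] for item in line.strip().strip('[]').split('|') if item]

orig_file, rendered_file = sys.argv[1], sys.argv[2]

with open(orig_file, encoding='utf-8') as f:
    expected = [expected_glyphs(line) for line in f if line.strip()]
with open(rendered_file, encoding='utf-8') as f:
    rendered = [rendered_glyphs(line) for line in f if line.strip()]

mismatches = [i for i, (e, r) in enumerate(zip(expected, rendered), start=1) if e != r]

if mismatches:
    with open('result.txt', 'w') as result:
        result.write('\n'.join(str(n) for n in mismatches) + '\n')
else:
    print('The renderings are perfect')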

Once this is done, to see the wrongly rendered words we have to run the third script, show_rendering.sh. It takes the result.txt file, the test data file and the font file as input. That is:

./show_rendering.sh result.txt ml-harfbuzz-testdata.txt /path/to/fontfile

This script will create PNG images of the wrongly rendered words in the current directory.
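This is a job for hb-view; a rough Python sketch of the same step (the output file names here are just examples, not necessarily what my script produces) looks like:

# For every line number in result.txt, render the corresponding
# word from the test data file to a PNG with hb-view.
import subprocess
import sys

result_file, testdata, font_file = sys.argv[1], sys.argv[2], sys.argv[3]

with open(testdata, encoding='utf-8') as f:
    words = [line.strip() for line in f]

with open(result_file) as f:
    for line in f:
        lineno = int(line)
        word = words[lineno - 1]
        subprocess.check_call(['hb-view', font_file, word,
                               '--output-file=word_%d.png' % lineno,
                               '--output-format=png'])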

That is all about my scripts. But the C code is quite inefficient; it even throws segmentation faults with some files. Once I have made sure that I am on the right path after discussing with my mentor, I will work on improving my algorithm and making the code better. That will be next week’s work.