
Posted by Elisabeth Heigl on

Language Models

Release 1.10.1

We talked about the use of dictionaries in an earlier post and mentioned that the better an HTR model is (CER below 7%), the less useful a dictionary is for the HTR result.
This is different when using Language Models, which have been available in Transkribus since December 2019. Like dictionaries, Language Models are generated from the ground truth used in each HTR training. Unlike dictionaries, however, Language Models do not aim at identifying individual words. Instead, they determine the probability of word sequences, i.e. which combinations of words and expressions frequently occur in a particular context.
Unlike dictionaries, the use of Language Models consistently leads to better HTR results. In our tests, the average CER improved by as much as 1% compared to the HTR result without the Language Model – on every test set.
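Transkribus does not document the internals of its Language Models, but the basic idea – scoring word sequences by how often they occur in the ground truth – can be illustrated with a minimal bigram model. All names and the toy training lines below are our own invention, not Transkribus code:

```python
from collections import Counter

def train_bigram_lm(ground_truth_lines):
    """Count unigrams and bigrams in the ground-truth text."""
    unigrams, bigrams = Counter(), Counter()
    for line in ground_truth_lines:
        words = line.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def sequence_score(words, unigrams, bigrams):
    """Product of conditional bigram probabilities P(w2 | w1)."""
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        if unigrams[w1] == 0:
            return 0.0
        score *= bigrams[(w1, w2)] / unigrams[w1]
    return score

# Toy ground truth: "anno" is followed by "domini" in 2 of its 3 occurrences
unigrams, bigrams = train_bigram_lm([
    "anno domini 1618",
    "anno domini 1620",
    "anno decem",
])
print(sequence_score(["anno", "domini"], unigrams, bigrams))
```

A real Language Model can use such sequence probabilities to prefer, among several visually plausible readings, the one that forms a likely word combination.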

Tips & Tools: The Language Model can be selected when configuring the HTR. Unlike dictionaries, Language Models and HTR models cannot be freely combined: each HTR model has its own uniquely generated Language Model, and only this one can be used.

Posted by Anna Brandt on

Tools in the Layout tab

Release 1.10

The layout tab has two more tools which we did not mention in our last post. They are especially useful for correcting the layout and save you a lot of tedious detail work.
The first one corrects the Reading Order. If one or more text regions are selected, this tool automatically arranges the child shapes – in this case the lines and baselines – according to their position in the coordinate system of the page. The reading order then starts at the top left and continues from there to the bottom right. In the example below, a TR was split and the RO of the marginal notes got mixed up. This tool saves you from renumbering each BL individually.
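The underlying logic is a purely geometric sort. A rough sketch of ordering baselines top-left to bottom-right – our own simplification in Python, not the actual Transkribus implementation:

```python
def reading_order(baselines):
    """Sort baselines by their start point: top to bottom, then left to right.

    Each baseline is a list of (x, y) points; (0, 0) is the top-left
    corner of the page image.
    """
    return sorted(baselines, key=lambda bl: (bl[0][1], bl[0][0]))

lines = [
    [(120, 300), (900, 310)],  # third line on the page
    [(100, 100), (880, 105)],  # first line
    [(110, 200), (890, 205)],  # second line
]
print([bl[0] for bl in reading_order(lines)])
# → [(100, 100), (110, 200), (120, 300)]
```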

The second tool (“assign child shapes”) helps to assign BLs to the correct TR. Misassignments can happen when cutting text regions or when baselines run through multiple TRs. Otherwise each BL would have to be marked in the layout tab and moved to the correct TR by hand. To assign them automatically, you simply select the TR to which the BLs belong and start the tool.

Posted by Dirk Alvermann on

“between the lines” – Handling of inserts

Overwritings and text passages inserted between the lines occur at least as often as deletions or blackenings. It is useful, in two respects, to clarify at the beginning of a project how these cases should be handled.

In this simple example you can see how we handle such cases.

Since we include erasures and blackenings in both layout and text, it is only logical to treat overwritings and insertions the same way. Usually such passages are already given separate baselines by the automatic layout analysis; every now and then you have to make corrections. In any case, we treat each insertion as a separate line and take it into account accordingly in the reading order.

Under no circumstances should you transcribe overwritings or insertions above the line in place of the deleted text. This would falsify the training material, even though the resulting text would of course be more pleasing to the human eye.

Posted by Dirk Alvermann on

Treatment of erasures and blackenings

HTR models treat erased text the same as any other. They know no difference and therefore always provide a reading result. In fact, they are amazingly good at it, and read useful content even where a transcriber would have given up long ago.

The simple example shows an erased text that is still legible to the human eye and which the HTR model has read almost without error.

Because we have already seen this ability of the HTR models in other projects, we decided from the beginning to transcribe as much of the erased and blacked out text as possible in order to use the potential of the HTR. The corresponding passages are simply tagged as text-style “strike through” in the text and thus remain recognizable for possible internal search operations.

If you don’t want to make this effort, you can instead intervene in the layout at the passages that contain erasures or blackenings and, for example, delete or shorten the baselines accordingly. The HTR then does not “see” such passages and cannot recognize them. But this is no less laborious than the approach we have chosen.

Under no circumstances should you render erasures with the “popular” placeholder “[…]”. Transcribers know what the deletion sign means, but the HTR learns something completely wrong here if many such passages go into the training.

Posted by Elisabeth Heigl on

Test samples – the impartial alternative

If a project concept does not allow the strategic planning and organization of test sets, there is a simple alternative: automatically generated samples. Samples are also a valuable addition to the manually created test sets, because here it is the machine that decides which material is included in the test and which is not. Transkribus can select individual lines from a set of pages that you have previously provided. So it’s a more or less random selection. This is worthwhile for projects that have a lot of material at their disposal – including ours.

We use samples as a cross-check to our manually created test sets and because samples are comparable to the statistical methods of random sampling, which the DFG also recommends in its rules of practice for testing the quality of OCR.

Because HTR – unlike OCR – works line-based, we modify the DFG’s recommendation somewhat. An example will illustrate this: for our model Spruchakten_M-3K, the reading accuracy achieved was to be checked. For this purpose we created a sample exclusively from untrained material of the whole period for which the model works (1583-1627). We selected every 20th page from the entire data set, obtaining a subset of 600 pages – approximately 3.7% of the total 16,500 pages of material available for this period. All this is done on your own network drive. After uploading this subset to Transkribus and processing it with the CITlab Advanced LA (16,747 lines were detected), we let Transkribus create a sample from it. It contains 900 randomly selected lines, about 5% of the subset’s lines. This sample was then provided with GT and used as a test set to check the model.

And this is how it works in practice: call up the “Sample Compare” function in the “Tools” menu. Select your subset in the collection and add it to the sample set by pressing the “add to sample” button. Then specify the number of lines Transkribus should select from the subset. You should choose at least as many lines as there are pages in the subset, so that each page contributes one test line.

In our case, we decided to use a factor of 1.5, just to be on the safe side. The program now selects the lines independently and compiles a sample from them, which is saved as a new document. This document does not contain pages, only lines. These must now be transcribed as usual to create GT. Afterwards, any model can be tested on this test set using the Compare function.
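The sampling logic described above can be sketched in a few lines. The function names, and the assumption that lines are drawn uniformly at random, are ours – not the Transkribus API:

```python
import math
import random

def draw_sample(all_lines, n_pages, factor=1.5, seed=42):
    """Randomly pick at least `factor` lines per page for a test sample."""
    sample_size = math.ceil(factor * n_pages)
    rng = random.Random(seed)
    return rng.sample(all_lines, sample_size)

# e.g. a subset of 600 pages on which the LA detected 16,747 lines:
lines = [f"line_{i}" for i in range(16747)]
sample = draw_sample(lines, n_pages=600)
print(len(sample))  # → 900 lines, about 5% of the subset's lines
```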

Posted by Elisabeth Heigl on

Image resolution

A technical parameter that needs to be uniformly determined at the beginning of the scanning process is the resolution of the digital images, i.e. how many pixels/dots per inch (dpi) the scanned image should have.

The DFG Code of Practice for Digitisation generally recommends 300 dpi (p. 15). For “text documents with the smallest significant character” of up to 1.5 mm, however, a resolution of 400 dpi can be selected (p. 22). Indeed, in the case of manuscripts – especially concept writings – of the early modern period, the smallest character components can result in different readings and should therefore be as clearly identifiable as possible. We have therefore chosen 400 dpi.

In addition to the advantages for deciphering the scripts, however, the significantly larger storage size of the 400 dpi files (around 40,000 KB/img.) compared to the 300 dpi files (around 30,000 KB/img.) must be taken into account and planned for!
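The storage difference follows directly from the pixel count. A back-of-the-envelope sketch, assuming an uncompressed 24-bit colour scan of an A4-sized page (8.27 × 11.69 inches) – our assumption; actual image files are compressed, which is why the figures above differ from the raw sizes:

```python
def scan_size_kb(width_in, height_in, dpi, bytes_per_pixel=3):
    """Uncompressed size of a scan in KB (24-bit colour by default)."""
    pixels = round(width_in * dpi) * round(height_in * dpi)
    return pixels * bytes_per_pixel / 1024

# A4 page, uncompressed, at the two resolutions discussed above:
for dpi in (300, 400):
    print(dpi, round(scan_size_kb(8.27, 11.69, dpi)), "KB")
```

Since the pixel count grows with the square of the dpi, going from 300 to 400 dpi raises the raw data volume by a factor of (400/300)² ≈ 1.78; compression narrows that gap in practice.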

The selected dpi number also has an impact on the process of automatic handwritten text recognition. Different resolutions lead to different results in the layout analysis and the HTR. To test this hypothesis, we selected three pages from a Spruchakte of 1618, scanned them at 150, 200, 300 and 400 dpi, processed all pages in Transkribus and determined the following CERs:

Page/dpi    150     200     300     400
2           3.99    3.50    3.63    3.14
5           2.10    2.37    2.45    2.11
9           6.73    6.81    6.52    6.37

Generally speaking, a higher resolution therefore means a decrease in CER – though within a range of less than one percent. To be honest, such comparisons of HTR results are somewhat misleading. The very basis of the HTR – the layout analysis – produces subtly different results at different resolutions, which in turn influences not only the HTR results (apparently cruder analyses produce worse HTR results) but also the GT production itself (e.g. with cut-off words).

In our example you see the same image at different resolutions. The result of the CITlab Advanced LA changes as the resolution increases: the initial “V” of the first line is no longer recognized at higher resolutions, while line 10 is increasingly “torn apart”. With increasing resolution the LA becomes more sensitive – which can be an advantage and a disadvantage at the same time.

It is therefore important to have a uniform dpi number for the entire project so that the average CER values and the entire statistical material can be reliably evaluated.

Posted by Dirk Alvermann on

Structural tagging – what else you might do with it (Layout and beyond)

In one of the last posts you read how we use structural tagging. Here you can find how the whole toolbox of structural tagging works in general. In our project it was mainly used to create an adapted LA model for the mixed layouts. But there is even more potential in it.
Who doesn’t know the problem?

There are several very different hands on one page, and it becomes difficult to get consistently good HTR results. This happens most often when a ‘clean’ handwriting has been annotated in concept script by another writer. Here is an example:

The real reason for the problem is that, until now, HTR could only be executed at page level. This means that you can have one or several pages read with one HTR model or another, but it is not possible to read a single page with two different models, each adapted to one of the hands.

Since version 1.10 it is possible to apply HTR models at the level of individual text regions instead of just assigning them to pages. This allows the contents of specific text regions on a page to be read with different HTR models. Structure tagging plays an important role here: text regions whose script style differs from the main text, for example, are given a specific structure tag, to which a special HTR model is then assigned. Reason enough, therefore, to take a closer look at structure tagging.
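Conceptually, this amounts to a dispatch from structure tag to HTR model. A schematic sketch – the tag names and the “Concept_Model” are invented for illustration; in Transkribus the assignment is made in the HTR configuration dialog, not in code:

```python
# Hypothetical mapping of structure tags to HTR models.
# "Spruchakten_M-3K" is our model for the clean main text;
# "Concept_Model" stands in for a model trained on concept hands.
MODEL_BY_TAG = {
    "paragraph": "Spruchakten_M-3K",
    "marginalia": "Concept_Model",
}

def pick_model(text_region_tag, default="Spruchakten_M-3K"):
    """Choose the HTR model for a text region based on its structure tag."""
    return MODEL_BY_TAG.get(text_region_tag, default)

print(pick_model("marginalia"))  # → Concept_Model
```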

Posted by Anna Brandt on

P2PaLA – line detection and HTR

Release 1.9.1

As already mentioned in a previous post, we noticed in the course of our project that the CITlab Advanced LA does not identify the layout of our material optimally. This happens not only on the ‘bad’ pages with mixed layouts, but also on simple ones, i.e. pages without marginalia, large deletions in the text or the like. Here the automatic LA recognizes the TRs correctly, but the baselines are often faulty.

This is not only confusing when the full text is displayed later; an insufficient LA also influences the result of the HTR. No matter how good your HTR model is: if the LA does not offer adequate quality, it is a problem.

The HTR does not read single characters; it works line-based and recognizes patterns. But if the line detection did not identify the lines correctly – if letters or words were missed by the LA – this often produces wrong HTR results. This can have dramatic effects on the accuracy rate of a page or an entire document, as our example shows.


1587, page 41

For this reason we trained a P2PaLA model that also detects BLs. That has been very helpful. It is not possible to calculate statistics such as CERs for these layout models, but visually it seems to work almost error-free on ‘simple’ pages. In addition, postprocessing is no longer necessary in many cases.

The training material for such a model is created in a similar way as for models that only recognize TRs. The individual baselines do not have to be tagged manually for the structure analysis, even though the model later does so itself in order to assign them to the tagged TRs. With the support of the Transkribus team and training material of 2,500 pages, we were able to train the structure model that we use today instead of the standard LA.

Posted by Anna Brandt on

P2PaLA – Postprocessing

Release 1.9.1

Especially at the beginning of the development of a structure model, it happened that the model recognized every irregularity in the layout as a TR. This leads to an excessive – and unnecessary – number of text regions, many of them extremely small.

The more training material you invest, the smaller the problem becomes. In our case these mini TRs disappeared after we had trained our model with about 1,000 pages. Until then they are annoying, because removing them all by hand is tedious.

To reduce this labour you have two options. First, when starting P2PaLA you can determine how large the smallest TR is allowed to be: select the corresponding value (“Min area”) in the “P2PaLA structure analysis tool” before starting the job.

If this option does not bring the expected success, there is the option “remove small textregions”. You will find it in the left toolbar, under “other segmentation tools”. In the menu you can set the pages on which the filter should run as well as the size of the TRs to be removed. The size is given as a “Threshold percentage of image size”; here the value can be calibrated more finely than with the option above. If the images, as with our material, often contain small notes – for example marginalia with only a single word in a TR – then the smallest or second smallest possible value should be chosen. We usually use a threshold percentage of 0.005.
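The filter can be pictured as a simple area-ratio test. A sketch of the idea – our own reconstruction, not Transkribus code – treating the threshold as a fraction of the image area:

```python
def remove_small_regions(regions, image_width, image_height, threshold=0.005):
    """Keep only text regions whose area exceeds `threshold` of the image area.

    Each region is a dict with 'width' and 'height' in pixels
    (a stand-in for the real region geometry).
    """
    image_area = image_width * image_height
    return [r for r in regions
            if r["width"] * r["height"] / image_area >= threshold]

regions = [
    {"width": 1200, "height": 2000},  # main text block → kept
    {"width": 40, "height": 30},      # stray mini TR → removed
]
kept = remove_small_regions(regions, image_width=2400, image_height=3600)
print(len(kept))  # → 1
```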

Even with a good structural model, it may still be possible that individual TRs have to be manually merged, split or removed, but to a much lesser extent than the standard LA would require.

Tips & Tools
Important: If you want to be sure that you don’t remove too many TRs, you can start with a “dry run”, which only lists the number of potentially removable TRs. As soon as you uncheck the box, the affected TRs will be deleted immediately.

Posted by Anna Brandt on

P2PaLA – Training for Textregions

Release 1.9.1

Elsewhere in this blog you can find information and tips on structure tagging. This kind of tagging can be useful for many things – the following is about its use for an improved layout analysis, because structure tagging is an essential part of training P2PaLA models.

With our mixed layouts the standard LA was bound to fail, and the material was far too extensive to create the layout manually. So we decided to try P2PaLA. We created training material for which we selected particularly “difficult” but at the same time “typical” pages – pages that contained, in addition to the actual main text, comments, additions and the like.


coll: UAG Strukturtagging, doc. UAG 1618-1, image 12

For the training material only correctly drawn and tagged text regions matter; no additional line detection or HTR is required. It does no harm either, though, so you can include pages that have already been completely edited. However, if you take new pages on which only the TRs have to be drawn and tagged, you will be faster: you can then prepare eighty to one hundred pages for training in an hour.

While we had tagged seven different structure types with our first model, we later reduced the number to five. In our experience, too fine a differentiation of the structure types has a rather negative effect on the training.

Of course, the success of the training also depends on the amount of training material you invest. In our experience (and with our material), 200 pages make a good start; with 600 pages you get a model you can already work with; from 2,000 pages on it is very reliable.

Tips & Tools
When you create material for structure training, it is initially hard to accept that this is not about content. That means: no matter what the content is, the TR in the middle is always the paragraph – even if it contains only a short note and the concept text next to it is much longer and more important. Only in this way can the training really recognize the necessary patterns.