Monthly Archives

9 Articles

Posted by Dirk Alvermann

Mixed layouts

Release 1.7.1

The CITlab Advanced Layout Analysis handles most “ordinary” layouts well – in 90% of the cases. Let’s talk about the remaining 10%.

We already discussed how to proceed in order to avoid trouble with the Reading Order. But what happens if we have to deal with really mixed – crazy – layouts, e.g. concept writings (drafts)?

With complicated layouts, you’ll quickly notice that the manually drawn text regions (TRs) overlap. That’s not good, because the automatic line detection doesn’t work reliably in overlapping text regions. The problem is easily solved, since TRs can have shapes other than rectangles: they can be drawn as polygons and thus be cleanly separated from each other.
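To see why polygonal TRs solve the overlap problem, consider a minimal geometric sketch (our own illustration using the shapely library, nothing built into Transkribus): the bounding rectangles of two slanted text blocks collide, while polygons traced along the text do not.

```python
# Minimal sketch (our illustration, not Transkribus code): bounding
# rectangles of two slanted text blocks collide, polygons do not.
# Requires the third-party "shapely" package.
from shapely.geometry import Polygon

# Rectangular regions: the boxes overlap even though the text doesn't.
rect_a = Polygon([(0, 0), (60, 0), (60, 40), (0, 40)])
rect_b = Polygon([(50, 30), (120, 30), (120, 80), (50, 80)])
print(rect_a.intersects(rect_b))   # True - line detection becomes unreliable

# The same blocks outlined as polygons hugging the actual text:
poly_a = Polygon([(0, 0), (60, 0), (60, 28), (48, 40), (0, 40)])
poly_b = Polygon([(52, 42), (120, 30), (120, 80), (52, 80)])
print(poly_a.intersects(poly_b))   # False - cleanly separated
```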

If there are many text regions, it makes sense to add structural tags so that they can be distinguished more easily. You can also assign them to specific processing routines later on. This is a small effort with great benefit, because structural tagging is no more complex than tagging in context.
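For later automated processing it helps to know where these tags end up: in the PAGE XML that Transkribus exports, structural tags appear in the custom attribute of each TextRegion. A hedged sketch of reading them back out (the file name and the namespace date are assumptions; check the header of your own export):

```python
# Hedged sketch: reading structural tags back out of a Transkribus
# PAGE XML export. File name and namespace date are assumptions -
# check the header of your own export.
import re
import xml.etree.ElementTree as ET

PAGE_NS = "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"

tree = ET.parse("page_0001.xml")          # hypothetical file name
for region in tree.getroot().iter(f"{{{PAGE_NS}}}TextRegion"):
    custom = region.get("custom", "")     # e.g. "readingOrder {index:0;} structure {type:paragraph;}"
    match = re.search(r"structure\s*\{type:(\w+);\}", custom)
    if match:
        print(region.get("id"), "->", match.group(1))
```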

Tips & Tools
Automatic line detection can be a real challenge here. Sections where you can already predict (with a little experience) that it will fail are best handled manually. For automatic line detection, CITlab Advanced should be configured so that the default setting is replaced by “heterogeneous”. The LA will then take horizontal as well as vertical, skewed, and oblique lines into account. This takes a little longer, but the result will be better.

If such complicated layouts are a persistent characteristic of your material, it is worth setting up a P2PaLA training. This creates a layout analysis model of your own, tailored to the specific challenges of your material. Incidentally, structural tagging is the basic requirement for such a training.

Posted by Dirk Alvermann

First volumes with decisions of the Wismar High Court online

Last week we were able to make available online the first volumes with the opinions of the assessors of the High Royal Tribunal at Wismar – the final court of appeal in the German territories of the Swedish Crown. “Assessors” is what the judges at the Tribunal were called. Since the Great Northern War there was only a panel of four judges instead of eight. The Deputy President assigned them the cases in which they were to form a legal opinion. As at the Reichskammergericht in Wetzlar, a referent and a co-referent were appointed for each case, who formulated their opinions in writing and discussed them with their colleagues. If the votes of these two judges agreed, the consensus of the remaining colleagues was only formally requested in the court session. In addition, all relations had to be checked and confirmed by the Deputy President. If a case was more complicated, all assessors expressed their opinion on the verdict. These reasons for the verdicts are recorded in the collection of so-called “Relationes”.

These relations are a first-class source for the history of law: they first recount the course of the conflict in narrative form and then propose a judgment. Here we can study both the legal reasoning in the justifications and the everyday life of the people in the narratives.

The text recognition was realized with an HTR model trained on the manuscripts of 9 different judges of the Royal Tribunal. The training set consisted of 600,000 words. The accuracy of the handwritten text recognition is correspondingly good: about 99% in this case.
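To relate such an accuracy figure to the character error rate (CER) used elsewhere on this blog: the CER is the edit distance between the HTR output and the ground truth, divided by the number of ground-truth characters, so 99% accuracy corresponds to a CER of about 1%. A minimal sketch of the computation (our own illustration, not the code Transkribus runs internally):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(recognized: str, ground_truth: str) -> float:
    """Character error rate: edits per ground-truth character."""
    return levenshtein(recognized, ground_truth) / len(ground_truth)

# Two small errors in 25 characters give a CER of 8%:
print(cer("Relatio in Sachen Müler", "Relation in Sachen Müller"))  # 0.08
```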

The results can be seen here. How to navigate in our documents and how the full text search works is explained here.

Who were the judges?

In the second half of the 18th century a new generation of judges took office. In the late 1750s and early 1760s, justice at the Tribunal was administered by:

Hermann Heinrich von Engelbrecht (1709-1760), Assessor since 1745, Deputy President since 1750
Bogislaw Friedrich Liebeherr (1695-1761), Assessor since 1736
Anton Christoph Gröning (1695-1773), Assessor since 1749
Christoph Erhard von Corswanten (c. 1708-1777), Assessor since 1751, Deputy President since 1761
Carl Hinrich Möller (1709-1759), Assessor since 1751
Joachim Friedrich Stemwede (c. 1720-1787), Assessor since 1760
Johann Franz von Boltenstern (1700-1763), Assessor since 1762
Johann Gustav Friedrich von Engelbrechten (1733-1806), Assessor from 1762 to 1775
Augustin von Balthasar (1701-1786), Assessor since 1763, Deputy President since 1778

Posted by Elisabeth Heigl

Generic vs. specialized model

Release 1.7.1

Did you notice in the graph of the model development that the character error rate (CER) of the last model got slightly worse again, even though we had significantly increased the GT input? We had about 43,000 more words in training, but the average CER deteriorated from 2.79% to 3.43%. We couldn’t really explain that.

At this point we couldn’t get any further with more and more GT, so we had to change our training strategy. Until then we had trained large models, with writings spanning a total period of 70 years and more than 500 writers.

Our first suspicion fell on the concept writings, which we already knew the machine (LA and HTR) – just like ourselves – struggled with. In the next training we excluded these concept writings and trained exclusively with “clean” office writings. But that didn’t lead to a noticeable improvement: the test set CER dropped from 3.43% to just 3.31%.

In the following trainings, we additionally focused on a chronological sequencing of the models. We split our material and created two different models: Spruchakten_M_3-1 (Spruchakten 1583-1627) and Spruchakten_M_4-1 (Spruchakten 1627-1653).
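In code terms, the split amounts to nothing more than partitioning the GT by year; a sketch, with a hypothetical (year, page) data structure:

```python
# Hedged sketch of the chronological split described above. "pages" is
# a hypothetical iterable of (year, page) pairs from our GT; the year
# 1627 closes one model's range and opens the other's, as in this post.
def split_by_period(pages, boundary=1627):
    """Partition GT pages into the pools for M_3-1 and M_4-1."""
    m3 = [page for year, page in pages if 1583 <= year <= boundary]
    m4 = [page for year, page in pages if boundary <= year <= 1653]
    return m3, m4

m3_pool, m4_pool = split_by_period([(1594, "page 1"), (1640, "page 2")])
```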

With these new specialized models we actually achieved an improvement of the HTR where the generic model was no longer sufficient. In the test sets, several pages showed an error rate of less than 2%. With the model M_4-1, many CERs of single pages remained below 1%, and two pages were even completely error-free at 0%.

Whether a generic or a specialized model helps and produces better results depends a lot on the size and composition of the material. In the beginning, when you want to progress as quickly as possible (the more, the better), a generic model is useful. But once it reaches its limits, you shouldn’t “overburden” the HTR any further; instead, specialize your models.

Posted by Elisabeth Heigl

Transkribus as an instrument for students and professors

In this year’s 24-hour lecture at the University of Greifswald, Transkribus and our digitization project will be presented. Elisabeth Heigl, who works on the project as an academic assistant, will present some of the exciting cases from the rulings of the law faculty of Greifswald. If you are interested in the history of law, join the lecture in lecture hall 2, Audimax (Rubenowstraße 1), on 16.11.2019 at 12:00.
Read the whole program of the 24-hour lecture here.

Posted by Dirk Alvermann

Transkribus in Chicago

Transkribus will be presented at this year’s meeting of the Social Science History Association (SSHA) in Chicago. Günter Mühlberger will present not only the potential of Transkribus but also first results and experiences, drawn from the processing of the cadastral protocols of the Tiroler Landesarchiv and from our digitization project. He will pay special attention to the training of HTR models and the chances of keyword spotting. The lecture ‘Handwritten Text Recognition and Keyword Spotting as Research Tools for Social Science and History’ will take place on 21.11. at 11:00 am in Session 31 (Emerging Methods: Computation/Spatial Econometrics).

Posted by Anna Brandt

Feedback

The blog “Rechtsgeschiedenis” (Otto Vervaart, Utrecht) has published a detailed discussion of the project ‘Rechtsprechung im Ostseeraum’ and of our blog. It describes our work with Transkribus, the project itself, the page where we present our results, and the blog – a good overview from a user’s perspective.

Posted by Dirk Alvermann

How to create test sets and why they are important, #2

Release 1.7.1

What is the best procedure for creating test sets?
In the end, everyone can find their own way. In our project, the pages for the test sets are already selected while the GT is created. They receive a special edit status (Final) and are later collected in separate documents. This ensures that they cannot accidentally end up in training. Whenever new GT is created for future trainings, the material for the test set is extended at the same time, so both sets grow in proportion.

For systematic training we create several documents, which we call “test sets” and which each relate to a single Spruchakte (one year). For example, we create a “test set 1594” for the document of the Spruchakte 1594. Here we place representatively selected images, which should reflect the variety of writers as exactly as possible; a sketch of such a selection follows below. In the “mother document” we mark the pages selected for the test set as “Final” to make sure that they will not be edited there in the future. We have not created a separate test set for every single record or year, but have proceeded in five-year steps.
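A sketch of such a representative selection, assuming the pages have already been grouped by hand (pages_by_writer is a hypothetical mapping from writer to pages):

```python
# Hedged sketch of a representative selection: reserve a share of every
# hand for the test set. "pages_by_writer" is a hypothetical mapping
# from writer to that writer's GT pages.
import random

def pick_test_pages(pages_by_writer: dict, share: float = 0.1) -> list:
    """Reserve roughly `share` of each writer's pages for the test set."""
    test_set = []
    for writer, pages in pages_by_writer.items():
        k = max(1, round(len(pages) * share))  # at least one page per hand
        test_set.extend(random.sample(pages, k))
    return test_set
```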

Since a model is often trained over many rounds, this procedure also has the advantage that the test set always remains representative. The CERs of the different versions of a model can therefore be compared throughout its development, because the test is always executed on the same (or a proportionally extended) set. This makes it easier to evaluate the progress of a model and to adjust the training strategy accordingly.
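In a sketch, the comparison looks like this (run_htr is a hypothetical stand-in for running one model version over a page image; cer() is the function sketched in the Wismar post above):

```python
# Hedged sketch: with a fixed test set, CERs of successive model
# versions are directly comparable. "run_htr" stands in for running one
# model version over a page image; cer() is the function sketched in
# the Wismar post above.
def compare_versions(models: dict, test_pages: list) -> None:
    for name, run_htr in models.items():
        errors = [cer(run_htr(image), gt) for image, gt in test_pages]
        print(f"{name}: mean CER {sum(errors) / len(errors):.2%}")
```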

Transkribus also automatically stores the test set used for each training run in the collection concerned, so you can always fall back on it.
It is also possible to select a test set just before the training and simply assign individual pages from the training material to it. This may be a quick and pragmatic solution in individual cases, but it is not suitable for the planned development of powerful models.

Posted by Elisabeth Heigl

Search and browse documents and HTR results in the Digital Library MV

We present our results in the Digital Library Mecklenburg-Vorpommern. Here you will find the digital versions with their corresponding transcriptions.

If you have selected a document – as here, for example, the Spruchakte of 1586 – you will see its first page in the centre of the display. The box above allows you to switch to the next, the previous, or any other page of your choice (1.). You can rotate the image (3.), zoom in or out (5.), choose two-page mode (2.), and switch to full screen mode (4.).

On the left side you can select different view options („Ansicht“). Here you can, for example, display all images at once instead of just one page („Seitenvorschau“) or you can read the transcribed text right away („Volltext“).

If you want to navigate in the structure of the file, first open the structure tree of the file in the bottom left box using the small plus symbol. Then you can select any given date.

Are you looking for a certain name, a place or some other term? Simply enter it in the search box on the left („Suche in: Spruchakte 1586“). If the term occurs in the file, the full-text hits („Volltexttreffer“), meaning all places where your search term occurs in the text, are indicated.

If you select one of the hits here, your search term will be marked yellow on the digital image. For now, highlighting the search result will only work on the digitized page, not yet in full text.

Tips & Tools
Open the full-text hits in a new tab (right mouse button). Navigating back and forth in the Digital Library is still a bit tricky; this way you can be sure that you will always return to your previous selection.

Posted by Dirk Alvermann

How to create test sets and why they are important, #1

Release 1.7.1

If we want to know how much a model has learned in training, we have to test it. We do this with precisely defined test sets. Test sets – like the training set – contain exclusively Ground Truth. However, we make sure that this GT has never been used to train the model, so the model does not “know” this material. This is the most important characteristic of test sets. A page that has already been used as training material will always be read better by the model than one it is not yet “familiar” with; this can easily be shown experimentally. So if you want valid statements about CER and WER, you need uncorrupted test sets.

It is also important that a test set is representative. As long as you train an HTR model for a single writer or an individual handwriting, this is not difficult – after all, it’s always the same hand. As soon as several writers are involved, you have to make sure that all the individual handwritings used in the training material are also included in the test set. The more different hands are trained in a model, the larger the test set will be.

The size of the test set is another factor that influences representativeness. As a rule of thumb, a test set should contain 5-10% of the training material. However, this should always be adapted to the specific requirements of the material and the training objectives.

To illustrate this with two examples: our model for the Spruchakten from 1580 to 1627 was trained with a training set of almost 200,000 words; the test set contains 44,000 words. That is of course a very high proportion of about 20%, owing to the fact that material from about 300 different writers was trained in this model, all of whom must also be represented in the test set. In our model for the judges’ opinions of the Wismar Tribunal, there are about 46,000 words in the training set, while the test set contains only 2,500 words, i.e. a share of about 5%. But here we are dealing with only 5 different writers, so this material is sufficient for a representative test set.
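The shares quoted here are quickly verified (a trivial check using the figures from this post):

```python
# The shares quoted above, computed explicitly:
print(f"Spruchakten 1580-1627: {44_000 / 200_000:.0%}")  # 22%, i.e. about 20%
print(f"Wismar Tribunal:       {2_500 / 46_000:.1%}")    # 5.4%, about 5%
```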