## Introduction

A team of researchers has developed a new approach to digitizing and analyzing historical documents using optical character recognition (OCR) and large language models (LLMs). The project aims to create an automated pipeline that integrates historical data with existing databases. The research focuses on the Leiden University professors' and curators' books written between 1983 and 1985, which contain biographical data about these individuals. The goal of the project is to design an automated system that combines OCR, LLM-based interpretation, and database linking to harmonize data from historical document images with existing high-quality database records.

The team used OCR techniques, generative AI with decoding constraints that enforce structured data extraction, and record linkage methods to convert typewritten historical records into a digital format. OCR achieved a Character Error Rate (CER) of 1.08% and a Word Error Rate (WER) of 5.06%, while JSON extraction reached an average accuracy of 63% from OCR text and 65% from annotated OCR text, indicating that generative AI partially compensates for OCR errors. The record linkage algorithm linked annotated JSON files with 94% accuracy and OCR-derived JSON files with 81%.

This study contributes to digital humanities research by offering an automated pipeline for interpreting digitized historical documents, addressing challenges such as layout variability and terminology differences, and exploring the applicability and strength of an advanced generative AI model.
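For readers unfamiliar with the reported metrics, CER and WER are conventionally defined as the Levenshtein edit distance between the OCR output and a reference transcription, divided by the reference length in characters or words. The sketch below is an illustrative, self-contained computation of these standard metrics in Python; it is not the authors' evaluation code, and the example strings are invented for demonstration only.

```python
# Illustrative sketch: standard CER/WER as Levenshtein edit distance divided
# by reference length. Not the authors' evaluation pipeline.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (characters or words)."""
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character edits per reference character."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word edits per reference word."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)

if __name__ == "__main__":
    # Hypothetical transcription pair with two character-level substitutions.
    ref = "born in Leiden on 12 March 1901"
    hyp = "born in Leyden on 12 Narch 1901"
    print(f"CER = {cer(ref, hyp):.3f}, WER = {wer(ref, hyp):.3f}")
```

In this toy example the two character substitutions also corrupt two of the seven reference words, so the WER is substantially higher than the CER, mirroring the gap between the 1.08% CER and 5.06% WER reported above.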