
It is possible to extract the text contents of PDFs and images using Optical Character Recognition (OCR). The extracted contents can then be included in searches.

This page describes how to enable search in asset contents. The setup consists of two steps:

  1. Content extraction with Microsoft Azure Cognitive Services.

  2. Including asset contents in searches.

1. Content extraction with Microsoft Azure Cognitive Services

Content extraction from PDFs and images relies on Microsoft Azure Cognitive Services, so a Computer Vision resource in Azure is required.

A new Computer Vision resource can be created with the following steps:

  1. Log in to the Azure portal (https://portal.azure.com/).

  2. Search for “Cognitive Services”.

  3. Click “Add” to add a new Cognitive Service.

  4. Search for “Computer Vision” and create a new resource.

The Computer Vision resource has a key and a server URI, which will be needed shortly. These can be found by navigating to the Computer Vision resource and locating “Keys and Endpoints”.
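If the command line is preferred over the portal, the resource can also be created with the Azure CLI. The following is a minimal sketch; the resource name, resource group, SKU, and location are example values that must be adapted to your environment:

    # Create a Computer Vision resource (the SKU and location are examples).
    az cognitiveservices account create \
        --name <your-computer-vision-name> \
        --resource-group <your-resource-group> \
        --kind ComputerVision \
        --sku S1 \
        --location westeurope \
        --yes

    # List the keys (the value for “OcrKey”).
    az cognitiveservices account keys list \
        --name <your-computer-vision-name> \
        --resource-group <your-resource-group>

    # Show the endpoint (the value for “OcrServerUri”).
    az cognitiveservices account show \
        --name <your-computer-vision-name> \
        --resource-group <your-resource-group> \
        --query "properties.endpoint"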

Content extraction can now be enabled in the Cognitive Service that is part of Digizuite Core.

  1. On the server where the DAM Center is installed, navigate to the Cognitive Service directory (typically “Webs/<yourDAM>/DigizuiteCore/cognitiveservice”).

  2. Edit the “appsettings.json” file. The following parameters in the “ComputerVisionDetails” section are relevant (a full example is shown after the list):

  • OcrKey: The key from the Computer Vision resource created above.

  • OcrServerUri: The URI from the Computer Vision resource created above. It is called “Endpoint” in the “Keys and Endpoints” section of the Computer Vision resource.

  • OcrExtractFromPdf: If true, the text contents of PDF files are extracted when the PDF files are uploaded to the DAM.

  • OcrExtractFromImage: If true, the text contents of images are extracted when the images are uploaded to the DAM.

  • OcrLetAzureRequestFiles: If false, files are uploaded explicitly to the Computer Vision client. If true, Azure requests the files from the DAM Center itself, which is expected to be more efficient but requires that the DAM Center is reachable from Azure. Thus, ensure that the DAM Center is not behind a strict firewall if this is set to true.

  • OcrTaskDelayLength: The time interval between each status check of ongoing content extractions in the Computer Vision client. The larger the interval, the fewer requests are made to Azure, but the longer it takes for extracted contents to become available in the “Asset content” metafield. You most likely do not have to change this.
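For illustration, the “ComputerVisionDetails” section of “appsettings.json” could then look roughly as follows. The key and URI shown are placeholders that must be replaced with the values from your own Computer Vision resource; the existing value of “OcrTaskDelayLength” can simply be kept, and the rest of the file should be left unchanged:

    "ComputerVisionDetails": {
      "OcrKey": "<key from the Computer Vision resource>",
      "OcrServerUri": "https://<your-resource>.cognitiveservices.azure.com/",
      "OcrExtractFromPdf": true,
      "OcrExtractFromImage": true,
      "OcrLetAzureRequestFiles": false,
      "OcrTaskDelayLength": "<keep the existing value>"
    }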

Once the information has been provided and the “appsettings.json” file has been saved, the contents of PDFs and/or images are extracted when they are uploaded.
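For context, the extraction that these parameters control corresponds to the Azure Computer Vision Read API: a file is submitted either by URL (Azure fetches it itself, as when “OcrLetAzureRequestFiles” is true) or as an uploaded stream, and the result is then polled at a fixed interval (the role of “OcrTaskDelayLength”). The following Python sketch, using the azure-cognitiveservices-vision-computervision package, illustrates this flow. It is not Digizuite’s actual implementation, only an illustration of the underlying API; the key, endpoint, and file URL are placeholders:

    import time

    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
    from msrest.authentication import CognitiveServicesCredentials

    # Placeholders: use the key and endpoint from your own Computer Vision resource.
    key = "<OcrKey>"
    endpoint = "https://<your-resource>.cognitiveservices.azure.com/"

    client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

    # Submit by URL: Azure fetches the file itself (like OcrLetAzureRequestFiles = true) ...
    response = client.read("https://example.com/document.pdf", raw=True)
    # ... or upload the file contents explicitly (like OcrLetAzureRequestFiles = false):
    # with open("document.pdf", "rb") as f:
    #     response = client.read_in_stream(f, raw=True)

    # The operation id is returned in the Operation-Location header.
    operation_id = response.headers["Operation-Location"].split("/")[-1]

    # Poll for the result at a fixed interval (the role of OcrTaskDelayLength).
    while True:
        result = client.get_read_result(operation_id)
        if result.status not in (OperationStatusCodes.not_started, OperationStatusCodes.running):
            break
        time.sleep(5)

    if result.status == OperationStatusCodes.succeeded:
        for page in result.analyze_result.read_results:
            for line in page.lines:
                print(line.text)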

2. Including asset contents in searches

When the contents of an asset have been extracted with Microsoft Azure Cognitive Services, they are automatically written to the metafield “Asset content”.

The metafields “Asset content” and “Asset content concurrency token” are predefined and must not be modified manually if asset contents are to be made searchable.

Including the “Asset content” metafield in the search “DigiZuite_System_Framework_Search” as a freetext input parameter therefore includes the extracted contents of assets in freetext searches. The metafield can be added as a freetext input parameter as follows:

  1. Find “DigiZuite_System_Framework_Search” in the ConfigManager for the product version that this feature should be enabled for, e.g. the product version of the MM5.

  2. Add a new input parameter.

    1. Locate and choose the metafield group “Content”.

    2. Choose the metafield “Asset content” and the comparison type “FreeText”, then create the input parameter.

  3. Save the modified search and populate the search cache.
