
It is possible to extract the text contents of PDFs and images using Optical Character Recognition (OCR). The contents can then be included in searches.

This page describes how to enable search in asset contents. The setup consists of two steps:

  1. Content extraction with Microsoft Azure Cognitive Services.

  2. Including asset contents in searches.

Please be aware of the important information at the bottom of this page.

1. Content extraction with Microsoft Azure Cognitive Services

The content extraction of PDFs and images relies on Microsoft Azure Cognitive Services, so a Computer Vision resource in Azure is required.

A new Computer Vision client can be created with the following steps:

  1. Log in to the Azure portal (https://portal.azure.com/).

  2. Search for “Cognitive Services”.

  3. Click “Add” to add a new Cognitive Service.

  4. Search for “Computer Vision” and create a new client.

The Computer Vision resource has a key and a server URI, which will be needed shortly. These can be found by navigating to the Computer Vision resource and locating “Keys and Endpoints”.

Content extraction can now be enabled in the Cognitive Service which is part of Digizuite Core.

  1. On the server where the DAM Center is installed, navigate to the Cognitive Service directory (typically “Webs/<yourDAM>/DigizuiteCore/cognitiveservice”).

  2. Edit the “appsettings.json” file. The following parameters in the “ComputerVisionDetails” section are relevant:

  • OcrKey: The key from the Computer Vision resource created above (one of the “KEY” values shown under “Keys and Endpoints”).

  • OcrServerUri: The URI from the Computer Vision resource created above (the “Endpoint” value shown under “Keys and Endpoints”).

  • OcrExtractFromPdf: If true, the text contents of PDF files are extracted when new PDF files are uploaded to the DAM.

  • OcrExtractFromImage: If true, the text contents of images are extracted when new images are uploaded to the DAM.

  • OcrLetAzureRequestFiles: If false, files are explicitly uploaded to the Computer Vision client. If true, Azure requests the files from the DAM Center instead. Setting this to true is expected to be more efficient, but it requires that the DAM Center is reachable from Azure, so ensure that the DAM Center is not behind a strict firewall when enabling it.

  • OcrTaskDelayLength: The time interval between checks of the status of ongoing content extractions in the Computer Vision client. The larger the interval, the fewer requests are made to Azure, but the longer it then takes for the extracted contents of files to become available in the “Asset content” metafield. You most likely do not have to change this.
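For reference, the “ComputerVisionDetails” section might look roughly like the following sketch. The key and endpoint are placeholders to be replaced with your own values, and the exact value format of OcrTaskDelayLength (shown here as a time-span string) is an assumption; the surrounding file will contain other sections as well.

```json
{
  "ComputerVisionDetails": {
    "OcrKey": "<key from the Azure portal>",
    "OcrServerUri": "https://<your-resource>.cognitiveservices.azure.com/",
    "OcrExtractFromPdf": true,
    "OcrExtractFromImage": true,
    "OcrLetAzureRequestFiles": false,
    "OcrTaskDelayLength": "00:00:10"
  }
}
```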

Once the information has been provided and the “appsettings.json” file has been saved, the contents of PDFs and/or images are extracted when PDFs/images are uploaded.
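The polling behavior that OcrTaskDelayLength controls can be illustrated with a small sketch. This is not Digizuite code; the function and the simulated status sequence are purely illustrative stand-ins for the service's periodic status checks against the Computer Vision client.

```python
import time

def poll_until_done(get_status, delay_seconds, timeout_seconds=60.0):
    """Poll an OCR operation's status at a fixed interval until it succeeds.

    get_status: callable returning "running" or "succeeded" (a stand-in for a
    status request to the Computer Vision client).
    delay_seconds: the interval between checks (the role OcrTaskDelayLength
    plays in the Cognitive Service).
    Returns the number of status checks that were made.
    """
    waited = 0.0
    polls = 0
    while True:
        polls += 1
        if get_status() == "succeeded":
            return polls
        if waited >= timeout_seconds:
            raise TimeoutError("OCR operation did not finish in time")
        time.sleep(delay_seconds)
        waited += delay_seconds

# Simulate an extraction that succeeds on the third status check.
statuses = iter(["running", "running", "succeeded"])
print(poll_until_done(lambda: next(statuses), delay_seconds=0.01))  # → 3
```

A larger delay means fewer of these checks (and thus fewer requests to Azure) per extraction, at the cost of the result arriving in the “Asset content” metafield later.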

When the contents of an asset have been extracted with Microsoft Azure Cognitive Services, they are automatically written to the metafield “Asset content”.

The metafield “Asset content” is predefined and should not be modified manually if asset contents are to be made searchable.

2. Including asset contents in searches

The extracted contents of assets can be included in freetext searches by adding the metafield “Asset content” to the search “DigiZuite_System_Framework_Search” as a freetext input parameter. This is done as follows:

  1. Find “DigiZuite_System_Framework_Search” in the ConfigManager under the product version for which this feature should be enabled, e.g. the product version of the MM5.

  2. Add a new input parameter.

    1. Locate and choose the metafield group “Content”.

    2. Choose the metafield “Asset content”, select the “FreeText” comparison type, and create the input parameter.

  3. Save the modified search and populate the search cache.

The contents of the asset types you selected in step 1 should now be included in freetext searches when new assets are uploaded.

The contents of existing assets can be extracted by republishing the assets.

Important Information

Please be aware of the following when using the Computer Vision resource:

  • The content extraction has some limitations (see https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text). In particular, be aware of the following limitations:

    • Supported file formats: JPEG, PNG, BMP, PDF, and TIFF.

    • For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.

    • The file size must be less than 50 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.

    • The PDF dimensions must be at most 17 x 17 inches, corresponding to legal or A3 paper sizes and smaller.

  • If an error occurs, an error message will end up in the CognitiveServices.error queue in RabbitMQ. You can retry the content extraction by moving the failed message to the CognitiveServices queue. Please be aware of the limitations above before retrying.

  • There is a cost associated with extracting contents from files. Please see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/computer-vision/ for more details.

  • If the extracted contents are included in freetext searches, they are searched on equal terms with the asset titles. Thus, it might be more difficult to find an asset by title.

  • Each time an asset is republished, its contents are extracted again.

  • Repopulating search caches might take significantly more time if content extraction has been enabled.

  • Freetext searches might be slower when asset contents are included. However, we do not expect this to be significant.
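The file-format and size limitations listed above can be checked before upload. The following sketch encodes the documented standard-tier limits in a small validation function; the function name and structure are illustrative and not part of Digizuite or Azure.

```python
# Rough pre-flight check mirroring the documented Computer Vision limits
# (standard tier; the free tier has stricter limits, e.g. 4 MB files).
SUPPORTED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp", ".pdf", ".tiff", ".tif"}
MAX_FILE_SIZE = 50 * 1024 * 1024   # must be less than 50 MB
MIN_DIMENSION = 50                 # pixels, per side
MAX_DIMENSION = 10000              # pixels, per side

def ocr_eligible(extension, size_bytes, width_px, height_px):
    """Return True if a file is within the documented OCR limits."""
    if extension.lower() not in SUPPORTED_EXTENSIONS:
        return False
    if size_bytes >= MAX_FILE_SIZE:
        return False
    if not (MIN_DIMENSION <= width_px <= MAX_DIMENSION):
        return False
    if not (MIN_DIMENSION <= height_px <= MAX_DIMENSION):
        return False
    return True

print(ocr_eligible(".png", 2_000_000, 1920, 1080))  # → True
print(ocr_eligible(".gif", 2_000_000, 1920, 1080))  # → False (unsupported format)
```

A check like this could help avoid messages ending up in the CognitiveServices.error queue for files that Azure would reject anyway.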
