MM5 supports AI video transcription (via Azure Cognitive Services); the generated transcript can be saved to a Note metadata field.

The metadata field must have "autotranslate" set to true and "overwrite existing" set to true. It is also strongly recommended to limit the field to the "Video" asset type; otherwise a plain note field will be shown for other asset types.
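For illustration, the field setup above can be pictured as the following sketch (the key names are hypothetical, not MM5's actual schema — configure the real field in MM5 itself):

```python
# Hypothetical sketch of the Note metadata field described above.
# Key names are illustrative only.
transcript_field = {
    "name": "Transcript (Video)",
    "type": "Note",
    "autotranslate": True,        # required, per the text above
    "overwrite_existing": True,   # required, per the text above
    "asset_types": ["Video"],     # recommended: hides the field on
                                  # non-video assets
}
```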

Requirements and limitations

The user must have the Ai_Add role, and the Landmarks visual feature must be enabled.

The source video must be spoken in one of the following languages: English, Spanish, French, German, Italian, Chinese (Simplified), Japanese, Russian, or Brazilian Portuguese.

Info

English will be used if the Video Indexer cannot determine the language confidently.
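The language handling above can be sketched as follows (the helper function is hypothetical, for illustration only; the language list and English fallback come from the text above):

```python
# Languages the Video Indexer can transcribe (from the list above).
SUPPORTED_LANGUAGES = {
    "English", "Spanish", "French", "German", "Italian",
    "Chinese (Simplified)", "Japanese", "Russian", "Brazilian Portuguese",
}

def effective_language(detected, confident):
    """Return the language used for transcription.

    Hypothetical helper: falls back to English when the Video Indexer
    cannot determine the language confidently, as described above.
    """
    if confident and detected in SUPPORTED_LANGUAGES:
        return detected
    return "English"
```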

Configuration

Navigate to the MM5 Config Manager, expand the "AI Config" section, and define the following configuration (screenshot not preserved):
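The exact settings were shown in a screenshot that has not survived here. A typical Azure Cognitive Services connection needs values along these lines (all key names below are assumptions, not the actual MM5 settings — check the "AI Config" section itself):

```python
# Illustrative sketch only -- the real setting names live in the
# MM5 Config Manager under "AI Config" and may differ.
ai_config = {
    "azure_endpoint": "https://<your-resource>.cognitiveservices.azure.com/",
    "azure_subscription_key": "<your-key>",
    "azure_region": "<your-region>",
}
```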

Usage

AI video transcription can be used in both the single and the multi asset editor:

Example in the single asset editor: (screenshot not preserved)

Example in the multi asset editor: (screenshot not preserved)

(The "Transcript (Video)" field is a Note field, as stated above.)



Open any video whose spoken language is one of the supported languages listed above.

Locate your field and press the "Generate AI transcript" button.


An asynchronous job will now start, meaning you do not have to stay on the asset for it to complete.

Once the job is done (this can take several minutes, depending on the length of your video), a notification will appear stating that the job is finished.
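The asynchronous flow above can be sketched as a simple polling loop (the `job` handle and its methods are hypothetical; in MM5 you simply wait for the finished-job notification instead):

```python
import time

def wait_for_transcript(job, poll_seconds=10, timeout_seconds=1800):
    """Poll a transcription job until it finishes.

    Illustrative only: `job` is a hypothetical handle exposing
    .status() and .result().
    """
    waited = 0
    while waited < timeout_seconds:
        if job.status() == "finished":
            return job.result()
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("transcription job did not finish in time")
```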

You may now press the "Preview" button, which will show you the auto-generated content. You may update this content as you see fit.

Please be aware that pressing the "Generate AI transcript" button again will effectively revert all your manual changes, replacing them with a fresh auto-generated transcript.