MM5 supports AI video transcription (using Azure Cognitive Services); the resulting transcript can be saved to a note metadata field.
Requirements and limitations
It requires that the user has the Ai_Add role and that the Landmarks visual feature is enabled.
It requires the source video to be in one of the following languages: English, Spanish, French, German, Italian, Chinese (Simplified), Japanese, Russian, or Brazilian Portuguese.
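The supported languages can be expressed as BCP-47 codes of the kind Azure Video Indexer accepts. The following is a minimal illustrative sketch; the `SUPPORTED_LANGUAGES` mapping and `is_supported` helper are not part of MM5:

```python
# Hypothetical mapping of MM5's supported spoken languages to BCP-47 codes
# (codes as commonly used by Azure Video Indexer; this helper is illustrative).
SUPPORTED_LANGUAGES = {
    "English": "en-US",
    "Spanish": "es-ES",
    "French": "fr-FR",
    "German": "de-DE",
    "Italian": "it-IT",
    "Chinese (Simplified)": "zh-Hans",
    "Japanese": "ja-JP",
    "Russian": "ru-RU",
    "Brazilian Portuguese": "pt-BR",
}

def is_supported(language: str) -> bool:
    """Return True if the given spoken language can be transcribed."""
    return language in SUPPORTED_LANGUAGES
```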
| Info |
| --- |
| English will be used if the Video Indexer cannot determine the language confidently. |
Configuration
Navigate to the MM5 Config Manager, expand the "AI Config" section, and define the following configuration:
Examples
Video transcription can be used in both the single and the multi asset editor:
Example in asset editor:
Example in multi asset editor:
(The "Transcript (Video)" field is a note field, as previously stated.)
Usage
1. Open any video whose spoken language is one of the supported languages listed above.
2. Locate your field and press the "Generate AI transcript" button.
3. An asynchronous job will now start, meaning that you do not have to stay on the asset for it to complete.
4. Once the job is done (this can take several minutes, depending on the length of your video), a notification will appear stating that the job is finished.
5. You may now press the "Preview" button to view the auto-generated content, and update it as you see fit.
Please be aware that pressing the "Generate AI transcript" button again will effectively revert all your manual changes, as the field will be overwritten with a fresh auto-transcript.
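The overwrite behaviour in the warning above can be sketched as follows. This is an illustrative model, not MM5 code; the `generate_ai_transcript` function and `transcript_note` field name are assumptions:

```python
# Illustrative sketch of why regenerating discards manual edits: the job
# writes the fresh auto-transcript over the note field unconditionally.
def generate_ai_transcript(asset: dict, auto_transcript: str) -> None:
    """Simulates the transcription job: the note field is overwritten,
    so any manual edits made since the last run are lost."""
    asset["transcript_note"] = auto_transcript

asset = {"transcript_note": ""}
generate_ai_transcript(asset, "auto text v1")
asset["transcript_note"] = "manually edited text"  # user edits the preview
generate_ai_transcript(asset, "auto text v1")      # regenerate: edits are gone
```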