Earlier this year, we preannounced that TwelveLabs video understanding models were coming to Amazon Bedrock. Today, we're announcing that these models are now available for searching through videos, classifying scenes, summarizing, and extracting insights with precision and reliability.
TwelveLabs has introduced Marengo, a video embedding model proficient at tasks such as search and classification, and Pegasus, a video language model that can generate text based on video content. These models are trained on Amazon SageMaker HyperPod to deliver video analysis that provides text summaries, metadata generation, and creative optimization.
With TwelveLabs models in Amazon Bedrock, you can find specific moments using natural language video search capabilities such as "show me the first touchdown of the game" or "find the scene where the main characters first meet," and instantly jump to those exact moments. You can also build applications that understand video content by generating descriptive text such as titles, topics, hashtags, summaries, chapters, or highlights for discovering insights and connections without requiring predefined labels or categories.
For example, you can find recurring themes in customer feedback or spot product usage patterns that weren't obvious before. Whether you have hundreds or thousands of hours of video content, you can now transform that entire library into a searchable knowledge resource while maintaining enterprise-grade security and performance.
Let's take a look at the Marengo and Pegasus videos that TwelveLabs has published.
With these models, you can transform video workflows across industries. Media producers and editors can instantly find specific scenes or dialogue, which means you can focus more on storytelling and less on scrubbing through hours of footage. Marketing teams can streamline their advertising workflows by quickly personalizing content to resonate with different audiences, while security teams can use the technology to proactively identify potential risks by spotting patterns across multiple video feeds.
Getting started with TwelveLabs models in Amazon Bedrock
Before getting started, if you're new to using TwelveLabs models, go to the Amazon Bedrock console and choose Model access in the bottom left navigation pane. To access the latest TwelveLabs models, request access for Marengo Embed 2.7 or Pegasus 1.2 under TwelveLabs in the available Regions.
To use TwelveLabs models in Amazon Bedrock, choose Chat/Text playground under Test in the left navigation pane. Choose Select model, select TwelveLabs as the category and Pegasus as the model, and then choose Apply.
To search or generate text from your video, you should upload your video to an Amazon Simple Storage Service (Amazon S3) bucket or input a Base64-encoded video string.
I'll use a sample video generated with Amazon Nova Reel with the prompt: "A cute raccoon playing guitar underwater."
Enter the S3 URI of your uploaded video and the S3 Bucket Owner, and then run your prompt: "Tell me about the video by timeline."
You can get the following model response:
The video opens with an underwater scene featuring a sandy seabed and swaying seaweed, creating a serene atmosphere. At around 5 seconds, a raccoon appears, swimming towards the camera with a curious expression. By 10 seconds, the raccoon is seen swimming near the seabed. The focus then shifts to a group of koi fish swimming gracefully near the seabed from 11 to 13 seconds.
At 14 seconds, the scene transitions to an ukulele resting on the seabed, with the camera zooming in to highlight its details. The raccoon reappears at 20 seconds, swimming among the koi fish, and by 23 seconds, it begins to play the ukulele. The raccoon continues to play the ukulele from 29 to 41 seconds, with the koi fish swimming around it, adding a lively and colorful touch to the underwater setting.
From 42 to 49 seconds, the raccoon is shown playing the ukulele with the koi fish swimming around it. The video concludes with a screen displaying "Video generated with Amazon Nova Reel 1.1" and "Available in Amazon Bedrock" from 50 to 54 seconds, followed by the AWS logo from 55 to 57 seconds.
TwelveLabs models can be easily integrated into your applications using the Amazon Bedrock Invoke API, which provides a consistent interface for model invocation.
Here's an example of how to use the AWS SDK for Python (Boto3) with TwelveLabs Pegasus:
import base64
import json

import boto3

AWS_REGION = "us-west-2"
MODEL_ID = "us.twelvelabs.pegasus-1-2-v1:0"  # or "eu.twelvelabs.pegasus-1-2-v1:0" for cross-Region inference in Europe
VIDEO_PATH = "sample.mp4"


def read_file(file_path: str) -> str:
    """Read a file and return its contents as a Base64-encoded string."""
    try:
        with open(file_path, "rb") as file:
            file_content = file.read()
        return base64.b64encode(file_content).decode("utf-8")
    except Exception as e:
        raise Exception(f"Error reading file {file_path}: {str(e)}")


# Create the Amazon Bedrock Runtime client
bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=AWS_REGION
)

# Build the request with the prompt and the Base64-encoded video
request_body = {
    "inputPrompt": "tell me about the video",
    "mediaSource": {
        "base64String": read_file(VIDEO_PATH)
    }
}

# Invoke the Pegasus model
response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json"
)

response_body = json.loads(response["body"].read())
print(json.dumps(response_body, indent=2))
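If your video is already stored in Amazon S3, you can reference it from the request instead of sending Base64 data. The following is a minimal sketch, assuming the Pegasus request body accepts the same s3Location media source (with uri and bucketOwner fields) shown in the console walkthrough and in the Marengo example below; the bucket URI and account ID are placeholders.

# Hypothetical variant of the request above, assuming an S3 media source
# (uri and bucketOwner values are placeholders).
request_body = {
    "inputPrompt": "tell me about the video",
    "mediaSource": {
        "s3Location": {
            "uri": "s3://your-video-bucket/sample.mp4",
            "bucketOwner": "111122223333"  # AWS account ID that owns the bucket
        }
    }
}

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json"
)
print(json.loads(response["body"].read()))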
The TwelveLabs Marengo Embed 2.7 model generates vector embeddings from video, text, audio, or image inputs. These embeddings can be used for similarity search, clustering, and other machine learning (ML) tasks. The model supports asynchronous inference through the Bedrock StartAsyncInvoke API.
For a video source, you can create a request in the following JSON format for the TwelveLabs Marengo Embed 2.7 model using the StartAsyncInvoke API:
{
"modelId": "twelvelabs.marengo-embed-2-7-v1:0",
"modelInput": {
"inputType": "video",
"mediaSource": {
"s3Location": {
"uri": "s3://your-video-object-s3-path",
"bucketOwner": "your-video-object-s3-bucket-owner-account"
}
}
},
"outputDataConfig": {
"s3OutputDataConfig": {
"s3Uri": "s3://your-bucket-name"
}
}
}
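To submit this request with the AWS SDK for Python (Boto3), you can pass the same fields to the start_async_invoke operation of the Bedrock Runtime client and then poll the job status. The following is a minimal sketch; the Region, polling interval, and bucket values are assumptions for illustration.

import time

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Start the asynchronous embedding job; the S3 paths mirror the JSON request above.
invocation = bedrock_runtime.start_async_invoke(
    modelId="twelvelabs.marengo-embed-2-7-v1:0",
    modelInput={
        "inputType": "video",
        "mediaSource": {
            "s3Location": {
                "uri": "s3://your-video-object-s3-path",
                "bucketOwner": "your-video-object-s3-bucket-owner-account"
            }
        }
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://your-bucket-name"}
    }
)

# Poll until the job finishes; the output JSON is written to the S3 location above.
invocation_arn = invocation["invocationArn"]
while True:
    job = bedrock_runtime.get_async_invoke(invocationArn=invocation_arn)
    if job["status"] != "InProgress":
        print(f"Job ended with status: {job['status']}")
        break
    time.sleep(10)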
You can get a response delivered to the S3 bucket you specified:
{
"embedding": (0.345, -0.678, 0.901, ...),
"embeddingOption": "visual-text",
"startSec": 0.0,
"endSec": 5.0
}
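With embeddings like these stored for your video segments, you can compare them against a query embedding for similarity search, one of the ML tasks mentioned earlier. Here's a minimal sketch using cosine similarity with NumPy; the vectors are illustrative stand-ins for real Marengo embeddings, which have many more dimensions.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative vectors only; real embeddings come from the S3 output above.
query_embedding = np.array([0.345, -0.678, 0.901])
clip_embedding = np.array([0.312, -0.654, 0.887])

score = cosine_similarity(query_embedding, clip_embedding)
print(f"Similarity: {score:.4f}")  # closer to 1.0 means more similar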
To get started, check out the broad range of code examples for multiple use cases and a variety of programming languages. To learn more, visit TwelveLabs Pegasus 1.2 and TwelveLabs Marengo Embed 2.7 in the AWS documentation.
Now available
TwelveLabs models are generally available today in Amazon Bedrock: the Marengo model in the US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul) Regions, and the Pegasus model in the US West (Oregon) and Europe (Ireland) Regions, accessible with cross-Region inference from US and European Regions. Check the full Region list for future updates. To learn more, visit the TwelveLabs page on the Amazon Bedrock website and the Amazon Bedrock pricing page.
Give TwelveLabs models a try in the Amazon Bedrock console today, and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
– Channy
Updated on July 16, 2025 – Revised the screenshots and part of the code.