Crate rusoto_rekognition
This is the Amazon Rekognition API reference.
If you're using the service, you're probably looking for RekognitionClient and Rekognition.
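A minimal usage sketch follows. It assumes the futures-based client where `.sync()` blocks on the request (on async rusoto versions you would `.await` inside a runtime instead); the region, bucket name, and object key are placeholders, and credentials are resolved from the default provider chain.

```rust
use rusoto_core::Region;
use rusoto_rekognition::{DetectLabelsRequest, Image, Rekognition, RekognitionClient, S3Object};

fn main() {
    // Create a client for a specific region.
    let client = RekognitionClient::new(Region::UsEast1);

    // Reference an image stored in S3. The bucket must be in the same region
    // as the Rekognition endpoint; bucket and key names are placeholders.
    let request = DetectLabelsRequest {
        image: Image {
            s3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()),
                name: Some("photo.jpg".to_string()),
                ..Default::default()
            }),
            ..Default::default()
        },
        max_labels: Some(10),
        min_confidence: Some(75.0),
    };

    // `.sync()` blocks on the RusotoFuture; on async rusoto versions,
    // call `.await` inside a runtime instead.
    match client.detect_labels(request).sync() {
        Ok(output) => {
            for label in output.labels.unwrap_or_default() {
                println!("{:?}: {:?}", label.name, label.confidence);
            }
        }
        Err(e) => eprintln!("DetectLabels failed: {}", e),
    }
}
```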
Structs
AgeRange | Structure containing the estimated age range, in years, for a face. Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8. |
Beard | Indicates whether or not the face has a beard, and the confidence level in the determination. |
BoundingBox | Identifies the bounding box around the label, face, or text. The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the returned coordinates. |
Celebrity | Provides information about a celebrity recognized by the RecognizeCelebrities operation. |
CelebrityDetail | Information about a recognized celebrity. |
CelebrityRecognition | Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide. |
CompareFacesMatch | Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. |
CompareFacesRequest | |
CompareFacesResponse | |
ComparedFace | Provides face metadata for target image faces that are analyzed by CompareFaces. |
ComparedSourceImageFace | Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison. |
ContentModerationDetection | Information about a moderation label detection in a stored video. |
CreateCollectionRequest | |
CreateCollectionResponse | |
CreateStreamProcessorRequest | |
CreateStreamProcessorResponse | |
DeleteCollectionRequest | |
DeleteCollectionResponse | |
DeleteFacesRequest | |
DeleteFacesResponse | |
DeleteStreamProcessorRequest | |
DeleteStreamProcessorResponse | |
DescribeCollectionRequest | |
DescribeCollectionResponse | |
DescribeStreamProcessorRequest | |
DescribeStreamProcessorResponse | |
DetectFacesRequest | |
DetectFacesResponse | |
DetectLabelsRequest | |
DetectLabelsResponse | |
DetectModerationLabelsRequest | |
DetectModerationLabelsResponse | |
DetectTextRequest | |
DetectTextResponse | |
Emotion | The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY. |
EyeOpen | Indicates whether or not the eyes on the face are open, and the confidence level in the determination. |
Eyeglasses | Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination. |
Face | Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned. |
FaceDetail | Structure containing attributes of the face that the algorithm detected. GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all facial attributes. The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the attributes input parameter of the operation. |
FaceDetection | Information about a face detected in a video analysis request and the time the face was detected in the video. |
FaceMatch | Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face. |
FaceRecord | Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database. |
FaceSearchSettings | Input face recognition parameters for an Amazon Rekognition stream processor. |
Gender | Gender of the face and the confidence level in the determination. |
Geometry | Information about where the text detected by DetectText is located on an image. |
GetCelebrityInfoRequest | |
GetCelebrityInfoResponse | |
GetCelebrityRecognitionRequest | |
GetCelebrityRecognitionResponse | |
GetContentModerationRequest | |
GetContentModerationResponse | |
GetFaceDetectionRequest | |
GetFaceDetectionResponse | |
GetFaceSearchRequest | |
GetFaceSearchResponse | |
GetLabelDetectionRequest | |
GetLabelDetectionResponse | |
GetPersonTrackingRequest | |
GetPersonTrackingResponse | |
Image | Provides the input image either as bytes or an S3 object. You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide. You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide. A minimal sketch of passing image bytes appears after this struct table. |
ImageQuality | Identifies face image brightness and sharpness. |
IndexFacesRequest | |
IndexFacesResponse | |
Instance | An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection). |
KinesisDataStream | The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
KinesisVideoStream | Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
Label | Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence. |
LabelDetection | Information about a label detected in a video analysis request and the time the label was detected in the video. |
Landmark | Indicates the location of the landmark on the face. |
ListCollectionsRequest | |
ListCollectionsResponse | |
ListFacesRequest | |
ListFacesResponse | |
ListStreamProcessorsRequest | |
ListStreamProcessorsResponse | |
ModerationLabel | Provides information about a single type of moderated content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide. |
MouthOpen | Indicates whether or not the mouth on the face is open, and the confidence level in the determination. |
Mustache | Indicates whether or not the face has a mustache, and the confidence level in the determination. |
NotificationChannel | The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see the Amazon Rekognition Developer Guide. |
Parent | A parent label for a label. A label can have 0, 1, or more parents. |
PersonDetail | Details about a person detected in a video analysis request. |
PersonDetection | Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects. For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide. |
PersonMatch | Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch. |
Point | The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image. An array of Point objects forms the polygon of a Geometry object, which describes where text detected by DetectText is located on an image. |
Pose | Indicates the pose of the face as determined by its pitch, roll, and yaw. |
RecognizeCelebritiesRequest | |
RecognizeCelebritiesResponse | |
RekognitionClient | A client for the Amazon Rekognition API. |
S3Object | Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide. |
SearchFacesByImageRequest | |
SearchFacesByImageResponse | |
SearchFacesRequest | |
SearchFacesResponse | |
Smile | Indicates whether or not the face is smiling, and the confidence level in the determination. |
StartCelebrityRecognitionRequest | |
StartCelebrityRecognitionResponse | |
StartContentModerationRequest | |
StartContentModerationResponse | |
StartFaceDetectionRequest | |
StartFaceDetectionResponse | |
StartFaceSearchRequest | |
StartFaceSearchResponse | |
StartLabelDetectionRequest | |
StartLabelDetectionResponse | |
StartPersonTrackingRequest | |
StartPersonTrackingResponse | |
StartStreamProcessorRequest | |
StartStreamProcessorResponse | |
StopStreamProcessorRequest | |
StopStreamProcessorResponse | |
StreamProcessor | An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results. |
StreamProcessorInput | Information about the source streaming video. |
StreamProcessorOutput | Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide. |
StreamProcessorSettings | Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor. |
Sunglasses | Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination. |
TextDetection | Information about a word or line of text detected by DetectText. Every word and line has an identifier (Id). For more information, see Detecting Text in the Amazon Rekognition Developer Guide. |
UnindexedFace | A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed. |
Video | Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. |
VideoMetadata | Information about a video that Amazon Rekognition analyzed. |
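As referenced in the Image entry above, here is a minimal sketch of passing local image bytes to DetectFaces. The file path is a placeholder, `.into()` is used because the blob field's concrete type (Vec<u8> or bytes::Bytes) depends on the rusoto version, and `.sync()` again assumes the futures-based client.

```rust
use std::fs::File;
use std::io::Read;

use rusoto_core::Region;
use rusoto_rekognition::{DetectFacesRequest, Image, Rekognition, RekognitionClient};

fn main() -> std::io::Result<()> {
    // Read a local image into memory; the path is a placeholder.
    let mut buf = Vec::new();
    File::open("face.jpg")?.read_to_end(&mut buf)?;

    let client = RekognitionClient::new(Region::UsEast1);

    let request = DetectFacesRequest {
        image: Image {
            // `.into()` converts the Vec<u8> into the crate's blob type.
            bytes: Some(buf.into()),
            ..Default::default()
        },
        // Request all facial attributes instead of the default subset.
        attributes: Some(vec!["ALL".to_string()]),
    };

    // Blocks on the request; on async rusoto versions, `.await` instead.
    match client.detect_faces(request).sync() {
        Ok(output) => {
            for face in output.face_details.unwrap_or_default() {
                println!("bounding box: {:?}", face.bounding_box);
            }
        }
        Err(e) => eprintln!("DetectFaces failed: {}", e),
    }
    Ok(())
}
```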
Enums
CompareFacesError | Errors returned by CompareFaces |
CreateCollectionError | Errors returned by CreateCollection |
CreateStreamProcessorError | Errors returned by CreateStreamProcessor |
DeleteCollectionError | Errors returned by DeleteCollection |
DeleteFacesError | Errors returned by DeleteFaces |
DeleteStreamProcessorError | Errors returned by DeleteStreamProcessor |
DescribeCollectionError | Errors returned by DescribeCollection |
DescribeStreamProcessorError | Errors returned by DescribeStreamProcessor |
DetectFacesError | Errors returned by DetectFaces |
DetectLabelsError | Errors returned by DetectLabels |
DetectModerationLabelsError | Errors returned by DetectModerationLabels |
DetectTextError | Errors returned by DetectText |
GetCelebrityInfoError | Errors returned by GetCelebrityInfo |
GetCelebrityRecognitionError | Errors returned by GetCelebrityRecognition |
GetContentModerationError | Errors returned by GetContentModeration |
GetFaceDetectionError | Errors returned by GetFaceDetection |
GetFaceSearchError | Errors returned by GetFaceSearch |
GetLabelDetectionError | Errors returned by GetLabelDetection |
GetPersonTrackingError | Errors returned by GetPersonTracking |
IndexFacesError | Errors returned by IndexFaces |
ListCollectionsError | Errors returned by ListCollections |
ListFacesError | Errors returned by ListFaces |
ListStreamProcessorsError | Errors returned by ListStreamProcessors |
RecognizeCelebritiesError | Errors returned by RecognizeCelebrities |
SearchFacesByImageError | Errors returned by SearchFacesByImage |
SearchFacesError | Errors returned by SearchFaces |
StartCelebrityRecognitionError | Errors returned by StartCelebrityRecognition |
StartContentModerationError | Errors returned by StartContentModeration |
StartFaceDetectionError | Errors returned by StartFaceDetection |
StartFaceSearchError | Errors returned by StartFaceSearch |
StartLabelDetectionError | Errors returned by StartLabelDetection |
StartPersonTrackingError | Errors returned by StartPersonTracking |
StartStreamProcessorError | Errors returned by StartStreamProcessor |
StopStreamProcessorError | Errors returned by StopStreamProcessor |
Traits
Rekognition | Trait representing the capabilities of the Amazon Rekognition API. Amazon Rekognition clients implement this trait. |