Document RunningMode
PiperOrigin-RevId: 492193299
parent 01010fa248
commit a430939fe4
@@ -225,7 +225,9 @@ export class GestureRecognizer extends
   /**
    * Performs gesture recognition on the provided single image and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * GestureRecognizer is created with running mode `image`.
+   *
    * @param image A single image to process.
    * @return The detected gestures.
    */
@@ -235,7 +237,9 @@ export class GestureRecognizer extends
   /**
    * Performs gesture recognition on the provided video frame and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * GestureRecognizer is created with running mode `video`.
+   *
    * @param videoFrame A video frame to process.
    * @param timestamp The timestamp of the current frame, in ms.
    * @return The detected gestures.
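For context, a minimal sketch of how the two running modes documented above might be used with GestureRecognizer. The FilesetResolver loader, createFromOptions, recognize/recognizeForVideo, the wasm and model paths, and the uppercase 'IMAGE'/'VIDEO' constants follow released @mediapipe/tasks-vision builds and are assumptions here, not part of this change.

// Sketch only: loader, paths, and mode constants are assumed from released
// @mediapipe/tasks-vision builds and may differ at this revision.
import {FilesetResolver, GestureRecognizer} from '@mediapipe/tasks-vision';

async function demoGestureModes(image: HTMLImageElement, video: HTMLVideoElement) {
  const wasm = await FilesetResolver.forVisionTasks('/wasm');  // assumed wasm path

  // Image mode: recognize() is only valid on a recognizer created this way.
  const imageRecognizer = await GestureRecognizer.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'gesture_recognizer.task'},  // assumed model path
    runningMode: 'IMAGE',
  });
  console.log(imageRecognizer.recognize(image));

  // Video mode: recognizeForVideo() takes a per-frame timestamp in ms.
  const videoRecognizer = await GestureRecognizer.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'gesture_recognizer.task'},
    runningMode: 'VIDEO',
  });
  console.log(videoRecognizer.recognizeForVideo(video, performance.now()));
}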
@@ -177,7 +177,9 @@ export class HandLandmarker extends VisionTaskRunner<HandLandmarkerResult> {
   /**
    * Performs hand landmarks detection on the provided single image and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * HandLandmarker is created with running mode `image`.
+   *
    * @param image An image to process.
    * @return The detected hand landmarks.
    */
@@ -187,7 +189,9 @@ export class HandLandmarker extends VisionTaskRunner<HandLandmarkerResult> {
   /**
    * Performs hand landmarks detection on the provided video frame and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * HandLandmarker is created with running mode `video`.
+   *
    * @param videoFrame A video frame to process.
    * @param timestamp The timestamp of the current frame, in ms.
    * @return The detected hand landmarks.
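A comparable hedged sketch for HandLandmarker in each documented running mode; detect/detectForVideo, the loader, and the paths are drawn from released @mediapipe/tasks-vision builds, not from this diff.

// Sketch only: assumes the released @mediapipe/tasks-vision web API.
import {FilesetResolver, HandLandmarker} from '@mediapipe/tasks-vision';

async function demoHandLandmarkerModes(image: HTMLImageElement, video: HTMLVideoElement) {
  const wasm = await FilesetResolver.forVisionTasks('/wasm');  // assumed path

  // Running mode `image`: detect() on single images.
  const imageLandmarker = await HandLandmarker.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'hand_landmarker.task'},  // assumed model path
    runningMode: 'IMAGE',
  });
  console.log(imageLandmarker.detect(image));

  // Running mode `video`: detectForVideo() with a timestamp in ms.
  const videoLandmarker = await HandLandmarker.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'hand_landmarker.task'},
    runningMode: 'VIDEO',
  });
  console.log(videoLandmarker.detectForVideo(video, performance.now()));
}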
@@ -120,7 +120,8 @@ export class ImageClassifier extends VisionTaskRunner<ImageClassifierResult> {
   /**
    * Performs image classification on the provided single image and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * ImageClassifier is created with running mode `image`.
    *
    * @param image An image to process.
    * @return The classification result of the image
@@ -131,7 +132,8 @@ export class ImageClassifier extends VisionTaskRunner<ImageClassifierResult> {
   /**
    * Performs image classification on the provided video frame and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * ImageClassifier is created with running mode `video`.
    *
    * @param videoFrame A video frame to process.
    * @param timestamp The timestamp of the current frame, in ms.
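For ImageClassifier, a hedged sketch of the two documented running modes; classify/classifyForVideo, the loader, and the model path are taken from released @mediapipe/tasks-vision builds and are assumptions here.

// Sketch only: assumes the released @mediapipe/tasks-vision web API.
import {FilesetResolver, ImageClassifier} from '@mediapipe/tasks-vision';

async function demoImageClassifierModes(image: HTMLImageElement, video: HTMLVideoElement) {
  const wasm = await FilesetResolver.forVisionTasks('/wasm');  // assumed path

  // Running mode `image`: classify() on single images.
  const imageClassifier = await ImageClassifier.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'image_classifier.tflite'},  // assumed model path
    runningMode: 'IMAGE',
  });
  console.log(imageClassifier.classify(image));

  // Running mode `video`: classifyForVideo() with a timestamp in ms.
  const videoClassifier = await ImageClassifier.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'image_classifier.tflite'},
    runningMode: 'VIDEO',
  });
  console.log(videoClassifier.classifyForVideo(video, performance.now()));
}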
@@ -122,10 +122,8 @@ export class ImageEmbedder extends VisionTaskRunner<ImageEmbedderResult> {
   /**
    * Performs embedding extraction on the provided single image and waits
-   * synchronously for the response.
-   *
-   * Only use this method when the `useStreamMode` option is not set or
-   * expliclity set to `false`.
+   * synchronously for the response. Only use this method when the
+   * ImageEmbedder is created with running mode `image`.
    *
    * @param image The image to process.
    * @return The classification result of the image
@@ -136,9 +134,8 @@ export class ImageEmbedder extends VisionTaskRunner<ImageEmbedderResult> {
   /**
    * Performs embedding extraction on the provided video frame and waits
-   * synchronously for the response.
-   *
-   * Only use this method when the `useStreamMode` option is set to `true`.
+   * synchronously for the response. Only use this method when the
+   * ImageEmbedder is created with running mode `video`.
    *
    * @param imageFrame The image frame to process.
    * @param timestamp The timestamp of the current frame, in ms.
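The ImageEmbedder hunks replace the old `useStreamMode` wording with the running-mode description. A hedged sketch of both modes; embed/embedForVideo, the loader, and the paths follow released @mediapipe/tasks-vision builds rather than this revision.

// Sketch only: assumes the released @mediapipe/tasks-vision web API.
import {FilesetResolver, ImageEmbedder} from '@mediapipe/tasks-vision';

async function demoImageEmbedderModes(image: HTMLImageElement, video: HTMLVideoElement) {
  const wasm = await FilesetResolver.forVisionTasks('/wasm');  // assumed path

  // Running mode `image` (the old docs described this as `useStreamMode` unset or false).
  const imageEmbedder = await ImageEmbedder.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'image_embedder.tflite'},  // assumed model path
    runningMode: 'IMAGE',
  });
  console.log(imageEmbedder.embed(image));

  // Running mode `video` (the old docs described this as `useStreamMode: true`).
  const videoEmbedder = await ImageEmbedder.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'image_embedder.tflite'},
    runningMode: 'VIDEO',
  });
  console.log(videoEmbedder.embedForVideo(video, performance.now()));
}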
@@ -151,7 +151,9 @@ export class ObjectDetector extends VisionTaskRunner<Detection[]> {
   /**
    * Performs object detection on the provided single image and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * ObjectDetector is created with running mode `image`.
+   *
    * @param image An image to process.
    * @return The list of detected objects
    */
@@ -161,7 +163,9 @@ export class ObjectDetector extends VisionTaskRunner<Detection[]> {
   /**
    * Performs object detection on the provided vidoe frame and waits
-   * synchronously for the response.
+   * synchronously for the response. Only use this method when the
+   * ObjectDetector is created with running mode `video`.
+   *
    * @param videoFrame A video frame to process.
    * @param timestamp The timestamp of the current frame, in ms.
    * @return The list of detected objects
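Finally, a hedged sketch for ObjectDetector in each documented running mode; detect/detectForVideo, the loader, and the model path are assumptions drawn from released @mediapipe/tasks-vision builds, not from this diff.

// Sketch only: assumes the released @mediapipe/tasks-vision web API.
import {FilesetResolver, ObjectDetector} from '@mediapipe/tasks-vision';

async function demoObjectDetectorModes(image: HTMLImageElement, video: HTMLVideoElement) {
  const wasm = await FilesetResolver.forVisionTasks('/wasm');  // assumed path

  // Running mode `image`: detect() on single images.
  const imageDetector = await ObjectDetector.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'object_detector.tflite'},  // assumed model path
    runningMode: 'IMAGE',
  });
  console.log(imageDetector.detect(image));

  // Running mode `video`: detectForVideo() with a timestamp in ms.
  const videoDetector = await ObjectDetector.createFromOptions(wasm, {
    baseOptions: {modelAssetPath: 'object_detector.tflite'},
    runningMode: 'VIDEO',
  });
  console.log(videoDetector.detectForVideo(video, performance.now()));
}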