Update links in README.md

PiperOrigin-RevId: 532506851
Sebastian Schmidt 2023-05-16 10:42:50 -07:00 committed by Copybara-Service
parent 8bf6c63e92
commit d53fbf2aeb
3 changed files with 80 additions and 48 deletions


This package contains the audio tasks for MediaPipe.

## Audio Classifier

The MediaPipe Audio Classifier task performs classification on audio data.

```
const audio = await FilesetResolver.forAudioTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-audio/wasm"
);
const audioClassifier = await AudioClassifier.createFromModelPath(audio,
    "https://storage.googleapis.com/mediapipe-models/audio_classifier/yamnet/float32/1/yamnet.tflite"
);
const classifications = audioClassifier.classify(audioData);
```

For more information, refer to the [Audio Classifier](https://developers.google.com/mediapipe/solutions/audio/audio_classifier/web_js) documentation.

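Here `audioData` is a `Float32Array` of PCM samples. Below is a minimal sketch of producing it with the Web Audio API and reading the top category per classified chunk; the file name is a placeholder, and the optional sample-rate argument plus the `classifications[0].categories` result shape are assumptions based on the current `@mediapipe/tasks-audio` types:

```
// Sketch only: decode a clip into PCM samples and classify it.
// "speech_sample.wav" is a placeholder; swap in your own audio source.
const audioCtx = new AudioContext();
const fileBuffer = await (await fetch("speech_sample.wav")).arrayBuffer();
const decoded = await audioCtx.decodeAudioData(fileBuffer);
const audioData = decoded.getChannelData(0);  // mono Float32Array

// classify() returns one result per chunk of audio; log the top category of each.
const results = audioClassifier.classify(audioData, decoded.sampleRate);
for (const result of results) {
  const top = result.classifications[0].categories[0];
  console.log(`${top.categoryName}: ${top.score.toFixed(2)}`);
}
```
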
## Audio Embedding

The MediaPipe Audio Embedding task extracts embeddings from audio data.

```
const audio = await FilesetResolver.forAudioTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-audio/wasm"
);
const audioEmbedder = await AudioEmbedder.createFromModelPath(audio,
    "https://storage.googleapis.com/mediapipe-assets/yamnet_embedding_metadata.tflite?generation=1668295071595506"
);
const embeddings = audioEmbedder.embed(audioData);
```

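Audio embeddings are typically compared with cosine similarity. The following is a small, self-contained helper for that comparison; the inputs are assumed to be the float embedding arrays pulled out of two embedder results (exact result field names can vary by release):

```
// Plain cosine similarity between two embedding vectors, e.g. the
// float embedding arrays taken from two audio embedder results.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```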


This package contains the text tasks for MediaPipe.

## Language Detector

The MediaPipe Language Detector task predicts the language of an input text.

```
const text = await FilesetResolver.forTextTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-text/wasm"
);
const languageDetector = await LanguageDetector.createFromModelPath(text,
    "https://storage.googleapis.com/mediapipe-models/language_detector/language_detector/float32/1/language_detector.tflite"
);
const result = languageDetector.detect(textData);
```

For more information, refer to the [Language Detector](https://developers.google.com/mediapipe/solutions/text/language_detector/web_js) documentation.

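A short sketch of reading the predictions follows; it assumes the result exposes a `languages` array whose entries carry `languageCode` and `probability` (field names follow the current web API and may change between releases):

```
// Sketch: log every language prediction for a sample string.
const detection = languageDetector.detect("Il y a beaucoup de bouchons sur la route");
for (const prediction of detection.languages) {
  console.log(`${prediction.languageCode}: ${(prediction.probability * 100).toFixed(1)}%`);
}
```
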
## Text Classifier

The MediaPipe Text Classifier task lets you classify text into a set of defined
categories, such as positive or negative sentiment.

```
const text = await FilesetResolver.forTextTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-text/wasm"
);
const textClassifier = await TextClassifier.createFromModelPath(text,
    "https://storage.googleapis.com/mediapipe-models/text_classifier/bert_classifier/float32/1/bert_classifier.tflite"
);
const classifications = textClassifier.classify(textData);
```

For more information, refer to the [Text Classification](https://developers.google.com/mediapipe/solutions/text/text_classifier/web_js) documentation.

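To act on the result, read the ranked categories. A sketch, assuming the `classifications[0].categories` shape of the classifier result:

```
// Sketch: print the highest-scoring category for a sample sentence.
const result = textClassifier.classify("This movie was an absolute delight!");
const topCategory = result.classifications[0].categories[0];
console.log(`${topCategory.categoryName} (score ${topCategory.score.toFixed(2)})`);
```
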
## Text Embedder

The MediaPipe Text Embedder task extracts embeddings from text data.

```
const text = await FilesetResolver.forTextTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-text/wasm"
);
const textEmbedder = await TextEmbedder.createFromModelPath(text,
    "https://storage.googleapis.com/mediapipe-models/text_embedder/universal_sentence_encoder/float32/1/universal_sentence_encoder.tflite"
);
const embeddings = textEmbedder.embed(textData);
```

For more information, refer to the [Text Embedder](https://developers.google.com/mediapipe/solutions/text/text_embedder/web_js) documentation.

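Embeddings are usually compared rather than read directly. A sketch using the `cosineSimilarity` helper exposed on the task class, assuming each result carries an `embeddings` array:

```
// Sketch: embed two sentences and compare them with cosine similarity.
const first = textEmbedder.embed("MediaPipe brings on-device ML to the web");
const second = textEmbedder.embed("Machine learning pipelines that run in the browser");
const similarity = TextEmbedder.cosineSimilarity(
    first.embeddings[0], second.embeddings[0]);
console.log("cosine similarity:", similarity);
```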


This package contains the vision tasks for MediaPipe.

## Face Detector

The MediaPipe Face Detector task lets you detect the presence and location of
faces within images or videos.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceDetector = await FaceDetector.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
const detections = faceDetector.detect(image);
```

For more information, refer to the [Face Detector](https://developers.google.com/mediapipe/solutions/vision/face_detector/web_js) documentation.

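The returned object holds one `Detection` per face. A sketch of walking those detections, assuming the `detections`, `boundingBox`, and `categories` fields of the current result type:

```
// Sketch: log the bounding box and score of each detected face.
for (const detection of detections.detections) {
  const box = detection.boundingBox;
  if (!box) continue;
  const score = detection.categories[0]?.score ?? 0;
  console.log(
      `face at (${box.originX}, ${box.originY}), ` +
      `${box.width}x${box.height}px, score ${score.toFixed(2)}`);
}
```
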
## Face Landmarker

The MediaPipe Face Landmarker task lets you detect the landmarks of faces in
an image. You can use this Task to localize key points of a face and render
visual effects over the faces.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceLandmarker = await FaceLandmarker.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = faceLandmarker.detect(image);
```

For more information, refer to the [Face Landmarker](https://developers.google.com/mediapipe/solutions/vision/face_landmarker/web_js) documentation.

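The same task can run on video frames. A sketch of switching to video mode and detecting per frame, assuming the `runningMode` option and the `detectForVideo(frame, timestampMs)` method of the vision tasks API:

```
// Sketch: process a <video> element frame by frame in VIDEO running mode.
const video = document.getElementById("video") as HTMLVideoElement;
await faceLandmarker.setOptions({ runningMode: "VIDEO" });

function onFrame() {
  const result = faceLandmarker.detectForVideo(video, performance.now());
  // result.faceLandmarks holds one array of {x, y, z} points per face.
  console.log("faces detected:", result.faceLandmarks.length);
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```
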
## Face Stylizer

The MediaPipe Face Stylizer lets you perform face stylization on images.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceStylizer = await FaceStylizer.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/face_stylizer/blaze_face_stylizer/float32/1/blaze_face_stylizer.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const stylizedImage = faceStylizer.stylize(image);
```

## Gesture Recognizer

The MediaPipe Gesture Recognizer task lets you recognize hand gestures in real
time, and provides the recognized hand gesture results along with the landmarks
of the detected hands. You can use this task to recognize specific hand gestures
from a user, and invoke application features that correspond to those gestures.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const gestureRecognizer = await GestureRecognizer.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const recognitions = gestureRecognizer.recognize(image);
```

For more information, refer to the [Gesture Recognizer](https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer/web_js) documentation.

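A sketch of reading the recognized gesture for the first detected hand, assuming the `gestures` and `landmarks` fields of the recognizer result (one entry per hand):

```
// Sketch: report the top gesture for the first hand, if any was found.
if (recognitions.gestures.length > 0) {
  const topGesture = recognitions.gestures[0][0];
  console.log(
      `gesture: ${topGesture.categoryName} ` +
      `(score ${topGesture.score.toFixed(2)}), ` +
      `landmarks for this hand: ${recognitions.landmarks[0].length}`);
}
```
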
## Hand Landmarker

The MediaPipe Hand Landmarker task lets you detect the landmarks of the hands in
an image. You can use this Task to localize key points of the hands and render
visual effects over the hands.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const handLandmarker = await HandLandmarker.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = handLandmarker.detect(image);
```

For more information, refer to the [Hand Landmarker](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker/web_js) documentation.

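Landmark coordinates are normalized to the `[0, 1]` range, so rendering them means scaling by the output size. A minimal canvas sketch using the `landmarks` result above; the `overlay` element is a hypothetical `<canvas>` sized to match the source image:

```
// Sketch: draw each hand keypoint onto a hypothetical overlay canvas.
const canvas = document.getElementById("overlay") as HTMLCanvasElement;
const ctx2d = canvas.getContext("2d")!;
for (const hand of landmarks.landmarks) {
  for (const point of hand) {
    ctx2d.beginPath();
    ctx2d.arc(point.x * canvas.width, point.y * canvas.height, 3, 0, 2 * Math.PI);
    ctx2d.fill();
  }
}
```
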
## Image Classifier

The MediaPipe Image Classifier task lets you perform classification on images.
You can use this task to identify what an image represents among a set of
categories defined at training time.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const imageClassifier = await ImageClassifier.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/image_classifier/efficientnet_lite0/float32/1/efficientnet_lite0.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
const classifications = imageClassifier.classify(image);
```

For more information, refer to the [Image Classifier](https://developers.google.com/mediapipe/solutions/vision/image_classifier/web_js) documentation.

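Classification can be tuned at creation time. A sketch using `createFromOptions` with `maxResults` and `scoreThreshold`; the option names are assumed from the current classifier options, and the model URL is the same as above:

```
// Sketch: limit the classifier to its three most confident categories.
const tunedClassifier = await ImageClassifier.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath:
        "https://storage.googleapis.com/mediapipe-models/image_classifier/efficientnet_lite0/float32/1/efficientnet_lite0.tflite"
  },
  maxResults: 3,
  scoreThreshold: 0.2
});
const tunedResult = tunedClassifier.classify(image);
console.log(tunedResult.classifications[0].categories.map(c => c.categoryName));
```
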
## Image Embedder

The MediaPipe Image Embedder extracts embeddings from an image.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const imageEmbedder = await ImageEmbedder.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/image_embedder/mobilenet_v3_small/float32/1/mobilenet_v3_small.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
const embeddings = imageEmbedder.embed(image);
```

## Image Segmenter

The MediaPipe Image Segmenter lets you segment an image into categories.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const imageSegmenter = await ImageSegmenter.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/image_segmenter/deeplab_v3/float32/1/deeplab_v3.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
imageSegmenter.segment(image, (masks, width, height) => {
  ...
});
```

For more information, refer to the [Image Segmenter](https://developers.google.com/mediapipe/solutions/vision/image_segmenter/web_js) documentation.

## Interactive Segmenter

The MediaPipe Interactive Segmenter lets you select a region of interest to
segment an image by.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const interactiveSegmenter = await InteractiveSegmenter.createFromModelPath(
    vision,
    "https://storage.googleapis.com/mediapipe-models/interactive_segmenter/magic_touch/float32/1/magic_touch.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
interactiveSegmenter.segment(image, { keypoint: { x: 0.1, y: 0.2 } },
    (masks, width, height) => { ... }
);
```

For more information, refer to the [Interactive Segmenter](https://developers.google.com/mediapipe/solutions/vision/interactive_segmenter/web_js) documentation.

## Object Detector

The MediaPipe Object Detector task lets you detect the presence and location of
multiple classes of objects within images or videos.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const objectDetector = await ObjectDetector.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/float16/1/efficientdet_lite0.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
const detections = objectDetector.detect(image);
```

For more information, refer to the [Object Detector](https://developers.google.com/mediapipe/solutions/vision/object_detector/web_js) documentation.

## Pose Landmarker

The MediaPipe Pose Landmarker task lets you detect the landmarks of body poses
in an image. You can use this Task to localize key points of a pose and render
visual effects over the body.

```
const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const poseLandmarker = await PoseLandmarker.createFromModelPath(vision,
    "https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_lite/float16/1/pose_landmarker_lite.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = poseLandmarker.detect(image);
```

For more information, refer to the [Pose Landmarker](https://developers.google.com/mediapipe/solutions/vision/pose_landmarker/web_js) documentation.