Project import generated by Copybara.

GitOrigin-RevId: aaca5c37abcf8b7a6c3c28804739afdbad46e704
Authored by MediaPipe Team on 2020-08-13 15:02:55 -04:00; committed by chuoling
parent 73f4475c17
commit a7225b938a
4 changed files with 19 additions and 18 deletions

View File

@@ -88,7 +88,7 @@ run code search using
 ## Publications
-* [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
+* [BlazePose - On-device Real-time Body Pose Tracking](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
   in Google AI Blog
 * [MediaPipe Iris: Real-time Eye Tracking and Depth Estimation](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html)
   in Google AI Blog

View File

@@ -88,7 +88,7 @@ run code search using
 ## Publications
-* [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
+* [BlazePose - On-device Real-time Body Pose Tracking](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
   in Google AI Blog
 * [MediaPipe Iris: Real-time Eye Tracking and Depth Estimation](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html)
   in Google AI Blog

View File

@@ -5,7 +5,7 @@ parent: Solutions
 nav_order: 5
 ---
-# MediaPipe Pose
+# MediaPipe BlazePose
 {: .no_toc }
 1. TOC
@@ -22,12 +22,13 @@ on top of the physical world in augmented reality.
 MediaPipe Pose is a ML solution for high-fidelity upper-body pose tracking,
 inferring 25 2D upper-body landmarks from RGB video frames utilizing our
-[BlazePose](https://mediapipe.page.link/blazepose-blog) research. Current
-state-of-the-art approaches rely primarily on powerful desktop environments for
-inference, whereas our method achieves real-time performance on most modern
-[mobile phones](#mobile), [desktops/laptops](#desktop), in [python](#python) and
-even on the [web](#web). A variant of MediaPipe Pose that performs full-body
-pose tracking on mobile phones will be included in an upcoming release of
+[BlazePose](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
+research. Current state-of-the-art approaches rely primarily on powerful desktop
+environments for inference, whereas our method achieves real-time performance on
+most modern [mobile phones](#mobile), [desktops/laptops](#desktop), in
+[python](#python) and even on the [web](#web). A variant of MediaPipe Pose that
+performs full-body pose tracking on mobile phones will be included in an
+upcoming release of
 [ML Kit](https://developers.google.com/ml-kit/early-access/pose-detection).
 ![pose_tracking_upper_body_example.gif](../images/mobile/pose_tracking_upper_body_example.gif) |
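The paragraph in the hunk above notes that the tracker runs in [python](#python) among other platforms. As a minimal illustrative sketch (not part of this commit), assuming the high-level `mp.solutions.pose` API that the `mediapipe` PyPI package exposes in releases after this doc revision:

```python
# Illustrative sketch, not part of this commit: assumes the high-level
# `mp.solutions.pose` Python API from later `mediapipe` releases.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe consumes RGB frames; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Landmarks are normalized to [0, 1] and carry a visibility score.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose: ({nose.x:.2f}, {nose.y:.2f}), vis {nose.visibility:.2f}")
cap.release()
```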
@@ -91,8 +92,8 @@ The landmark model currently included in MediaPipe Pose predicts the location of
 (x, y location and visibility), plus two virtual alignment keypoints. It shares
 the same architecture as the full-body version that predicts 33 landmarks,
 described in more detail in the
-[BlazePose Google AI Blog](https://mediapipe.page.link/blazepose-blog) and in
-this [paper](https://arxiv.org/abs/2006.10204).
+[BlazePose Google AI Blog](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
+and in this [paper](https://arxiv.org/abs/2006.10204).
 ![pose_tracking_upper_body_landmarks.png](../images/mobile/pose_tracking_upper_body_landmarks.png) |
 :------------------------------------------------------------------------------------------------: |
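The hunk above pins down the landmark payload: 25 upper-body keypoints, each with x, y and visibility, plus two virtual alignment keypoints. A hedged sketch of that structure (type and field names here are illustrative, not the library's):

```python
# Hypothetical names, for illustration only; mirrors the fields the doc
# describes rather than any actual MediaPipe type.
from dataclasses import dataclass
from typing import List

@dataclass
class UpperBodyLandmark:
    x: float           # horizontal position, normalized to [0, 1] by image width
    y: float           # vertical position, normalized to [0, 1] by image height
    visibility: float  # likelihood the keypoint is visible in the frame

# 25 upper-body keypoints plus two virtual alignment keypoints; the full-body
# variant shares this architecture but predicts 33 keypoints.
UpperBodyPrediction = List[UpperBodyLandmark]  # length 25 (+2 virtual)
```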
@@ -168,7 +169,7 @@ Please refer to [these instructions](../index.md#mediapipe-on-the-web).
 ## Resources
 * Google AI Blog:
-  [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
+  [BlazePose - On-device Real-time Body Pose Tracking](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
 * Paper:
   [BlazePose: On-device Real-time Body Pose Tracking](https://arxiv.org/abs/2006.10204)
   ([presentation](https://youtu.be/YPpUOTRn5tA))

View File

@@ -5,7 +5,7 @@
 * For back-facing cameras: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/face_detection_back.tflite)
 * [Model page](https://sites.google.com/corp/view/perception-cv4arvr/blazeface)
 * Paper: ["BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs"](https://arxiv.org/abs/1907.05047)
-* [Model card](https://sites.google.com/corp/view/perception-cv4arvr/blazeface#h.p_21ojPZDx3cqq)
+* [Model card](https://mediapipe.page.link/blazeface-mc)
 ### Face Mesh
 * Face detection: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/face_detection_front.tflite) (see above)
@@ -14,12 +14,12 @@
 * Paper: ["Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs"](https://arxiv.org/abs/1907.06724)
 * [Google AI Blog post](https://ai.googleblog.com/2019/03/real-time-ar-self-expression-with.html)
 * [TensorFlow Blog post](https://blog.tensorflow.org/2020/03/face-and-hand-tracking-in-browser-with-mediapipe-and-tensorflowjs.html)
-* [Model card](https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view)
+* [Model card](https://mediapipe.page.link/facemesh-mc)
 ### Hand Detection and Tracking
 * Palm detection: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/palm_detection.tflite), [TF.js model](https://tfhub.dev/mediapipe/handdetector/1)
 * 3D hand landmarks: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/hand_landmark.tflite), [TF.js model](https://tfhub.dev/mediapipe/handskeleton/1)
-* [Google AI Blog post](https://mediapipe.page.link/handgoogleaiblog)
+* [Google AI Blog post](https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html)
 * [TensorFlow Blog post](https://blog.tensorflow.org/2020/03/face-and-hand-tracking-in-browser-with-mediapipe-and-tensorflowjs.html)
 * [Model card](https://mediapipe.page.link/handmc)
@@ -42,14 +42,14 @@
   [BlazePose: On-device Real-time Body Pose Tracking](https://arxiv.org/abs/2006.10204)
   ([presentation](https://youtu.be/YPpUOTRn5tA))
 * Google AI Blog:
-  [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
+  [BlazePose - On-device Real-time Body Pose Tracking](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html)
 * [Model card](https://mediapipe.page.link/blazepose-mc)
 ### Hair Segmentation
 * [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/hair_segmentation.tflite)
 * [Model page](https://sites.google.com/corp/view/perception-cv4arvr/hair-segmentation)
 * Paper: ["Real-time Hair segmentation and recoloring on Mobile GPUs"](https://arxiv.org/abs/1907.06740)
-* [Model card](https://drive.google.com/file/d/1lPwJ8BD_-3UUor4LayQ0xpa_RIC_hoRh/view)
+* [Model card](https://mediapipe.page.link/hairsegmentation-mc)
 ### Objectron (3D Object Detection)
 * Shoes: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/object_detection_3d_sneakers.tflite)
@@ -64,6 +64,6 @@
 * Up to 200 keypoints: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/knift_float.tflite)
 * Up to 400 keypoints: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/knift_float_400.tflite)
 * Up to 1000 keypoints: [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/knift_float_1k.tflite)
-* [Google Developers Blog post](https://mediapipe.page.link/knift)
+* [Google Developers Blog post](https://developers.googleblog.com/2020/04/mediapipe-knift-template-based-feature-matching.html)
 * [Model card](https://mediapipe.page.link/knift-mc)