diff --git a/README.md b/README.md
index 3276de974..4034efefb 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
 
 [MediaPipe](http://g.co/mediapipe) is a framework for building multimodal (eg. video, audio, any time series data) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
 
-![Real-time Face Detection](mediapipe/docs/images/mobile/face_detection_android_gpu_small.gif)
+![Real-time Face Detection](mediapipe/docs/images/realtime_face_detection.gif)
 
 ## Installation
 Follow these [instructions](mediapipe/docs/install.md).
@@ -14,14 +14,14 @@ Follow these [instructions](mediapipe/docs/install.md).
 See mobile and desktop [examples](mediapipe/docs/examples.md).
 
 ## Documentation
-On [MediaPipe Read-the-Docs](https://mediapipe.readthedocs.io/).
+[MediaPipe Read-the-Docs](https://mediapipe.readthedocs.io/).
 
 ## Visualizing MediaPipe graphs
 A web-based visualizer is hosted on [MediaPipe Visualizer](https://mediapipe-viz.appspot.com/). Please also see instructions [here](mediapipe/docs/visualizer.md).
 
 ## Publications
-* [MediaPipe: A Framework for Building Perception Pipelines](https://arxiv.org/) on [arXiv](https://arxiv.org/).
-* [MediaPipe: A Framework for Perceiving and Augmenting Reality](http://mixedreality.cs.cornell.edu/s/22_crv2_MediaPipe_CVPR_CV4ARVR_Workshop_2019_v2.pdf), extended abstract for [Third Workshop on Computer Vision for AR/VR](http://mixedreality.cs.cornell.edu/workshop/program).
+* [MediaPipe: A Framework for Perceiving and Augmenting Reality](http://mixedreality.cs.cornell.edu/s/22_crv2_MediaPipe_CVPR_CV4ARVR_Workshop_2019_v2.pdf), extended abstract for [Third Workshop on Computer Vision for AR/VR](https://sites.google.com/corp/view/perception-cv4arvr/mediapipe).
+* Full-length draft: [MediaPipe: A Framework for Building Perception Pipelines](https://tiny.cc/mediapipe_paper)
 
 ## Contributing
 We welcome contributions. Please follow these [guidelines](./CONTRIBUTING.md).
diff --git a/mediapipe/docs/images/realtime_face_detection.gif b/mediapipe/docs/images/realtime_face_detection.gif
new file mode 100644
index 000000000..a517a68d2
Binary files /dev/null and b/mediapipe/docs/images/realtime_face_detection.gif differ
diff --git a/mediapipe/docs/index.rst b/mediapipe/docs/index.rst
index 163f13276..7b03cf5b3 100644
--- a/mediapipe/docs/index.rst
+++ b/mediapipe/docs/index.rst
@@ -8,9 +8,9 @@ machine learning pipeline can be built as a graph of modular components,
 including, for instance, inference models and media processing functions.
 Sensory data such as audio and video streams enter the graph, and perceived
 descriptions such as object-localization and face-landmark streams exit the
 graph. An example
-graph that performs real-time face detection on mobile GPU is shown below.
+graph that performs real-time hair segmentation on mobile GPU is shown below.
 
-.. image:: images/mobile/face_detection_android_gpu.png
+.. image:: images/mobile/hair_segmentation_android_gpu.png
    :width: 400
    :alt: Example MediaPipe graph
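
For context on the "graph of modular components" described in the README hunk above: a MediaPipe graph is declared as a `CalculatorGraphConfig` text proto and driven from C++. Below is a minimal sketch, loosely based on MediaPipe's hello-world example; the exact header paths, the `MP_RETURN_IF_ERROR` macro, and the `PassThroughCalculator` node reflect one snapshot of the framework and may differ across versions.

```cpp
#include <string>

#include "mediapipe/framework/calculator_graph.h"
#include "mediapipe/framework/port/logging.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"

// Sketch only: builds a one-node graph that passes string packets through,
// prints each output packet, and shuts down cleanly.
::mediapipe::Status RunPassThroughGraph() {
  // The graph topology: one input stream, one calculator node, one output.
  ::mediapipe::CalculatorGraphConfig config =
      ::mediapipe::ParseTextProtoOrDie<::mediapipe::CalculatorGraphConfig>(R"(
        input_stream: "in"
        output_stream: "out"
        node {
          calculator: "PassThroughCalculator"
          input_stream: "in"
          output_stream: "out"
        }
      )");

  ::mediapipe::CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));

  // Print every packet that reaches the "out" stream.
  MP_RETURN_IF_ERROR(graph.ObserveOutputStream(
      "out", [](const ::mediapipe::Packet& packet) {
        LOG(INFO) << packet.Get<std::string>();
        return ::mediapipe::OkStatus();
      }));
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  // Feed three string packets at increasing timestamps, then close the input.
  for (int i = 0; i < 3; ++i) {
    MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
        "in", ::mediapipe::MakePacket<std::string>("Hello MediaPipe!")
                  .At(::mediapipe::Timestamp(i))));
  }
  MP_RETURN_IF_ERROR(graph.CloseInputStream("in"));
  return graph.WaitUntilDone();
}
```

Streams carry timestamped packets between calculators, so swapping a node in the config (say, for a TFLite inference calculator) changes the pipeline without touching the surrounding setup code.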