
MediaPipe

MediaPipe is a framework for building multimodal (e.g., video, audio, or any time-series data), cross-platform (Android, iOS, web, and edge devices) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including inference models (e.g., TensorFlow, TFLite) and media processing functions.
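
To make the graph idea concrete, here is a minimal sketch modeled on MediaPipe's "Hello World" desktop example. It assumes a Bazel build inside the MediaPipe source tree; the function name RunHelloWorldGraph is ours. The graph contains a single PassThroughCalculator; the code feeds it string packets and polls the output stream.

```cpp
#include <string>

#include "mediapipe/framework/calculator_graph.h"
#include "mediapipe/framework/port/logging.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"

mediapipe::Status RunHelloWorldGraph() {
  // A minimal graph: one PassThroughCalculator copying packets from
  // stream "in" to stream "out".
  mediapipe::CalculatorGraphConfig config =
      mediapipe::ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig>(R"(
        input_stream: "in"
        output_stream: "out"
        node {
          calculator: "PassThroughCalculator"
          input_stream: "in"
          output_stream: "out"
        }
      )");

  mediapipe::CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));

  // Attach a poller so we can read packets from the output stream.
  ASSIGN_OR_RETURN(mediapipe::OutputStreamPoller poller,
                   graph.AddOutputStreamPoller("out"));
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  // Feed ten string packets into the graph, each at an increasing timestamp.
  for (int i = 0; i < 10; ++i) {
    MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
        "in", mediapipe::MakePacket<std::string>("Hello World!")
                  .At(mediapipe::Timestamp(i))));
  }
  MP_RETURN_IF_ERROR(graph.CloseInputStream("in"));

  // Print every packet that arrives on the output stream.
  mediapipe::Packet packet;
  while (poller.Next(&packet)) {
    LOG(INFO) << packet.Get<std::string>();
  }
  return graph.WaitUntilDone();
}
```

Richer pipelines follow the same pattern: swap the pass-through node for inference and media-processing calculators and wire their streams together in the graph config.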

Real-time Face Detection

"MediaPipe has made it extremely easy to build our 3D person pose reconstruction demo app, facilitating accelerated neural network inference on device and synchronization of our result visualization with the video capture stream. Highly recommended!" - George Papandreou, CTO, Ariel AI

ML Solutions in MediaPipe

  • Face Detection
  • Multi-hand Tracking
  • Hand Tracking
  • Hair Segmentation
  • Object Tracking

Installation

Follow the installation instructions in the MediaPipe documentation.

Getting started

See the mobile, desktop, and Google Coral examples.

Check out some web demos: Edge Detection, Face Detection, and Hand Tracking.

Documentation

Documentation is available on MediaPipe's Read the Docs site and at docs.mediapipe.dev.

Check out the Examples page for tutorials on how to use MediaPipe, and the Concepts page for basic definitions.

Visualizing MediaPipe graphs

A web-based visualizer is hosted at viz.mediapipe.dev; see the documentation for instructions on using it.
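
As a quick example of something to paste into the visualizer, here is a sketch of the small graph config used by MediaPipe's edge-detection example, in protobuf text format (the calculator names come from that example and are shown for illustration):

```
# Convert the incoming video to luminance, then run Sobel edge detection.
input_stream: "input_video"
output_stream: "output_video"

node: {
  calculator: "LuminanceCalculator"
  input_stream: "input_video"
  output_stream: "luma_video"
}

node: {
  calculator: "SobelEdgesCalculator"
  input_stream: "luma_video"
  output_stream: "output_video"
}
```

The visualizer renders each node and the streams connecting them, which helps check how packets flow before running the graph.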

Videos

Publications

Events

Community forum

  • Discuss - General community discussion around MediaPipe

Alpha Disclaimer

MediaPipe is currently in alpha at v0.6. We are still making breaking API changes and expect to reach a stable API by v1.0.

Contributing

We welcome contributions. Please follow the contribution guidelines.

We use GitHub issues to track feature requests and bugs. Please post questions to Stack Overflow with the 'mediapipe' tag.