Project import generated by Copybara.

GitOrigin-RevId: 9295f8ea2339edb71073695ed4fb3fded2f48c60
Authored by MediaPipe Team on 2020-08-12 21:57:56 -04:00; committed by chuoling
parent 6b0ab0e012
commit d7c287c4e9
249 changed files with 10375 additions and 5347 deletions

MANIFEST.in (new file, 6 lines)

@ -0,0 +1,6 @@
global-exclude .git*
global-exclude *_test.py
recursive-include mediapipe/models *.tflite *.txt
recursive-include mediapipe/modules *.tflite *.txt
recursive-include mediapipe/graphs *.binarypb
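For context, here is a minimal sketch of the packaging side these globs feed into, assuming a conventional setuptools `setup.py`; the actual MediaPipe `setup.py` may differ, so treat the names below as illustrative.

```python
# Hypothetical setup.py fragment: with include_package_data=True, setuptools also
# packages the data files matched by MANIFEST.in (models, label files, binary graphs)
# alongside the Python sources when building distributions.
from setuptools import setup, find_packages

setup(
    name='mediapipe',
    packages=find_packages(),
    include_package_data=True,  # pick up *.tflite, *.txt and *.binarypb from MANIFEST.in
)
```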

View File

@ -22,9 +22,9 @@ desktop/cloud, web and IoT devices.
## ML solutions in MediaPipe

Face Detection | Face Mesh | Iris 🆕 | Hands | Pose 🆕
:---: | :---: | :---: | :---: | :---:
[![face_detection](docs/images/mobile/face_detection_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/face_detection) | [![face_mesh](docs/images/mobile/face_mesh_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/face_mesh) | [![iris](docs/images/mobile/iris_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/iris) | [![hand](docs/images/mobile/hand_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/hands) | [![pose](docs/images/mobile/pose_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/pose)
Hair Segmentation | Object Detection | Box Tracking | Objectron | KNIFT
:---: | :---: | :---: | :---: | :---:
@ -33,20 +33,21 @@ Hair Segmentation
<!-- []() in the first cell is needed to preserve table formatting in GitHub Pages. -->
<!-- Whenever this table is updated, paste a copy to solutions/solutions.md. -->
[]() | Android | iOS | Desktop | Python | Web | Coral
:---------------------------------------------------------------------------- | :-----: | :-: | :-----: | :----: | :-: | :---:
[Face Detection](https://google.github.io/mediapipe/solutions/face_detection) | ✅ | ✅ | ✅ | | ✅ | ✅
[Face Mesh](https://google.github.io/mediapipe/solutions/face_mesh) | ✅ | ✅ | ✅ | | |
[Iris](https://google.github.io/mediapipe/solutions/iris) 🆕 | ✅ | ✅ | ✅ | | ✅ |
[Hands](https://google.github.io/mediapipe/solutions/hands) | ✅ | ✅ | ✅ | | ✅ |
[Pose](https://google.github.io/mediapipe/solutions/pose) 🆕 | ✅ | ✅ | ✅ | ✅ | ✅ |
[Hair Segmentation](https://google.github.io/mediapipe/solutions/hair_segmentation) | ✅ | | ✅ | | ✅ |
[Object Detection](https://google.github.io/mediapipe/solutions/object_detection) | ✅ | ✅ | ✅ | | | ✅
[Box Tracking](https://google.github.io/mediapipe/solutions/box_tracking) | ✅ | ✅ | ✅ | | |
[Objectron](https://google.github.io/mediapipe/solutions/objectron) | ✅ | | | | |
[KNIFT](https://google.github.io/mediapipe/solutions/knift) | ✅ | | | | |
[AutoFlip](https://google.github.io/mediapipe/solutions/autoflip) | | | ✅ | | |
[MediaSequence](https://google.github.io/mediapipe/solutions/media_sequence) | | | ✅ | | |
[YouTube 8M](https://google.github.io/mediapipe/solutions/youtube_8m) | | | ✅ | | |
## MediaPipe on the Web
@ -68,6 +69,7 @@ never leaves your device.
* [MediaPipe Iris: Depth-from-Iris](https://viz.mediapipe.dev/demo/iris_depth)
* [MediaPipe Hands](https://viz.mediapipe.dev/demo/hand_tracking)
* [MediaPipe Hands (palm/hand detection only)](https://viz.mediapipe.dev/demo/hand_detection)
* [MediaPipe Pose](https://viz.mediapipe.dev/demo/pose_tracking)
* [MediaPipe Hair Segmentation](https://viz.mediapipe.dev/demo/hair_segmentation)
## Getting started
@ -86,8 +88,10 @@ run code search using
## Publications
* [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
  in Google AI Blog
* [MediaPipe Iris: Real-time Eye Tracking and Depth Estimation](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html)
  in Google AI Blog
* [MediaPipe KNIFT: Template-based feature matching](https://developers.googleblog.com/2020/04/mediapipe-knift-template-based-feature-matching.html)
  in Google Developers Blog
* [Alfred Camera: Smart camera features using MediaPipe](https://developers.googleblog.com/2020/03/alfred-camera-smart-camera-features-using-mediapipe.html)

View File

@ -137,6 +137,25 @@ http_archive(
urls = ["https://github.com/google/multichannel-audio-tools/archive/master.zip"],
)
# 2020-07-09
http_archive(
name = "pybind11_bazel",
strip_prefix = "pybind11_bazel-203508e14aab7309892a1c5f7dd05debda22d9a5",
urls = ["https://github.com/pybind/pybind11_bazel/archive/203508e14aab7309892a1c5f7dd05debda22d9a5.zip"],
sha256 = "75922da3a1bdb417d820398eb03d4e9bd067c4905a4246d35a44c01d62154d91",
)
http_archive(
name = "pybind11",
urls = [
"https://storage.googleapis.com/mirror.tensorflow.org/github.com/pybind/pybind11/archive/v2.4.3.tar.gz",
"https://github.com/pybind/pybind11/archive/v2.4.3.tar.gz",
],
sha256 = "1eed57bc6863190e35637290f97a20c81cfe4d9090ac0a24f3bbf08f265eb71d",
strip_prefix = "pybind11-2.4.3",
build_file = "@pybind11_bazel//:pybind11.BUILD",
)
http_archive(
name = "ceres_solver",
url = "https://github.com/ceres-solver/ceres-solver/archive/1.14.0.zip",

View File

@ -58,6 +58,9 @@ apps="${app_dir}/*"
for app in ${apps}; do
if [[ -d "${app}" ]]; then
target_name=${app##*/}
if [[ "${target_name}" == "common" ]]; then
continue
fi
target="${app}:${target_name}"
echo "=== Target: ${target}"

View File

@ -422,3 +422,73 @@ Note: This currently works only on Linux, and please first follow
This will open up your webcam as long as it is connected and on. Any errors
are likely due to your webcam not being accessible, or to GPU drivers not being
set up properly.
## Python
### Prerequisite
1. Make sure that Bazel and OpenCV are correctly installed and configured for
MediaPipe. Please see [Installation](./install.md) for how to setup Bazel
and OpenCV for MediaPipe on Linux and macOS.
2. Install the following dependencies.
```bash
# Debian or Ubuntu
$ sudo apt install python3-dev
$ sudo apt install python3-venv
$ sudo apt install -y protobuf-compiler
```
```bash
# macOS
$ brew install protobuf
```
### Set up Python virtual environment.
1. Activate a Python virtual environment.
```bash
$ python3 -m venv mp_env && source mp_env/bin/activate
```
2. In the virtual environment, go to the MediaPipe repo directory.
3. Install the required Python packages.
```bash
(mp_env)mediapipe$ pip3 install -r requirements.txt
```
4. Generate and install MediaPipe package.
```bash
(mp_env)mediapipe$ python3 setup.py gen_protos
(mp_env)mediapipe$ python3 setup.py install
```
### Run in Python interpreter
Make sure you are not in the MediaPipe repo directory.
Using [MediaPipe Pose](../solutions/pose.md) as an example:
```bash
(mp_env)$ python3
>>> import mediapipe as mp
>>> pose_tracker = mp.examples.UpperBodyPoseTracker()
# For image input
>>> pose_landmarks, _ = pose_tracker.run(input_file='/path/to/input/file', output_file='/path/to/output/file')
>>> pose_landmarks, annotated_image = pose_tracker.run(input_file='/path/to/file')
# For live camera input
# (Press Esc within the output image window to stop the run or let it self terminate after 30 seconds.)
>>> pose_tracker.run_live()
# Close the tracker.
>>> pose_tracker.close()
```
Tip: Use command `deactivate` to exit the Python virtual environment.
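The same calls also work as a standalone script. Below is a minimal sketch using exactly the `mp.examples.UpperBodyPoseTracker` API shown above; the file paths are placeholders.

```python
# pose_image_demo.py: run inside the mp_env virtual environment, outside the repo directory.
import mediapipe as mp

pose_tracker = mp.examples.UpperBodyPoseTracker()

# Process a single image and write an annotated copy (paths are placeholders).
pose_landmarks, _ = pose_tracker.run(
    input_file='/path/to/input/file',
    output_file='/path/to/output/file')
print(pose_landmarks)

pose_tracker.close()
```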

View File

@ -18,8 +18,8 @@ This codelab uses MediaPipe on an iOS device.
### What you will learn
How to develop an iOS application that uses MediaPipe and run a MediaPipe graph
on iOS.
### What you will build
@ -42,8 +42,8 @@ We will be using the following graph, [`edge_detection_mobile_gpu.pbtxt`]:
```
# MediaPipe graph that performs GPU Sobel edge detection on a live video stream.
# Used in the examples
# mediapipe/examples/android/src/java/com/google/mediapipe/apps/basic:helloworld
# and mediapipe/examples/ios/helloworld.
# Images coming into and out of the graph.
input_stream: "input_video"
@ -89,21 +89,21 @@ to build it.
First, create an Xcode project via File > New > Single View App.
Set the product name to "HelloWorld", and use an appropriate organization
identifier, such as `com.google.mediapipe`. The organization identifier
along with the product name will be the `bundle_id` for the application, such as
`com.google.mediapipe.HelloWorld`.
Set the language to Objective-C.
Save the project to an appropriate location. Let's call this
`$PROJECT_TEMPLATE_LOC`. So your project will be in the
`$PROJECT_TEMPLATE_LOC/HelloWorld` directory. This directory will contain
another directory named `HelloWorld` and a `HelloWorld.xcodeproj` file.
The `HelloWorld.xcodeproj` will not be useful for this tutorial, as we will use
bazel to build the iOS application. The content of the
`$PROJECT_TEMPLATE_LOC/HelloWorld/HelloWorld` directory is listed below:
1. `AppDelegate.h` and `AppDelegate.m`
2. `ViewController.h` and `ViewController.m`
@ -112,10 +112,10 @@ use bazel to build the iOS application. The content of the
5. `Main.storyboard` and `Launch.storyboard`
6. `Assets.xcassets` directory.
Copy these files to a directory named `HelloWorld` in a location that can access
the MediaPipe source code. For example, the source code of the application that
we will build in this tutorial is located in
`mediapipe/examples/ios/HelloWorld`. We will refer to this path as the
`$APPLICATION_PATH` throughout the codelab.
Note: MediaPipe provides Objective-C bindings for iOS. The edge detection
@ -134,8 +134,8 @@ load(
)
ios_application(
name = "HelloWorldApp",
bundle_id = "com.google.mediapipe.HelloWorld",
families = [
"iphone",
"ipad",
@ -143,11 +143,11 @@ ios_application(
infoplists = ["Info.plist"],
minimum_os_version = MIN_IOS_VERSION,
provisioning_profile = "//mediapipe/examples/ios:developer_provisioning_profile",
deps = [":HelloWorldAppLibrary"],
)
objc_library(
name = "HelloWorldAppLibrary",
srcs = [
"AppDelegate.m",
"ViewController.m",
@ -172,9 +172,8 @@ The `objc_library` rule adds dependencies for the `AppDelegate` and
`ViewController` classes, `main.m` and the application storyboards. The
templated app depends only on the `UIKit` SDK.
The `ios_application` rule uses the generated `HelloWorldAppLibrary` Objective-C
library to build an iOS application for installation on your iOS device.
Note: You need to point to your own iOS developer provisioning profile to be
able to run the application on your iOS device.
@ -182,21 +181,20 @@ able to run the application on your iOS device.
To build the app, use the following command in a terminal:
```
bazel build -c opt --config=ios_arm64 <$APPLICATION_PATH>:HelloWorldApp
```
For example, to build the `HelloWorldApp` application in
`mediapipe/examples/ios/helloworld`, use the following command:
```
bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/helloworld:HelloWorldApp
```
Then, go back to Xcode, open Window > Devices and Simulators, select your
device, and add the `.ipa` file generated by the command above to your device.
Here is the document on [setting up and compiling](./building_examples.md#ios)
iOS MediaPipe apps.
Open the application on your device. Since it is empty, it should display a
blank white screen.
@ -502,8 +500,8 @@ in our app:
}];
```
Note: It is important to start the graph before starting the camera, so that the
graph is ready to process frames as soon as the camera starts sending them.
Earlier, when we received frames from the camera in the `processVideoFrame`
function, we displayed them in the `_liveView` using the `_renderer`. Now, we
@ -552,9 +550,12 @@ results of running the edge detection graph on a live video feed. Congrats!
![edge_detection_ios_gpu_gif](../images/mobile/edge_detection_ios_gpu.gif)
Please note that the iOS examples now use a [common] template app. The code in
this tutorial is used in the [common] template app, and the [helloworld] app has
the appropriate `BUILD` file dependencies for the edge detection graph.
[Bazel]:https://bazel.build/
[`edge_detection_mobile_gpu.pbtxt`]:https://github.com/google/mediapipe/tree/master/mediapipe/graphs/edge_detection/edge_detection_mobile_gpu.pbtxt
[MediaPipe installation guide]:./install.md
[common]:https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/common
[helloworld]:https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/helloworld

View File

@ -27,13 +27,14 @@ Repository command failed
usually indicates that Bazel fails to find the local Python binary. To solve
this issue, please first find where the python binary is and then add
`--action_env PYTHON_BIN_PATH=<path to python binary>` to the Bazel command. For
example, you can switch to use the system default python3 binary with the
following command:
```
bazel build -c opt \
    --define MEDIAPIPE_DISABLE_GPU=1 \
    --action_env PYTHON_BIN_PATH=$(which python3) \
    mediapipe/examples/desktop/hello_world
```

Binary file added (image, 996 KiB)
Binary file added (image, 313 KiB)
Binary file added (image, 53 KiB)
Binary file added (image, 6.9 MiB)
Binary file added (image, 224 KiB)

View File

@ -22,9 +22,9 @@ desktop/cloud, web and IoT devices.
## ML solutions in MediaPipe

Face Detection | Face Mesh | Iris 🆕 | Hands | Pose 🆕
:---: | :---: | :---: | :---: | :---:
[![face_detection](images/mobile/face_detection_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/face_detection) | [![face_mesh](images/mobile/face_mesh_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/face_mesh) | [![iris](images/mobile/iris_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/iris) | [![hand](images/mobile/hand_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/hands) | [![pose](images/mobile/pose_tracking_android_gpu_small.gif)](https://google.github.io/mediapipe/solutions/pose)
Hair Segmentation | Object Detection | Box Tracking | Objectron | KNIFT
:---: | :---: | :---: | :---: | :---:
@ -33,20 +33,21 @@ Hair Segmentation
<!-- []() in the first cell is needed to preserve table formatting in GitHub Pages. -->
<!-- Whenever this table is updated, paste a copy to solutions/solutions.md. -->
[]() | Android | iOS | Desktop | Python | Web | Coral
:---------------------------------------------------------------------------- | :-----: | :-: | :-----: | :----: | :-: | :---:
[Face Detection](https://google.github.io/mediapipe/solutions/face_detection) | ✅ | ✅ | ✅ | | ✅ | ✅
[Face Mesh](https://google.github.io/mediapipe/solutions/face_mesh) | ✅ | ✅ | ✅ | | |
[Iris](https://google.github.io/mediapipe/solutions/iris) 🆕 | ✅ | ✅ | ✅ | | ✅ |
[Hands](https://google.github.io/mediapipe/solutions/hands) | ✅ | ✅ | ✅ | | ✅ |
[Pose](https://google.github.io/mediapipe/solutions/pose) 🆕 | ✅ | ✅ | ✅ | ✅ | ✅ |
[Hair Segmentation](https://google.github.io/mediapipe/solutions/hair_segmentation) | ✅ | | ✅ | | ✅ |
[Object Detection](https://google.github.io/mediapipe/solutions/object_detection) | ✅ | ✅ | ✅ | | | ✅
[Box Tracking](https://google.github.io/mediapipe/solutions/box_tracking) | ✅ | ✅ | ✅ | | |
[Objectron](https://google.github.io/mediapipe/solutions/objectron) | ✅ | | | | |
[KNIFT](https://google.github.io/mediapipe/solutions/knift) | ✅ | | | | |
[AutoFlip](https://google.github.io/mediapipe/solutions/autoflip) | | | ✅ | | |
[MediaSequence](https://google.github.io/mediapipe/solutions/media_sequence) | | | ✅ | | |
[YouTube 8M](https://google.github.io/mediapipe/solutions/youtube_8m) | | | ✅ | | |
## MediaPipe on the Web
@ -68,6 +69,7 @@ never leaves your device.
* [MediaPipe Iris: Depth-from-Iris](https://viz.mediapipe.dev/demo/iris_depth)
* [MediaPipe Hands](https://viz.mediapipe.dev/demo/hand_tracking)
* [MediaPipe Hands (palm/hand detection only)](https://viz.mediapipe.dev/demo/hand_detection)
* [MediaPipe Pose](https://viz.mediapipe.dev/demo/pose_tracking)
* [MediaPipe Hair Segmentation](https://viz.mediapipe.dev/demo/hair_segmentation)
## Getting started
@ -86,8 +88,10 @@ run code search using
## Publications
* [BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
  in Google AI Blog
* [MediaPipe Iris: Real-time Eye Tracking and Depth Estimation](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html)
  in Google AI Blog
* [MediaPipe KNIFT: Template-based feature matching](https://developers.googleblog.com/2020/04/mediapipe-knift-template-based-feature-matching.html)
  in Google Developers Blog
* [Alfred Camera: Smart camera features using MediaPipe](https://developers.googleblog.com/2020/03/alfred-camera-smart-camera-features-using-mediapipe.html)

View File

@ -2,7 +2,7 @@
layout: default
title: AutoFlip (Saliency-aware Video Cropping)
parent: Solutions
nav_order: 11
---
# AutoFlip: Saliency-aware Video Cropping

View File

@ -2,7 +2,7 @@
layout: default
title: Box Tracking
parent: Solutions
nav_order: 8
---
# MediaPipe Box Tracking

View File

@ -107,4 +107,4 @@ to cross-compile and run MediaPipe examples on the
[TFLite model quantized for EdgeTPU/Coral](https://github.com/google/mediapipe/tree/master/mediapipe/examples/coral/models/face-detector-quantized_edgetpu.tflite)
* For back-facing camera:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/face_detection_back.tflite)
* [Model card](https://mediapipe.page.link/blazeface-mc)

View File

@ -125,7 +125,7 @@ Tip: Maximum number of faces to detect/process is set to 1 by default. To change
it, for Android modify `NUM_FACES` in
[MainActivity.java](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facemeshgpu/MainActivity.java),
and for iOS modify `kNumFaces` in
[FaceMeshGpuViewController.mm](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/facemeshgpu/FaceMeshGpuViewController.mm).
### Desktop
@ -157,4 +157,4 @@ it, in the graph file modify the option of `ConstantSidePacketCalculator`.
* Face landmark model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/face_landmark/face_landmark.tflite),
[TF.js model](https://tfhub.dev/mediapipe/facemesh/1)
* [Model card](https://mediapipe.page.link/facemesh-mc)

View File

@ -2,7 +2,7 @@
layout: default
title: Hair Segmentation
parent: Solutions
nav_order: 6
---
# MediaPipe Hair Segmentation
@ -55,4 +55,4 @@ Please refer to [these instructions](../index.md#mediapipe-on-the-web).
([presentation](https://drive.google.com/file/d/1C8WYlWdDRNtU1_pYBvkkG5Z5wqYqf0yj/view))
([supplementary video](https://drive.google.com/file/d/1LPtM99Ch2ogyXYbDNpEqnUfhFq0TfLuf/view))
* [TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/hair_segmentation.tflite)
* [Model card](https://mediapipe.page.link/hairsegmentation-mc)

View File

@ -102,7 +102,7 @@ camera with less than 10% error, without requiring any specialized hardware.
This is done by relying on the fact that the horizontal iris diameter of the
human eye remains roughly constant at 11.7±0.5 mm across a wide population,
along with some simple geometric arguments. For more details please refer to our
[Google AI Blog post](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html).
![iris_tracking_depth_from_iris.gif](../images/mobile/iris_tracking_depth_from_iris.gif) |
:--------------------------------------------------------------------------------------------: |
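As a rough illustration of those geometric arguments, the pinhole-camera relation below recovers distance from the iris's apparent size; the function and constant names are ours for illustration, not MediaPipe's implementation, which also relies on the focal length reported by the device.

```python
# Back-of-the-envelope sketch: with the physical iris diameter nearly constant across
# people, similar triangles relate focal length, apparent iris size, and distance.
IRIS_DIAMETER_MM = 11.7  # average horizontal iris diameter cited above

def depth_from_iris_mm(focal_length_px: float, iris_diameter_px: float) -> float:
    """Approximate eye-to-camera distance in millimeters."""
    return focal_length_px * IRIS_DIAMETER_MM / iris_diameter_px

# Example: a 1000 px focal length and a 24 px iris put the eye roughly 487 mm away.
print(depth_from_iris_mm(1000.0, 24.0))
```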
@ -189,8 +189,8 @@ Please refer to [these instructions](../index.md#mediapipe-on-the-web).
## Resources
* Google AI Blog:
[MediaPipe Iris: Real-time Eye Tracking and Depth Estimation](https://ai.googleblog.com/2020/08/mediapipe-iris-real-time-iris-tracking.html)
* Paper:
[Real-time Pupil Tracking from Monocular Video for Digital Puppetry](https://arxiv.org/abs/2006.11341)
([presentation](https://youtu.be/cIhXkiiapQI))

View File

@ -2,7 +2,7 @@
layout: default
title: KNIFT (Template-based Feature Matching)
parent: Solutions
nav_order: 10
---
# MediaPipe KNIFT

View File

@ -2,7 +2,7 @@
layout: default
title: Dataset Preparation with MediaSequence
parent: Solutions
nav_order: 12
---
# Dataset Preparation with MediaSequence

View File

@ -2,7 +2,7 @@
layout: default
title: Object Detection
parent: Solutions
nav_order: 7
---
# MediaPipe Object Detection

View File

@ -2,7 +2,7 @@
layout: default
title: Objectron (3D Object Detection)
parent: Solutions
nav_order: 9
---
# MediaPipe Objectron

docs/solutions/pose.md (new file, 179 lines)

@ -0,0 +1,179 @@
---
layout: default
title: Pose
parent: Solutions
nav_order: 5
---
# MediaPipe Pose
{: .no_toc }
1. TOC
{:toc}
---
## Overview
Human pose estimation from video plays a critical role in various applications
such as quantifying physical exercises, sign language recognition, and full-body
gesture control. For example, it can form the basis for yoga, dance, and fitness
applications. It can also enable the overlay of digital content and information
on top of the physical world in augmented reality.
MediaPipe Pose is an ML solution for high-fidelity upper-body pose tracking,
inferring 25 2D upper-body landmarks from RGB video frames utilizing our
[BlazePose](https://mediapipe.page.link/blazepose-blog) research. Current
state-of-the-art approaches rely primarily on powerful desktop environments for
inference, whereas our method achieves real-time performance on most modern
[mobile phones](#mobile), [desktops/laptops](#desktop), in [Python](#python) and
even on the [web](#web). A variant of MediaPipe Pose that performs full-body
pose tracking on mobile phones will be included in an upcoming release of
[ML Kit](https://developers.google.com/ml-kit/early-access/pose-detection).
![pose_tracking_upper_body_example.gif](../images/mobile/pose_tracking_upper_body_example.gif) |
:--------------------------------------------------------------------------------------------: |
*Fig 1. Example of MediaPipe Pose for upper-body pose tracking.* |
## ML Pipeline
The solution utilizes a two-step detector-tracker ML pipeline, proven to be
effective in our [MediaPipe Hands](./hands.md) and
[MediaPipe Face Mesh](./face_mesh.md) solutions. Using a detector, the pipeline
first locates the pose region-of-interest (ROI) within the frame. The tracker
subsequently predicts the pose landmarks within the ROI using the ROI-cropped
frame as input. Note that for video use cases the detector is invoked only as
needed, i.e., for the very first frame and when the tracker could no longer
identify body pose presence in the previous frame. For other frames the pipeline
simply derives the ROI from the previous frame's pose landmarks.
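A plain-Python sketch of that scheduling policy is shown below; `detector`, `landmark_model`, and the helper functions are hypothetical stand-ins, since the real pipeline is the MediaPipe graph described next.

```python
# Illustrative pseudocode for the detector-as-needed policy described above.
def track_poses(frames, detector, landmark_model, crop, roi_from_landmarks):
    roi = None  # region of interest carried over from the previous frame
    for frame in frames:
        if roi is None:
            roi = detector.detect(frame)  # run the heavier detector only when needed
            if roi is None:
                continue  # no person found; try detection again on the next frame
        landmarks = landmark_model.predict(crop(frame, roi))
        if landmarks is None:
            roi = None  # tracking lost; fall back to the detector on the next frame
            continue
        yield landmarks
        roi = roi_from_landmarks(landmarks)  # derive the next frame's ROI from landmarks
```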
The pipeline is implemented as a MediaPipe
[graph](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/pose_tracking/upper_body_pose_tracking_gpu.pbtxt)
that uses a
[pose landmark subgraph](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_upper_body_gpu.pbtxt)
from the
[pose landmark module](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark)
and renders using a dedicated
[upper-body pose renderer subgraph](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/pose_tracking/subgraphs/upper_body_pose_renderer_gpu.pbtxt).
The
[pose landmark subgraph](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_upper_body_gpu.pbtxt)
internally uses a
[pose detection subgraph](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection/pose_detection_gpu.pbtxt)
from the
[pose detection module](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection).
Note: To visualize a graph, copy the graph and paste it into
[MediaPipe Visualizer](https://viz.mediapipe.dev/). For more information on how
to visualize its associated subgraphs, please see
[visualizer documentation](../tools/visualizer.md).
## Models
### Pose Detection Model (BlazePose Detector)
The detector is inspired by our own lightweight
[BlazeFace](https://arxiv.org/abs/1907.05047) model, used in
[MediaPipe Face Detection](./face_detection.md), as a proxy for a person
detector. It explicitly predicts two additional virtual keypoints that firmly
describe the human body center, rotation and scale as a circle. Inspired by
[Leonardo's Vitruvian Man](https://en.wikipedia.org/wiki/Vitruvian_Man), we
predict the midpoint of a person's hips, the radius of a circle circumscribing
the whole person, and the incline angle of the line connecting the shoulder and
hip midpoints.
![pose_tracking_detector_vitruvian_man.png](../images/mobile/pose_tracking_detector_vitruvian_man.png) |
:----------------------------------------------------------------------------------------------------: |
*Fig 2. Vitruvian man aligned via two virtual keypoints predicted by BlazePose detector in addition to the face bounding box.* |
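A hedged sketch of how two such keypoints can define an aligned region of interest, with rotation taken from the incline of the connecting line and scale from the circle radius; the names and normalization below are illustrative, not the model's exact post-processing.

```python
import math

def roi_from_alignment_keypoints(hip_center, circle_edge):
    """hip_center and circle_edge are (x, y) keypoints predicted by the detector."""
    dx = circle_edge[0] - hip_center[0]
    dy = circle_edge[1] - hip_center[1]
    radius = math.hypot(dx, dy)    # radius of the circle circumscribing the person
    rotation = math.atan2(dy, dx)  # incline angle used to rotate the crop upright
    size = 2.0 * radius            # square ROI sized by the circle diameter
    return {'center': hip_center, 'size': size, 'rotation': rotation}

# Example: hips at (0.5, 0.6) and circle edge straight above at (0.5, 0.2).
print(roi_from_alignment_keypoints((0.5, 0.6), (0.5, 0.2)))
```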
### Pose Landmark Model (BlazePose Tracker)
The landmark model currently included in MediaPipe Pose predicts the location of
25 upper-body landmarks (see figure below), with three degrees of freedom each
(x, y location and visibility), plus two virtual alignment keypoints. It shares
the same architecture as the full-body version that predicts 33 landmarks,
described in more detail in the
[BlazePose Google AI Blog](https://mediapipe.page.link/blazepose-blog) and in
this [paper](https://arxiv.org/abs/2006.10204).
![pose_tracking_upper_body_landmarks.png](../images/mobile/pose_tracking_upper_body_landmarks.png) |
:------------------------------------------------------------------------------------------------: |
*Fig 3. 25 upper-body pose landmarks.* |
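For readers new to the output format, the snippet below gives an illustrative plain-Python view of the "25 landmarks x (x, y, visibility)" prediction described above; the solution itself emits landmark protos, so this is only a mental model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Landmark:
    x: float           # horizontal position, normalized to the image width
    y: float           # vertical position, normalized to the image height
    visibility: float  # likelihood that the landmark is visible (not occluded)

# An upper-body prediction: 25 landmarks, plus two virtual alignment keypoints
# that the pipeline uses to derive the ROI for the next frame.
UpperBodyPose = List[Landmark]
```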
## Example Apps
Please first see general instructions for
[Android](../getting_started/building_examples.md#android),
[iOS](../getting_started/building_examples.md#ios),
[desktop](../getting_started/building_examples.md#desktop) and
[Python](../getting_started/building_examples.md#python) on how to build
MediaPipe examples.
Note: To visualize a graph, copy the graph and paste it into
[MediaPipe Visualizer](https://viz.mediapipe.dev/). For more information on how
to visualize its associated subgraphs, please see
[visualizer documentation](../tools/visualizer.md).
### Mobile
* Graph:
[`mediapipe/graphs/pose_tracking/upper_body_pose_tracking_gpu.pbtxt`](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/pose_tracking/upper_body_pose_tracking_gpu.pbtxt)
* Android target:
[`mediapipe/examples/android/src/java/com/google/mediapipe/apps/upperbodyposetrackinggpu:upperbodyposetrackinggpu`](https://github.com/google/mediapipe/tree/master/mediapipe/examples/android/src/java/com/google/mediapipe/apps/upperbodyposetrackinggpu/BUILD)
[(or download prebuilt ARM64 APK)](https://drive.google.com/file/d/1uKc6T7KSuA0Mlq2URi5YookHu0U3yoh_/view?usp=sharing)
* iOS target:
[`mediapipe/examples/ios/upperbodyposetrackinggpu:UpperBodyPoseTrackingGpuApp`](https://github.com/google/mediapipe/tree/master/mediapipe/examples/ios/upperbodyposetrackinggpu/BUILD)
### Desktop
Please first see general instructions for
[desktop](../getting_started/building_examples.md#desktop) on how to build
MediaPipe examples.
* Running on CPU
* Graph:
[`mediapipe/graphs/pose_tracking/upper_body_pose_tracking_cpu.pbtxt`](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/pose_tracking/upper_body_pose_tracking_cpu.pbtxt)
* Target:
[`mediapipe/examples/desktop/upper_body_pose_tracking:upper_body_pose_tracking_cpu`](https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/upper_body_pose_tracking/BUILD)
* Running on GPU
* Graph:
[`mediapipe/graphs/pose_tracking/upper_body_pose_tracking_gpu.pbtxt`](https://github.com/google/mediapipe/tree/master/mediapipe/graphs/pose_tracking/upper_body_pose_tracking_gpu.pbtxt)
* Target:
[`mediapipe/examples/desktop/upper_body_pose_tracking:upper_body_pose_tracking_gpu`](https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/upper_body_pose_tracking/BUILD)
### Python
Please first see general instructions for
[Python](../getting_started/building_examples.md#python) examples.
```bash
(mp_env)$ python3
>>> import mediapipe as mp
>>> pose_tracker = mp.examples.UpperBodyPoseTracker()
# For image input
>>> pose_landmarks, _ = pose_tracker.run(input_file='/path/to/input/file', output_file='/path/to/output/file')
>>> pose_landmarks, annotated_image = pose_tracker.run(input_file='/path/to/file')
# For live camera input
# (Press Esc within the output image window to stop the run or let it self terminate after 30 seconds.)
>>> pose_tracker.run_live()
# Close the tracker.
>>> pose_tracker.close()
```
### Web
Please refer to [these instructions](../index.md#mediapipe-on-the-web).
## Resources
* Google AI Blog:
[BlazePose - On-device Real-time Body Pose Tracking](https://mediapipe.page.link/blazepose-blog)
* Paper:
[BlazePose: On-device Real-time Body Pose Tracking](https://arxiv.org/abs/2006.10204)
([presentation](https://youtu.be/YPpUOTRn5tA))
* Pose detection model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection/pose_detection.tflite)
* Upper-body pose landmark model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_upper_body.tflite)
* [Model card](https://mediapipe.page.link/blazepose-mc)

View File

@ -16,17 +16,18 @@ has_toc: false
<!-- []() in the first cell is needed to preserve table formatting in GitHub Pages. -->
<!-- Whenever this table is updated, paste a copy to ../external_index.md. -->
[]() | Android | iOS | Desktop | Python | Web | Coral
:---------------------------------------------------------------------------- | :-----: | :-: | :-----: | :----: | :-: | :---:
[Face Detection](https://google.github.io/mediapipe/solutions/face_detection) | ✅ | ✅ | ✅ | | ✅ | ✅
[Face Mesh](https://google.github.io/mediapipe/solutions/face_mesh) | ✅ | ✅ | ✅ | | |
[Iris](https://google.github.io/mediapipe/solutions/iris) 🆕 | ✅ | ✅ | ✅ | | ✅ |
[Hands](https://google.github.io/mediapipe/solutions/hands) | ✅ | ✅ | ✅ | | ✅ |
[Pose](https://google.github.io/mediapipe/solutions/pose) 🆕 | ✅ | ✅ | ✅ | ✅ | ✅ |
[Hair Segmentation](https://google.github.io/mediapipe/solutions/hair_segmentation) | ✅ | | ✅ | | ✅ |
[Object Detection](https://google.github.io/mediapipe/solutions/object_detection) | ✅ | ✅ | ✅ | | | ✅
[Box Tracking](https://google.github.io/mediapipe/solutions/box_tracking) | ✅ | ✅ | ✅ | | |
[Objectron](https://google.github.io/mediapipe/solutions/objectron) | ✅ | | | | |
[KNIFT](https://google.github.io/mediapipe/solutions/knift) | ✅ | | | | |
[AutoFlip](https://google.github.io/mediapipe/solutions/autoflip) | | | ✅ | | |
[MediaSequence](https://google.github.io/mediapipe/solutions/media_sequence) | | | ✅ | | |
[YouTube 8M](https://google.github.io/mediapipe/solutions/youtube_8m) | | | ✅ | | |

View File

@ -2,7 +2,7 @@
layout: default
title: YouTube-8M Feature Extraction and Model Inference
parent: Solutions
nav_order: 13
---
# YouTube-8M Feature Extraction and Model Inference

View File

@ -294,7 +294,7 @@ trace_log_margin_usec
in trace log output. This margin allows time for events to be appended to
the TraceBuffer.
trace_log_instant_events
: False specifies an event for each calculator invocation. True specifies a
separate event for each start and finish time.

View File

@ -3,8 +3,11 @@
"/BUILD",
"mediapipe/BUILD",
"mediapipe/objc/BUILD",
"mediapipe/framework/BUILD",
"mediapipe/gpu/BUILD",
"mediapipe/objc/testing/app/BUILD",
"mediapipe/examples/ios/common/BUILD",
"mediapipe/examples/ios/helloworld/BUILD",
"mediapipe/examples/ios/facedetectioncpu/BUILD",
"mediapipe/examples/ios/facedetectiongpu/BUILD",
"mediapipe/examples/ios/facemeshgpu/BUILD",
@ -13,10 +16,11 @@
"mediapipe/examples/ios/iristrackinggpu/BUILD",
"mediapipe/examples/ios/multihandtrackinggpu/BUILD",
"mediapipe/examples/ios/objectdetectioncpu/BUILD",
"mediapipe/examples/ios/objectdetectiongpu/BUILD",
"mediapipe/examples/ios/upperbodyposetrackinggpu/BUILD"
],
"buildTargets" : [
"//mediapipe/examples/ios/helloworld:HelloWorldApp",
"//mediapipe/examples/ios/facedetectioncpu:FaceDetectionCpuApp",
"//mediapipe/examples/ios/facedetectiongpu:FaceDetectionGpuApp",
"//mediapipe/examples/ios/facemeshgpu:FaceMeshGpuApp",
@ -26,6 +30,7 @@
"//mediapipe/examples/ios/multihandtrackinggpu:MultiHandTrackingGpuApp",
"//mediapipe/examples/ios/objectdetectioncpu:ObjectDetectionCpuApp",
"//mediapipe/examples/ios/objectdetectiongpu:ObjectDetectionGpuApp",
"//mediapipe/examples/ios/upperbodyposetrackinggpu:UpperBodyPoseTrackingGpuApp",
"//mediapipe/objc:mediapipe_framework_ios"
],
"optionSet" : {
@ -80,24 +85,18 @@
"mediapipe/calculators/util",
"mediapipe/examples",
"mediapipe/examples/ios",
"mediapipe/examples/ios/common",
"mediapipe/examples/ios/common/Base.lproj",
"mediapipe/examples/ios/helloworld",
"mediapipe/examples/ios/facedetectioncpu",
"mediapipe/examples/ios/facedetectiongpu",
"mediapipe/examples/ios/handdetectiongpu",
"mediapipe/examples/ios/handtrackinggpu",
"mediapipe/examples/ios/iristrackinggpu",
"mediapipe/examples/ios/multihandtrackinggpu",
"mediapipe/examples/ios/objectdetectioncpu",
"mediapipe/examples/ios/objectdetectiongpu",
"mediapipe/examples/ios/upperbodyposetrackinggpu",
"mediapipe/framework",
"mediapipe/framework/deps",
"mediapipe/framework/formats",
@ -113,6 +112,7 @@
"mediapipe/graphs/face_detection",
"mediapipe/graphs/hand_tracking",
"mediapipe/graphs/object_detection",
"mediapipe/graphs/pose_tracking",
"mediapipe/models",
"mediapipe/modules",
"mediapipe/objc",

View File

@ -11,7 +11,6 @@
"mediapipe",
"mediapipe/objc",
"mediapipe/examples/ios",
"mediapipe/examples/ios/facedetectioncpu",
"mediapipe/examples/ios/facedetectiongpu",
"mediapipe/examples/ios/facemeshgpu",
@ -20,7 +19,8 @@
"mediapipe/examples/ios/iristrackinggpu",
"mediapipe/examples/ios/multihandtrackinggpu",
"mediapipe/examples/ios/objectdetectioncpu",
"mediapipe/examples/ios/objectdetectiongpu",
"mediapipe/examples/ios/upperbodyposetrackinggpu"
],
"projectName" : "Mediapipe",
"workspaceRoot" : "../.."

View File

@ -1,4 +1,4 @@
"""Copyright 2019 - 2020 The MediaPipe Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -12,3 +12,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import mediapipe.examples.python as examples
from mediapipe.python import *
import mediapipe.util as util

View File

@ -606,6 +606,35 @@ cc_library(
alwayslink = 1,
)
cc_library(
name = "packet_presence_calculator",
srcs = ["packet_presence_calculator.cc"],
visibility = ["//visibility:public"],
deps = [
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework:packet",
"//mediapipe/framework:timestamp",
"//mediapipe/framework/port:status",
],
alwayslink = 1,
)
cc_test(
name = "packet_presence_calculator_test",
srcs = ["packet_presence_calculator_test.cc"],
deps = [
":gate_calculator",
":packet_presence_calculator",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework:calculator_runner",
"//mediapipe/framework:timestamp",
"//mediapipe/framework/port:gtest_main",
"//mediapipe/framework/port:parse_text_proto",
"//mediapipe/framework/port:status",
"//mediapipe/framework/tool:sink",
],
)
cc_library( cc_library(
name = "previous_loopback_calculator", name = "previous_loopback_calculator",
srcs = ["previous_loopback_calculator.cc"], srcs = ["previous_loopback_calculator.cc"],


@ -0,0 +1,84 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/port/status.h"
namespace mediapipe {
// For each non-empty input packet, emits a single output packet containing the
// boolean value "true"; emits "false" in response to empty packets (a.k.a.
// timestamp bound updates). This can be used to "flag" the presence of an
// arbitrary packet type as input into a downstream calculator.
//
// Inputs:
// PACKET - any type.
//
// Outputs:
// PRESENCE - bool.
//     "true" if the packet is not empty, "false" if there is a timestamp bound
//     update instead.
//
// Examples:
// node: {
// calculator: "PacketPresenceCalculator"
// input_stream: "PACKET:packet"
// output_stream: "PRESENCE:presence"
// }
//
// This calculator can be used in conjunction with GateCalculator in order to
// allow/disallow processing. For instance:
// node: {
// calculator: "PacketPresenceCalculator"
// input_stream: "PACKET:value"
// output_stream: "PRESENCE:disallow_if_present"
// }
// node {
// calculator: "GateCalculator"
// input_stream: "image"
// input_stream: "DISALLOW:disallow_if_present"
// output_stream: "image_for_processing"
// options: {
// [mediapipe.GateCalculatorOptions.ext] {
// empty_packets_as_allow: true
// }
// }
// }
class PacketPresenceCalculator : public CalculatorBase {
public:
static ::mediapipe::Status GetContract(CalculatorContract* cc) {
cc->Inputs().Tag("PACKET").SetAny();
cc->Outputs().Tag("PRESENCE").Set<bool>();
// Process() function is invoked in response to input stream timestamp
// bound updates.
cc->SetProcessTimestampBounds(true);
return ::mediapipe::OkStatus();
}
::mediapipe::Status Open(CalculatorContext* cc) override {
cc->SetOffset(TimestampDiff(0));
return ::mediapipe::OkStatus();
}
::mediapipe::Status Process(CalculatorContext* cc) final {
cc->Outputs()
.Tag("PRESENCE")
.AddPacket(MakePacket<bool>(!cc->Inputs().Tag("PACKET").IsEmpty())
.At(cc->InputTimestamp()));
return ::mediapipe::OkStatus();
}
};
REGISTER_CALCULATOR(PacketPresenceCalculator);
} // namespace mediapipe
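
To make the gating pattern from the comment above concrete, here is a minimal C++ sketch (not part of this commit) that builds the two-node graph from that example and pushes one synchronized pair of packets through it. The stream names are taken from the comment; the function name RunPresenceGatingSketch and the int payloads are placeholders, and it assumes PacketPresenceCalculator, GateCalculator and the usual framework targets are linked into the binary, mirroring the test below.

#include <vector>

#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/port/logging.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/timestamp.h"
#include "mediapipe/framework/tool/sink.h"

namespace mediapipe {

// Sketch: when a 'value' packet is present at a timestamp, PRESENCE is "true"
// and the gate (empty_packets_as_allow: true) drops the 'image' packet.
void RunPresenceGatingSketch() {
  CalculatorGraphConfig config = ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
    input_stream: 'value'
    input_stream: 'image'
    node {
      calculator: 'PacketPresenceCalculator'
      input_stream: 'PACKET:value'
      output_stream: 'PRESENCE:disallow_if_present'
    }
    node {
      calculator: 'GateCalculator'
      input_stream: 'image'
      input_stream: 'DISALLOW:disallow_if_present'
      output_stream: 'image_for_processing'
      options: {
        [mediapipe.GateCalculatorOptions.ext] { empty_packets_as_allow: true }
      }
    }
  )");
  std::vector<Packet> output_packets;
  tool::AddVectorSink("image_for_processing", &config, &output_packets);

  CalculatorGraph graph;
  CHECK(graph.Initialize(config, {}).ok());
  CHECK(graph.StartRun({}).ok());
  // Both streams carry a packet at t=10, so the presence signal is "true" and
  // the image packet is expected to be suppressed by the gate.
  CHECK(graph.AddPacketToInputStream("value", MakePacket<int>(42).At(Timestamp(10))).ok());
  CHECK(graph.AddPacketToInputStream("image", MakePacket<int>(1).At(Timestamp(10))).ok());
  CHECK(graph.WaitUntilIdle().ok());
  // output_packets should still be empty at this point.
  CHECK(graph.CloseAllInputStreams().ok());
  CHECK(graph.WaitUntilDone().ok());
}

}  // namespace mediapipe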


@ -0,0 +1,85 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <functional>
#include <string>
#include <vector>
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/calculator_runner.h"
#include "mediapipe/framework/port/gmock.h"
#include "mediapipe/framework/port/gtest.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"
#include "mediapipe/framework/port/status_matchers.h"
#include "mediapipe/framework/timestamp.h"
#include "mediapipe/framework/tool/sink.h"
namespace mediapipe {
using ::testing::ElementsAre;
using ::testing::Eq;
using ::testing::Value;
namespace {
MATCHER_P2(BoolPacket, value, timestamp, "") {
return Value(arg.template Get<bool>(), Eq(value)) &&
Value(arg.Timestamp(), Eq(timestamp));
}
TEST(PacketPresenceCalculator, CorrectTimestamps) {
std::vector<Packet> output_packets;
CalculatorGraphConfig graph_config =
ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
input_stream: 'allow'
input_stream: 'value'
node {
calculator: "GateCalculator"
input_stream: 'value'
input_stream: 'ALLOW:allow'
output_stream: 'gated_value'
}
node {
calculator: 'PacketPresenceCalculator'
input_stream: 'PACKET:gated_value'
output_stream: 'PRESENCE:presence'
}
)");
tool::AddVectorSink("presence", &graph_config, &output_packets);
CalculatorGraph graph;
MP_ASSERT_OK(graph.Initialize(graph_config, {}));
MP_ASSERT_OK(graph.StartRun({}));
auto send_packet = [&graph](int value, bool allow, Timestamp timestamp) {
MP_ASSERT_OK(graph.AddPacketToInputStream(
"value", MakePacket<int>(value).At(timestamp)));
MP_ASSERT_OK(graph.AddPacketToInputStream(
"allow", MakePacket<bool>(allow).At(timestamp)));
};
send_packet(10, false, Timestamp(10));
MP_EXPECT_OK(graph.WaitUntilIdle());
EXPECT_THAT(output_packets, ElementsAre(BoolPacket(false, Timestamp(10))));
output_packets.clear();
send_packet(20, true, Timestamp(11));
MP_EXPECT_OK(graph.WaitUntilIdle());
EXPECT_THAT(output_packets, ElementsAre(BoolPacket(true, Timestamp(11))));
MP_EXPECT_OK(graph.CloseAllInputStreams());
MP_EXPECT_OK(graph.WaitUntilDone());
}
} // namespace
} // namespace mediapipe


@ -201,7 +201,7 @@ int GetXnnpackNumThreads(
// Input tensors are assumed to be of the correct size and already normalized. // Input tensors are assumed to be of the correct size and already normalized.
// All output TfLiteTensors will be destroyed when the graph closes, // All output TfLiteTensors will be destroyed when the graph closes,
// (i.e. after calling graph.WaitUntilDone()). // (i.e. after calling graph.WaitUntilDone()).
// GPU tensors are currently only supported on Android and iOS. // GPU tensor support requires OpenGL ES 3.1+.
// This calculator uses FixedSizeInputStreamHandler by default. // This calculator uses FixedSizeInputStreamHandler by default.
// //
class TfLiteInferenceCalculator : public CalculatorBase { class TfLiteInferenceCalculator : public CalculatorBase {


@ -20,6 +20,24 @@ package(default_visibility = ["//visibility:public"])
exports_files(["LICENSE"]) exports_files(["LICENSE"])
cc_library(
name = "alignment_points_to_rects_calculator",
srcs = ["alignment_points_to_rects_calculator.cc"],
visibility = ["//visibility:public"],
deps = [
"//mediapipe/calculators/util:detections_to_rects_calculator",
"//mediapipe/calculators/util:detections_to_rects_calculator_cc_proto",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework:calculator_options_cc_proto",
"//mediapipe/framework/formats:detection_cc_proto",
"//mediapipe/framework/formats:location_data_cc_proto",
"//mediapipe/framework/formats:rect_cc_proto",
"//mediapipe/framework/port:ret_check",
"//mediapipe/framework/port:status",
],
alwayslink = 1,
)
proto_library( proto_library(
name = "annotation_overlay_calculator_proto", name = "annotation_overlay_calculator_proto",
srcs = ["annotation_overlay_calculator.proto"], srcs = ["annotation_overlay_calculator.proto"],
@ -586,6 +604,15 @@ proto_library(
], ],
) )
proto_library(
name = "rect_to_render_scale_calculator_proto",
srcs = ["rect_to_render_scale_calculator.proto"],
visibility = ["//visibility:public"],
deps = [
"//mediapipe/framework:calculator_proto",
],
)
proto_library( proto_library(
name = "detections_to_render_data_calculator_proto", name = "detections_to_render_data_calculator_proto",
srcs = ["detections_to_render_data_calculator.proto"], srcs = ["detections_to_render_data_calculator.proto"],
@ -700,7 +727,15 @@ mediapipe_cc_proto_library(
deps = [":rect_to_render_data_calculator_proto"], deps = [":rect_to_render_data_calculator_proto"],
) )
# TODO: What is that one for? mediapipe_cc_proto_library(
name = "rect_to_render_scale_calculator_cc_proto",
srcs = ["rect_to_render_scale_calculator.proto"],
cc_deps = [
"//mediapipe/framework:calculator_cc_proto",
],
visibility = ["//visibility:public"],
deps = [":rect_to_render_scale_calculator_proto"],
)
mediapipe_cc_proto_library( mediapipe_cc_proto_library(
name = "detections_to_render_data_calculator_cc_proto", name = "detections_to_render_data_calculator_cc_proto",
@ -830,6 +865,19 @@ cc_library(
alwayslink = 1, alwayslink = 1,
) )
cc_library(
name = "rect_to_render_scale_calculator",
srcs = ["rect_to_render_scale_calculator.cc"],
visibility = ["//visibility:public"],
deps = [
":rect_to_render_scale_calculator_cc_proto",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework/formats:rect_cc_proto",
"//mediapipe/framework/port:ret_check",
],
alwayslink = 1,
)
cc_test( cc_test(
name = "detections_to_render_data_calculator_test", name = "detections_to_render_data_calculator_test",
size = "small", size = "small",


@ -0,0 +1,102 @@
#include <cmath>
#include "mediapipe/calculators/util/detections_to_rects_calculator.h"
#include "mediapipe/calculators/util/detections_to_rects_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/calculator_options.pb.h"
#include "mediapipe/framework/formats/detection.pb.h"
#include "mediapipe/framework/formats/location_data.pb.h"
#include "mediapipe/framework/formats/rect.pb.h"
#include "mediapipe/framework/port/ret_check.h"
#include "mediapipe/framework/port/status.h"
namespace mediapipe {
namespace {} // namespace
// A calculator that converts Detection with two alignment points to Rect.
//
// Detection should contain two points:
// * Center point - center of the crop
// * Scale point - vector from center to scale point defines size and rotation
//    of the Rect. Note that the Y coordinate of this vector is flipped before
//    computing the rotation (because the image Y axis points downwards), so
//    define the target rotation vector accordingly.
//
// Example config:
// node {
// calculator: "AlignmentPointsRectsCalculator"
// input_stream: "DETECTIONS:detections"
// input_stream: "IMAGE_SIZE:image_size"
// output_stream: "NORM_RECT:rect"
// options: {
// [mediapipe.DetectionsToRectsCalculatorOptions.ext] {
// rotation_vector_start_keypoint_index: 0
// rotation_vector_end_keypoint_index: 1
// rotation_vector_target_angle_degrees: 90
// output_zero_rect_for_empty_detections: true
// }
// }
// }
class AlignmentPointsRectsCalculator : public DetectionsToRectsCalculator {
public:
::mediapipe::Status Open(CalculatorContext* cc) override {
RET_CHECK_OK(DetectionsToRectsCalculator::Open(cc));
// Make sure that start and end keypoints are provided.
    // They are required for the rect size calculation and will also force the
    // base calculator to compute rotation.
options_ = cc->Options<DetectionsToRectsCalculatorOptions>();
RET_CHECK(options_.has_rotation_vector_start_keypoint_index())
<< "Start keypoint is required to calculate rect size and rotation";
RET_CHECK(options_.has_rotation_vector_end_keypoint_index())
<< "End keypoint is required to calculate rect size and rotation";
return ::mediapipe::OkStatus();
}
private:
::mediapipe::Status DetectionToNormalizedRect(
const ::mediapipe::Detection& detection,
const DetectionSpec& detection_spec,
::mediapipe::NormalizedRect* rect) override;
};
REGISTER_CALCULATOR(AlignmentPointsRectsCalculator);
::mediapipe::Status AlignmentPointsRectsCalculator::DetectionToNormalizedRect(
const Detection& detection, const DetectionSpec& detection_spec,
NormalizedRect* rect) {
const auto& location_data = detection.location_data();
const auto& image_size = detection_spec.image_size;
RET_CHECK(image_size) << "Image size is required to calculate the rect";
const float x_center =
location_data.relative_keypoints(start_keypoint_index_).x() *
image_size->first;
const float y_center =
location_data.relative_keypoints(start_keypoint_index_).y() *
image_size->second;
const float x_scale =
location_data.relative_keypoints(end_keypoint_index_).x() *
image_size->first;
const float y_scale =
location_data.relative_keypoints(end_keypoint_index_).y() *
image_size->second;
  // Bounding box size is twice the distance from the center to the scale point.
const float box_size =
std::sqrt((x_scale - x_center) * (x_scale - x_center) +
(y_scale - y_center) * (y_scale - y_center)) *
2.0;
// Set resulting bounding box.
rect->set_x_center(x_center / image_size->first);
rect->set_y_center(y_center / image_size->second);
rect->set_width(box_size / image_size->first);
rect->set_height(box_size / image_size->second);
return ::mediapipe::OkStatus();
}
} // namespace mediapipe
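
A quick numeric illustration of the box-size math above (not part of this commit; the image size and keypoints are made-up values): with a 640x480 image, a center keypoint at (0.5, 0.5) and a scale keypoint at (0.6, 0.5), the distance from center to scale point is 64 px, so the box is 128 px square, i.e. normalized width 0.2 and height ~0.267. The standalone sketch below replays that computation.

#include <cmath>
#include <cstdio>

// Standalone sketch of the AlignmentPointsRectsCalculator box-size math with
// assumed inputs: image 640x480, center keypoint (0.5, 0.5), scale keypoint
// (0.6, 0.5).
int main() {
  const float image_width = 640.0f, image_height = 480.0f;
  const float x_center = 0.5f * image_width;   // 320
  const float y_center = 0.5f * image_height;  // 240
  const float x_scale = 0.6f * image_width;    // 384
  const float y_scale = 0.5f * image_height;   // 240
  // Box size is twice the distance from the center to the scale point.
  const float box_size =
      2.0f * std::sqrt((x_scale - x_center) * (x_scale - x_center) +
                       (y_scale - y_center) * (y_scale - y_center));  // 128
  // The rect is square in pixels but not in normalized coordinates.
  std::printf("width=%.3f height=%.3f\n", box_size / image_width,   // 0.200
              box_size / image_height);                             // 0.267
  return 0;
}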


@ -0,0 +1,111 @@
#include "mediapipe/calculators/util/rect_to_render_scale_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/rect.pb.h"
namespace mediapipe {
namespace {
constexpr char kNormRectTag[] = "NORM_RECT";
constexpr char kImageSizeTag[] = "IMAGE_SIZE";
constexpr char kRenderScaleTag[] = "RENDER_SCALE";
} // namespace
// A calculator to get scale for RenderData primitives.
//
// This calculator lets the size of RenderData primitives (configured via
// `thickness`) depend on the actual size of the object they should highlight
// (e.g. pose, hand or face). It gives you bigger rendered primitives for
// bigger/closer objects and smaller primitives for smaller/farther objects.
//
// IMPORTANT NOTE: RenderData primitives are rendered via OpenCV, which accepts
// only integer thickness. So as the object moves farther away or closer, you
// will see 1-pixel jumps.
//
// Check `mediapipe/util/render_data.proto` for details on
// RenderData primitives and `thickness` parameter.
//
// Inputs:
//   NORM_RECT: Normalized rectangle to compute the object size from, taken as
//              the maximum of its width and height.
//   IMAGE_SIZE: A std::pair<int, int> representation of image width and height,
//               used to transform the normalized object width and height to
//               absolute pixel coordinates.
//
// Outputs:
// RENDER_SCALE: Float value that should be used to scale RenderData
// primitives calculated as `rect_size * multiplier`.
//
// Example config:
// node {
// calculator: "RectToRenderScaleCalculator"
// input_stream: "NORM_RECT:pose_landmarks_rect"
// input_stream: "IMAGE_SIZE:image_size"
// output_stream: "RENDER_SCALE:render_scale"
// options: {
// [mediapipe.RectToRenderScaleCalculatorOptions.ext] {
// multiplier: 0.001
// }
// }
// }
class RectToRenderScaleCalculator : public CalculatorBase {
public:
static ::mediapipe::Status GetContract(CalculatorContract* cc);
::mediapipe::Status Open(CalculatorContext* cc) override;
::mediapipe::Status Process(CalculatorContext* cc) override;
private:
RectToRenderScaleCalculatorOptions options_;
};
REGISTER_CALCULATOR(RectToRenderScaleCalculator);
::mediapipe::Status RectToRenderScaleCalculator::GetContract(
CalculatorContract* cc) {
cc->Inputs().Tag(kNormRectTag).Set<NormalizedRect>();
cc->Inputs().Tag(kImageSizeTag).Set<std::pair<int, int>>();
cc->Outputs().Tag(kRenderScaleTag).Set<float>();
return ::mediapipe::OkStatus();
}
::mediapipe::Status RectToRenderScaleCalculator::Open(CalculatorContext* cc) {
cc->SetOffset(TimestampDiff(0));
options_ = cc->Options<RectToRenderScaleCalculatorOptions>();
return ::mediapipe::OkStatus();
}
::mediapipe::Status RectToRenderScaleCalculator::Process(
CalculatorContext* cc) {
if (cc->Inputs().Tag(kNormRectTag).IsEmpty()) {
cc->Outputs()
.Tag(kRenderScaleTag)
.AddPacket(
MakePacket<float>(options_.multiplier()).At(cc->InputTimestamp()));
return ::mediapipe::OkStatus();
}
// Get image size.
int image_width;
int image_height;
std::tie(image_width, image_height) =
cc->Inputs().Tag(kImageSizeTag).Get<std::pair<int, int>>();
// Get rect size in absolute pixel coordinates.
const auto& rect = cc->Inputs().Tag(kNormRectTag).Get<NormalizedRect>();
const float rect_width = rect.width() * image_width;
const float rect_height = rect.height() * image_height;
// Calculate render scale.
const float rect_size = std::max(rect_width, rect_height);
const float render_scale = rect_size * options_.multiplier();
cc->Outputs()
.Tag(kRenderScaleTag)
.AddPacket(MakePacket<float>(render_scale).At(cc->InputTimestamp()));
return ::mediapipe::OkStatus();
}
} // namespace mediapipe


@ -0,0 +1,18 @@
syntax = "proto2";
package mediapipe;
import "mediapipe/framework/calculator.proto";
message RectToRenderScaleCalculatorOptions {
extend CalculatorOptions {
optional RectToRenderScaleCalculatorOptions ext = 299463409;
}
// Multiplier to apply to the rect size.
  // If `thickness` for the RenderData primitives was defined for an object
  // (e.g. pose, hand or face) of size `A`, then the multiplier should be `1/A`.
  // This means that when the actual object size on the image is `B`, all
  // RenderData primitives will be scaled by a factor of `B/A`.
optional float multiplier = 1 [default = 0.01];
}
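
A hedged numeric example of the `1/A` convention above (the numbers are illustrative, not from this commit): if the `thickness` values in the RenderData were tuned for a pose that spans about 200 px, set `multiplier` to 1/200 = 0.005; when the tracked rect later spans 300 px, the calculator emits render_scale = 300 * 0.005 = 1.5, i.e. exactly `B/A`.

#include <cstdio>

// Illustrative only: RenderData thickness was tuned for an object of
// size A = 200 px, so multiplier = 1 / A.
int main() {
  const float multiplier = 1.0f / 200.0f;             // 0.005
  const float rect_size = 300.0f;                     // current object size B, px
  const float render_scale = rect_size * multiplier;  // 1.5 == B / A
  std::printf("render_scale=%.2f\n", render_scale);
  return 0;
}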


@ -197,90 +197,88 @@ REGISTER_CALCULATOR(TrackedDetectionManagerCalculator);
::mediapipe::Status TrackedDetectionManagerCalculator::Process( ::mediapipe::Status TrackedDetectionManagerCalculator::Process(
CalculatorContext* cc) { CalculatorContext* cc) {
if (cc->Inputs().HasTag("TRACKING_BOXES")) { if (cc->Inputs().HasTag(kTrackingBoxesTag) &&
if (!cc->Inputs().Tag("TRACKING_BOXES").IsEmpty()) { !cc->Inputs().Tag(kTrackingBoxesTag).IsEmpty()) {
const TimedBoxProtoList& tracked_boxes = const TimedBoxProtoList& tracked_boxes =
cc->Inputs().Tag("TRACKING_BOXES").Get<TimedBoxProtoList>(); cc->Inputs().Tag(kTrackingBoxesTag).Get<TimedBoxProtoList>();
// Collect all detections that are removed. // Collect all detections that are removed.
auto removed_detection_ids = absl::make_unique<std::vector<int>>(); auto removed_detection_ids = absl::make_unique<std::vector<int>>();
for (const TimedBoxProto& tracked_box : tracked_boxes.box()) { for (const TimedBoxProto& tracked_box : tracked_boxes.box()) {
NormalizedRect bounding_box; NormalizedRect bounding_box;
bounding_box.set_x_center((tracked_box.left() + tracked_box.right()) / bounding_box.set_x_center((tracked_box.left() + tracked_box.right()) /
2.f); 2.f);
bounding_box.set_y_center((tracked_box.bottom() + tracked_box.top()) / bounding_box.set_y_center((tracked_box.bottom() + tracked_box.top()) /
2.f); 2.f);
bounding_box.set_height(tracked_box.bottom() - tracked_box.top()); bounding_box.set_height(tracked_box.bottom() - tracked_box.top());
bounding_box.set_width(tracked_box.right() - tracked_box.left()); bounding_box.set_width(tracked_box.right() - tracked_box.left());
bounding_box.set_rotation(tracked_box.rotation()); bounding_box.set_rotation(tracked_box.rotation());
// First check if this box updates a detection that's waiting for // First check if this box updates a detection that's waiting for
// update from the tracker. // update from the tracker.
auto waiting_for_update_detectoin_ptr = auto waiting_for_update_detectoin_ptr =
waiting_for_update_detections_.find(tracked_box.id()); waiting_for_update_detections_.find(tracked_box.id());
if (waiting_for_update_detectoin_ptr != if (waiting_for_update_detectoin_ptr !=
waiting_for_update_detections_.end()) { waiting_for_update_detections_.end()) {
// Add the detection and remove duplicated detections. // Add the detection and remove duplicated detections.
auto removed_ids = tracked_detection_manager_.AddDetection( auto removed_ids = tracked_detection_manager_.AddDetection(
std::move(waiting_for_update_detectoin_ptr->second)); std::move(waiting_for_update_detectoin_ptr->second));
MoveIds(removed_detection_ids.get(), std::move(removed_ids));
waiting_for_update_detections_.erase(
waiting_for_update_detectoin_ptr);
}
auto removed_ids = tracked_detection_manager_.UpdateDetectionLocation(
tracked_box.id(), bounding_box, tracked_box.time_msec());
MoveIds(removed_detection_ids.get(), std::move(removed_ids)); MoveIds(removed_detection_ids.get(), std::move(removed_ids));
waiting_for_update_detections_.erase(waiting_for_update_detectoin_ptr);
} }
// TODO: Should be handled automatically in detection manager. auto removed_ids = tracked_detection_manager_.UpdateDetectionLocation(
auto removed_ids = tracked_detection_manager_.RemoveObsoleteDetections( tracked_box.id(), bounding_box, tracked_box.time_msec());
GetInputTimestampMs(cc) - kDetectionUpdateTimeOutMS);
MoveIds(removed_detection_ids.get(), std::move(removed_ids)); MoveIds(removed_detection_ids.get(), std::move(removed_ids));
}
// TODO: Should be handled automatically in detection manager.
auto removed_ids = tracked_detection_manager_.RemoveObsoleteDetections(
GetInputTimestampMs(cc) - kDetectionUpdateTimeOutMS);
MoveIds(removed_detection_ids.get(), std::move(removed_ids));
// TODO: Should be handled automatically in detection manager. // TODO: Should be handled automatically in detection manager.
removed_ids = tracked_detection_manager_.RemoveOutOfViewDetections(); removed_ids = tracked_detection_manager_.RemoveOutOfViewDetections();
MoveIds(removed_detection_ids.get(), std::move(removed_ids)); MoveIds(removed_detection_ids.get(), std::move(removed_ids));
if (!removed_detection_ids->empty() && if (!removed_detection_ids->empty() &&
cc->Outputs().HasTag(kCancelObjectIdTag)) { cc->Outputs().HasTag(kCancelObjectIdTag)) {
auto timestamp = cc->InputTimestamp(); auto timestamp = cc->InputTimestamp();
for (int box_id : *removed_detection_ids) { for (int box_id : *removed_detection_ids) {
// The timestamp is incremented (by 1 us) because currently the box // The timestamp is incremented (by 1 us) because currently the box
// tracker calculator only accepts one cancel object ID for any given // tracker calculator only accepts one cancel object ID for any given
// timestamp. // timestamp.
cc->Outputs()
.Tag(kCancelObjectIdTag)
.AddPacket(mediapipe::MakePacket<int>(box_id).At(timestamp++));
}
}
// Output detections and corresponding bounding boxes.
const auto& all_detections =
tracked_detection_manager_.GetAllTrackedDetections();
auto output_detections = absl::make_unique<std::vector<Detection>>();
auto output_boxes = absl::make_unique<std::vector<NormalizedRect>>();
for (const auto& detection_ptr : all_detections) {
const auto& detection = *detection_ptr.second;
// Only output detections that are synced.
if (detection.last_updated_timestamp() <
cc->InputTimestamp().Microseconds() / 1000) {
continue;
}
output_detections->emplace_back(
GetAxisAlignedDetectionFromTrackedDetection(detection));
output_boxes->emplace_back(detection.bounding_box());
}
if (cc->Outputs().HasTag(kDetectionsTag)) {
cc->Outputs() cc->Outputs()
.Tag(kDetectionsTag) .Tag(kCancelObjectIdTag)
.Add(output_detections.release(), cc->InputTimestamp()); .AddPacket(mediapipe::MakePacket<int>(box_id).At(timestamp++));
} }
}
if (cc->Outputs().HasTag(kDetectionBoxesTag)) { // Output detections and corresponding bounding boxes.
cc->Outputs() const auto& all_detections =
.Tag(kDetectionBoxesTag) tracked_detection_manager_.GetAllTrackedDetections();
.Add(output_boxes.release(), cc->InputTimestamp()); auto output_detections = absl::make_unique<std::vector<Detection>>();
auto output_boxes = absl::make_unique<std::vector<NormalizedRect>>();
for (const auto& detection_ptr : all_detections) {
const auto& detection = *detection_ptr.second;
// Only output detections that are synced.
if (detection.last_updated_timestamp() <
cc->InputTimestamp().Microseconds() / 1000) {
continue;
} }
output_detections->emplace_back(
GetAxisAlignedDetectionFromTrackedDetection(detection));
output_boxes->emplace_back(detection.bounding_box());
}
if (cc->Outputs().HasTag(kDetectionsTag)) {
cc->Outputs()
.Tag(kDetectionsTag)
.Add(output_detections.release(), cc->InputTimestamp());
}
if (cc->Outputs().HasTag(kDetectionBoxesTag)) {
cc->Outputs()
.Tag(kDetectionBoxesTag)
.Add(output_boxes.release(), cc->InputTimestamp());
} }
} }


@ -0,0 +1,62 @@
# Copyright 2019 The MediaPipe Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
licenses(["notice"]) # Apache 2.0
package(default_visibility = ["//visibility:private"])
cc_binary(
name = "libmediapipe_jni.so",
linkshared = 1,
linkstatic = 1,
deps = [
"//mediapipe/graphs/pose_tracking:upper_body_pose_tracking_gpu_deps",
"//mediapipe/java/com/google/mediapipe/framework/jni:mediapipe_framework_jni",
],
)
cc_library(
name = "mediapipe_jni_lib",
srcs = [":libmediapipe_jni.so"],
alwayslink = 1,
)
android_binary(
name = "upperbodyposetrackinggpu",
srcs = glob(["*.java"]),
assets = [
"//mediapipe/graphs/pose_tracking:upper_body_pose_tracking_gpu.binarypb",
"//mediapipe/modules/pose_landmark:pose_landmark_upper_body.tflite",
"//mediapipe/modules/pose_detection:pose_detection.tflite",
],
assets_dir = "",
manifest = "//mediapipe/examples/android/src/java/com/google/mediapipe/apps/basic:AndroidManifest.xml",
manifest_values = {
"applicationId": "com.google.mediapipe.apps.upperbodyposetrackinggpu",
"appName": "Upper Body Pose Tracking",
"mainActivity": ".MainActivity",
"cameraFacingFront": "False",
"binaryGraphName": "upper_body_pose_tracking_gpu.binarypb",
"inputVideoStreamName": "input_video",
"outputVideoStreamName": "output_video",
"flipFramesVertically": "True",
},
multidex = "native",
deps = [
":mediapipe_jni_lib",
"//mediapipe/examples/android/src/java/com/google/mediapipe/apps/basic:basic_lib",
"//mediapipe/framework/formats:landmark_java_proto_lite",
"//mediapipe/java/com/google/mediapipe/framework:android_framework",
],
)


@ -0,0 +1,75 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.mediapipe.apps.upperbodyposetrackinggpu;
import android.os.Bundle;
import android.util.Log;
import com.google.mediapipe.formats.proto.LandmarkProto.NormalizedLandmark;
import com.google.mediapipe.formats.proto.LandmarkProto.NormalizedLandmarkList;
import com.google.mediapipe.framework.PacketGetter;
import com.google.protobuf.InvalidProtocolBufferException;
/** Main activity of MediaPipe upper-body pose tracking app. */
public class MainActivity extends com.google.mediapipe.apps.basic.MainActivity {
private static final String TAG = "MainActivity";
private static final String OUTPUT_LANDMARKS_STREAM_NAME = "pose_landmarks";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// To show verbose logging, run:
// adb shell setprop log.tag.MainActivity VERBOSE
if (Log.isLoggable(TAG, Log.VERBOSE)) {
processor.addPacketCallback(
OUTPUT_LANDMARKS_STREAM_NAME,
(packet) -> {
Log.v(TAG, "Received pose landmarks packet.");
try {
NormalizedLandmarkList poseLandmarks =
PacketGetter.getProto(packet, NormalizedLandmarkList.class);
Log.v(
TAG,
"[TS:"
+ packet.getTimestamp()
+ "] "
+ getPoseLandmarksDebugString(poseLandmarks));
} catch (InvalidProtocolBufferException exception) {
Log.e(TAG, "Failed to get proto.", exception);
}
});
}
}
private static String getPoseLandmarksDebugString(NormalizedLandmarkList poseLandmarks) {
String poseLandmarkStr = "Pose landmarks: " + poseLandmarks.getLandmarkCount() + "\n";
int landmarkIndex = 0;
for (NormalizedLandmark landmark : poseLandmarks.getLandmarkList()) {
poseLandmarkStr +=
"\tLandmark ["
+ landmarkIndex
+ "]: ("
+ landmark.getX()
+ ", "
+ landmark.getY()
+ ", "
+ landmark.getZ()
+ ")\n";
++landmarkIndex;
}
return poseLandmarkStr;
}
}


@ -0,0 +1,34 @@
# Copyright 2020 The MediaPipe Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
licenses(["notice"]) # Apache 2.0
package(default_visibility = ["//mediapipe/examples:__subpackages__"])
cc_binary(
name = "upper_body_pose_tracking_cpu",
deps = [
"//mediapipe/examples/desktop:demo_run_graph_main",
"//mediapipe/graphs/pose_tracking:upper_body_pose_tracking_cpu_deps",
],
)
# Linux only
cc_binary(
name = "upper_body_pose_tracking_gpu",
deps = [
"//mediapipe/examples/desktop:demo_run_graph_main_gpu",
"//mediapipe/graphs/pose_tracking:upper_body_pose_tracking_gpu_deps",
],
)


@ -14,6 +14,8 @@
#import "AppDelegate.h" #import "AppDelegate.h"
#import "mediapipe/examples/ios/common/CommonViewController.h"
@interface AppDelegate () @interface AppDelegate ()
@end @end

Binary files not shown (6 new image assets: 396 B, 686 B, 855 B, 1.0 KiB, 1.7 KiB, 1.2 KiB).


@ -23,21 +23,25 @@
{ {
"idiom" : "iphone", "idiom" : "iphone",
"size" : "40x40", "size" : "40x40",
"filename" : "40_c_2x.png",
"scale" : "2x" "scale" : "2x"
}, },
{ {
"idiom" : "iphone", "idiom" : "iphone",
"size" : "40x40", "size" : "40x40",
"filename" : "40_c_3x.png",
"scale" : "3x" "scale" : "3x"
}, },
{ {
"idiom" : "iphone", "idiom" : "iphone",
"size" : "60x60", "size" : "60x60",
"filename" : "60_c_iphone_2x.png",
"scale" : "2x" "scale" : "2x"
}, },
{ {
"idiom" : "iphone", "idiom" : "iphone",
"size" : "60x60", "size" : "60x60",
"filename" : "60_c_iphone_3x.png",
"scale" : "3x" "scale" : "3x"
}, },
{ {
@ -63,6 +67,7 @@
{ {
"idiom" : "ipad", "idiom" : "ipad",
"size" : "40x40", "size" : "40x40",
"filename" : "40_c_1x.png",
"scale" : "1x" "scale" : "1x"
}, },
{ {
@ -73,6 +78,7 @@
{ {
"idiom" : "ipad", "idiom" : "ipad",
"size" : "76x76", "size" : "76x76",
"filename" : "76_c_Ipad.png",
"scale" : "1x" "scale" : "1x"
}, },
{ {


@ -0,0 +1,52 @@
# Copyright 2019 The MediaPipe Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
licenses(["notice"]) # Apache 2.0
objc_library(
name = "CommonMediaPipeAppLibrary",
srcs = [
"AppDelegate.mm",
"CommonViewController.mm",
"main.m",
],
hdrs = [
"AppDelegate.h",
"CommonViewController.h",
],
data = [
"Base.lproj/LaunchScreen.storyboard",
"Base.lproj/Main.storyboard",
],
sdk_frameworks = [
"AVFoundation",
"CoreGraphics",
"CoreMedia",
"UIKit",
],
visibility = ["//mediapipe:__subpackages__"],
deps = [
"//mediapipe/objc:mediapipe_framework_ios",
"//mediapipe/objc:mediapipe_input_sources_ios",
"//mediapipe/objc:mediapipe_layer_renderer",
],
)
exports_files(["Info.plist"])
filegroup(
name = "AppIcon",
srcs = glob(["Assets.xcassets/AppIcon.appiconset/*"]),
visibility = ["//mediapipe:__subpackages__"],
)


@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?> <?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097.2" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r"> <document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r">
<device id="retina4_7" orientation="portrait" appearance="light"/> <device id="retina4_7" orientation="portrait" appearance="light"/>
<dependencies> <dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/> <plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/>
@ -7,10 +7,10 @@
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/> <capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies> </dependencies>
<scenes> <scenes>
<!--View Controller--> <!--Common View Controller-->
<scene sceneID="tne-QT-ifu"> <scene sceneID="tne-QT-ifu">
<objects> <objects>
<viewController id="BYZ-38-t0r" customClass="ViewController" sceneMemberID="viewController"> <viewController id="BYZ-38-t0r" customClass="CommonViewController" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC"> <view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/> <rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/> <autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
@ -37,8 +37,8 @@
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/> <viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view> </view>
<connections> <connections>
<outlet property="_liveView" destination="EfB-xq-knP" id="JQp-2n-q9q"/> <outlet property="liveView" destination="8bC-Xf-vdC" id="3qM-tM-inb"/>
<outlet property="_noCameraLabel" destination="emf-N5-sEd" id="91G-3Z-cU3"/> <outlet property="noCameraLabel" destination="emf-N5-sEd" id="TUU-KL-fTU"/>
</connections> </connections>
</viewController> </viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/> <placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/>


@ -0,0 +1,63 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
#import "mediapipe/objc/MPPPlayerInputSource.h"
typedef NS_ENUM(NSInteger, MediaPipeDemoSourceMode) {
MediaPipeDemoSourceCamera,
MediaPipeDemoSourceVideo
};
@interface CommonViewController : UIViewController <MPPGraphDelegate, MPPInputSourceDelegate>
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in
// viewWillAppear: and sent video frames on videoQueue.
@property(nonatomic) MPPGraph* mediapipeGraph;
// Handles camera access via AVCaptureSession library.
@property(nonatomic) MPPCameraInputSource* cameraSource;
// Provides data from a video.
@property(nonatomic) MPPPlayerInputSource* videoSource;
// The data source for the demo.
@property(nonatomic) MediaPipeDemoSourceMode sourceMode;
// Inform the user when camera is unavailable.
@property(nonatomic) IBOutlet UILabel* noCameraLabel;
// Display the camera preview frames.
@property(strong, nonatomic) IBOutlet UIView* liveView;
// Render frames in a layer.
@property(nonatomic) MPPLayerRenderer* renderer;
// Process camera frames on this queue.
@property(nonatomic) dispatch_queue_t videoQueue;
// Graph name.
@property(nonatomic) NSString* graphName;
// Graph input stream.
@property(nonatomic) const char* graphInputStream;
// Graph output stream.
@property(nonatomic) const char* graphOutputStream;
@end


@ -12,46 +12,24 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
#import "ViewController.h" #import "CommonViewController.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
#import "mediapipe/objc/MPPPlayerInputSource.h"
static NSString* const kGraphName = @"mobile_gpu";
static const char* kInputStream = "input_video";
static const char* kOutputStream = "output_video";
static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue"; static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
@interface ViewController () <MPPGraphDelegate, MPPInputSourceDelegate> @implementation CommonViewController
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in viewWillAppear: and // This provides a hook to replace the basic ViewController with a subclass when it's created from a
// sent video frames on _videoQueue. // storyboard, without having to change the storyboard itself.
@property(nonatomic) MPPGraph* mediapipeGraph; + (instancetype)allocWithZone:(struct _NSZone*)zone {
NSString* subclassName = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"MainViewController"];
@end if (subclassName.length > 0) {
Class customClass = NSClassFromString(subclassName);
@implementation ViewController { Class baseClass = [CommonViewController class];
/// Handles camera access via AVCaptureSession library. NSAssert([customClass isSubclassOfClass:baseClass], @"%@ must be a subclass of %@", customClass,
MPPCameraInputSource* _cameraSource; baseClass);
MPPPlayerInputSource* _videoSource; if (self == baseClass) return [customClass allocWithZone:zone];
MediaPipeDemoSourceMode _sourceMode; }
return [super allocWithZone:zone];
/// Inform the user when camera is unavailable.
IBOutlet UILabel* _noCameraLabel;
/// Display the camera preview frames.
IBOutlet UIView* _liveView;
/// Render frames in a layer.
MPPLayerRenderer* _renderer;
/// Process camera frames on this queue.
dispatch_queue_t _videoQueue;
}
- (void)setSourceMode:(MediaPipeDemoSourceMode)mode {
_sourceMode = mode;
} }
#pragma mark - Cleanup methods #pragma mark - Cleanup methods
@ -86,7 +64,6 @@ static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
// Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object. // Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config]; MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
[newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
return newGraph; return newGraph;
} }
@ -95,19 +72,26 @@ static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
- (void)viewDidLoad { - (void)viewDidLoad {
[super viewDidLoad]; [super viewDidLoad];
_renderer = [[MPPLayerRenderer alloc] init]; self.renderer = [[MPPLayerRenderer alloc] init];
_renderer.layer.frame = _liveView.layer.bounds; self.renderer.layer.frame = self.liveView.layer.bounds;
[_liveView.layer addSublayer:_renderer.layer]; [self.liveView.layer addSublayer:self.renderer.layer];
_renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop; self.renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop;
dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class( dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class(
DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0); DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0);
_videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute); self.videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute);
self.graphName = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphName"];
self.graphInputStream =
[[[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphInputStream"] UTF8String];
self.graphOutputStream =
[[[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphOutputStream"] UTF8String];
self.mediapipeGraph = [[self class] loadGraphFromResource:self.graphName];
[self.mediapipeGraph addFrameOutputStream:self.graphOutputStream
outputPacketType:MPPPacketTypePixelBuffer];
self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
self.mediapipeGraph.delegate = self; self.mediapipeGraph.delegate = self;
// Set maxFramesInFlight to a small value to avoid memory contention for real-time processing.
self.mediapipeGraph.maxFramesInFlight = 2;
} }
// In this application, there is only one ViewController which has no navigation to other view // In this application, there is only one ViewController which has no navigation to other view
@ -119,43 +103,77 @@ static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
- (void)viewWillAppear:(BOOL)animated { - (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated]; [super viewWillAppear:animated];
switch (self.sourceMode) {
case MediaPipeDemoSourceVideo: {
NSString* videoName = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"VideoName"];
AVAsset* video = [AVAsset assetWithURL:[[NSBundle mainBundle] URLForResource:videoName
withExtension:@"mov"]];
self.videoSource = [[MPPPlayerInputSource alloc] initWithAVAsset:video];
[self.videoSource setDelegate:self queue:self.videoQueue];
dispatch_async(self.videoQueue, ^{
[self.videoSource start];
});
break;
}
case MediaPipeDemoSourceCamera: {
self.cameraSource = [[MPPCameraInputSource alloc] init];
[self.cameraSource setDelegate:self queue:self.videoQueue];
self.cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
NSString* cameraPosition =
[[NSBundle mainBundle] objectForInfoDictionaryKey:@"CameraPosition"];
if (cameraPosition.length > 0 && [cameraPosition isEqualToString:@"back"]) {
self.cameraSource.cameraPosition = AVCaptureDevicePositionBack;
} else {
self.cameraSource.cameraPosition = AVCaptureDevicePositionFront;
// When using the front camera, mirror the input for a more natural look.
_cameraSource.videoMirrored = YES;
}
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
[self.cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
[self startGraphAndCamera];
dispatch_async(dispatch_get_main_queue(), ^{
self.noCameraLabel.hidden = YES;
});
}
}];
break;
}
}
}
- (void)startGraphAndCamera {
// Start running self.mediapipeGraph. // Start running self.mediapipeGraph.
NSError* error; NSError* error;
if (![self.mediapipeGraph startWithError:&error]) { if (![self.mediapipeGraph startWithError:&error]) {
NSLog(@"Failed to start graph: %@", error); NSLog(@"Failed to start graph: %@", error);
} }
switch (_sourceMode) { // Start fetching frames from the camera.
case MediaPipeDemoSourceVideo: { dispatch_async(self.videoQueue, ^{
AVAsset* video = [self.cameraSource start];
[AVAsset assetWithURL:[[NSBundle mainBundle] URLForResource:@"object_detection" });
withExtension:@"mov"]]; }
_videoSource = [[MPPPlayerInputSource alloc] initWithAVAsset:video];
[_videoSource setDelegate:self queue:_videoQueue]; #pragma mark - MPPInputSourceDelegate methods
dispatch_async(_videoQueue, ^{
[_videoSource start]; // Must be invoked on self.videoQueue.
}); - (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
break; timestamp:(CMTime)timestamp
} fromSource:(MPPInputSource*)source {
case MediaPipeDemoSourceBackCamera: if (source != self.cameraSource && source != self.videoSource) {
_cameraSource = [[MPPCameraInputSource alloc] init]; NSLog(@"Unknown source: %@", source);
[_cameraSource setDelegate:self queue:_videoQueue]; return;
_cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
_cameraSource.cameraPosition = AVCaptureDevicePositionBack;
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
[_cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
dispatch_async(_videoQueue, ^{
[_cameraSource start];
});
dispatch_async(dispatch_get_main_queue(), ^{
_noCameraLabel.hidden = YES;
});
}
}];
break;
} }
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:self.graphInputStream
packetType:MPPPacketTypePixelBuffer];
} }
#pragma mark - MPPGraphDelegate methods #pragma mark - MPPGraphDelegate methods
@ -164,29 +182,14 @@ static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
- (void)mediapipeGraph:(MPPGraph*)graph - (void)mediapipeGraph:(MPPGraph*)graph
didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
fromStream:(const std::string&)streamName { fromStream:(const std::string&)streamName {
if (streamName == kOutputStream) { if (streamName == self.graphOutputStream) {
// Display the captured image on the screen. // Display the captured image on the screen.
CVPixelBufferRetain(pixelBuffer); CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_main_queue(), ^{ dispatch_async(dispatch_get_main_queue(), ^{
[_renderer renderPixelBuffer:pixelBuffer]; [self.renderer renderPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer); CVPixelBufferRelease(pixelBuffer);
}); });
} }
} }
#pragma mark - MPPInputSourceDelegate methods
// Must be invoked on _videoQueue.
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
timestamp:(CMTime)timestamp
fromSource:(MPPInputSource*)source {
if (source != _cameraSource && source != _videoSource) {
NSLog(@"Unknown source: %@", source);
return;
}
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:kInputStream
packetType:MPPPacketTypePixelBuffer];
}
@end @end


@ -37,7 +37,6 @@
<key>UISupportedInterfaceOrientations~ipad</key> <key>UISupportedInterfaceOrientations~ipad</key>
<array> <array>
<string>UIInterfaceOrientationPortrait</string> <string>UIInterfaceOrientationPortrait</string>
<string>UIInterfaceOrientationPortraitUpsideDown</string>
</array> </array>
</dict> </dict>
</plist> </plist>


@ -1,59 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "AppDelegate.h"
@interface AppDelegate ()
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
return YES;
}
- (void)applicationWillResignActive:(UIApplication *)application {
// Sent when the application is about to move from active to inactive state. This can occur for
// certain types of temporary interruptions (such as an incoming phone call or SMS message) or
// when the user quits the application and it begins the transition to the background state. Use
// this method to pause ongoing tasks, disable timers, and invalidate graphics rendering
// callbacks. Games should use this method to pause the game.
}
- (void)applicationDidEnterBackground:(UIApplication *)application {
// Use this method to release shared resources, save user data, invalidate timers, and store
// enough application state information to restore your application to its current state in case
// it is terminated later. If your application supports background execution, this method is
// called instead of applicationWillTerminate: when the user quits.
}
- (void)applicationWillEnterForeground:(UIApplication *)application {
// Called as part of the transition from the background to the active state; here you can undo
// many of the changes made on entering the background.
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
// Restart any tasks that were paused (or not yet started) while the application was inactive. If
// the application was previously in the background, optionally refresh the user interface.
}
- (void)applicationWillTerminate:(UIApplication *)application {
// Called when the application is about to terminate. Save data if appropriate. See also
// applicationDidEnterBackground:.
}
@end


@ -1,99 +0,0 @@
{
"images" : [
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "3x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "83.5x83.5",
"scale" : "2x"
},
{
"idiom" : "ios-marketing",
"size" : "1024x1024",
"scale" : "1x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -1,19 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@end


@ -1,176 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "ViewController.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
static NSString* const kGraphName = @"mobile_gpu";
static const char* kInputStream = "input_video";
static const char* kOutputStream = "output_video";
static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
@interface ViewController () <MPPGraphDelegate, MPPInputSourceDelegate>
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in viewWillAppear: and
// sent video frames on _videoQueue.
@property(nonatomic) MPPGraph* mediapipeGraph;
@end
@implementation ViewController {
/// Handles camera access via AVCaptureSession library.
MPPCameraInputSource* _cameraSource;
/// Inform the user when camera is unavailable.
IBOutlet UILabel* _noCameraLabel;
/// Display the camera preview frames.
IBOutlet UIView* _liveView;
/// Render frames in a layer.
MPPLayerRenderer* _renderer;
/// Process camera frames on this queue.
dispatch_queue_t _videoQueue;
}
#pragma mark - Cleanup methods
- (void)dealloc {
self.mediapipeGraph.delegate = nil;
[self.mediapipeGraph cancel];
// Ignore errors since we're cleaning up.
[self.mediapipeGraph closeAllInputStreamsWithError:nil];
[self.mediapipeGraph waitUntilDoneWithError:nil];
}
#pragma mark - MediaPipe graph methods
+ (MPPGraph*)loadGraphFromResource:(NSString*)resource {
// Load the graph config resource.
NSError* configLoadError = nil;
NSBundle* bundle = [NSBundle bundleForClass:[self class]];
if (!resource || resource.length == 0) {
return nil;
}
NSURL* graphURL = [bundle URLForResource:resource withExtension:@"binarypb"];
NSData* data = [NSData dataWithContentsOfURL:graphURL options:0 error:&configLoadError];
if (!data) {
NSLog(@"Failed to load MediaPipe graph config: %@", configLoadError);
return nil;
}
// Parse the graph config resource into mediapipe::CalculatorGraphConfig proto object.
mediapipe::CalculatorGraphConfig config;
config.ParseFromArray(data.bytes, data.length);
// Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
[newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
return newGraph;
}
#pragma mark - UIViewController methods
- (void)viewDidLoad {
[super viewDidLoad];
_renderer = [[MPPLayerRenderer alloc] init];
_renderer.layer.frame = _liveView.layer.bounds;
[_liveView.layer addSublayer:_renderer.layer];
_renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop;
dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class(
DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0);
_videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute);
_cameraSource = [[MPPCameraInputSource alloc] init];
[_cameraSource setDelegate:self queue:_videoQueue];
_cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
_cameraSource.cameraPosition = AVCaptureDevicePositionBack;
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
self.mediapipeGraph.delegate = self;
// Set maxFramesInFlight to a small value to avoid memory contention for real-time processing.
self.mediapipeGraph.maxFramesInFlight = 2;
}
// In this application, there is only one ViewController which has no navigation to other view
// controllers, and there is only one View with live display showing the result of running the
// MediaPipe graph on the live video feed. If more view controllers are needed later, the graph
// setup/teardown and camera start/stop logic should be updated appropriately in response to the
// appearance/disappearance of this ViewController, as viewWillAppear: can be invoked multiple times
// depending on the application navigation flow in that case.
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
[self startGraphAndCamera];
dispatch_async(dispatch_get_main_queue(), ^{
[_noCameraLabel setHidden:YES];
});
}
}];
}
- (void)startGraphAndCamera {
// Start running self.mediapipeGraph.
NSError* error;
if (![self.mediapipeGraph startWithError:&error]) {
NSLog(@"Failed to start graph: %@", error);
}
// Start fetching frames from the camera.
dispatch_async(_videoQueue, ^{
[_cameraSource start];
});
}
#pragma mark - MPPGraphDelegate methods
// Receives CVPixelBufferRef from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
fromStream:(const std::string&)streamName {
if (streamName == kOutputStream) {
// Display the captured image on the screen.
CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_main_queue(), ^{
[_renderer renderPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);
});
}
}
#pragma mark - MPPInputSourceDelegate methods
// Must be invoked on _videoQueue.
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
timestamp:(CMTime)timestamp
fromSource:(MPPInputSource*)source {
if (source != _cameraSource) {
NSLog(@"Unknown source: %@", source);
return;
}
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:kInputStream
packetType:MPPPacketTypePixelBuffer];
}
@end
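Editor's note: the +loadGraphFromResource: method in the file above ignores the return value of config.ParseFromArray, so a truncated or corrupted .binarypb resource would still be handed to MPPGraph. A minimal sketch of a guarded parse, relying only on the protobuf ParseFromArray bool return value; this is an illustration, not part of the example source:

// Hypothetical variant of the parsing step in +loadGraphFromResource: above.
// ParseFromArray returns false on malformed input, so the failure can be
// surfaced instead of silently building a graph from a partial config.
mediapipe::CalculatorGraphConfig config;
if (!config.ParseFromArray(data.bytes, data.length)) {
  NSLog(@"Failed to parse MediaPipe graph config resource: %@", resource);
  return nil;
}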


@ -1,59 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "AppDelegate.h"
@interface AppDelegate ()
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
return YES;
}
- (void)applicationWillResignActive:(UIApplication *)application {
// Sent when the application is about to move from active to inactive state. This can occur for
// certain types of temporary interruptions (such as an incoming phone call or SMS message) or
// when the user quits the application and it begins the transition to the background state. Use
// this method to pause ongoing tasks, disable timers, and invalidate graphics rendering
// callbacks. Games should use this method to pause the game.
}
- (void)applicationDidEnterBackground:(UIApplication *)application {
// Use this method to release shared resources, save user data, invalidate timers, and store
// enough application state information to restore your application to its current state in case
// it is terminated later. If your application supports background execution, this method is
// called instead of applicationWillTerminate: when the user quits.
}
- (void)applicationWillEnterForeground:(UIApplication *)application {
// Called as part of the transition from the background to the active state; here you can undo
// many of the changes made on entering the background.
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
// Restart any tasks that were paused (or not yet started) while the application was inactive. If
// the application was previously in the background, optionally refresh the user interface.
}
- (void)applicationWillTerminate:(UIApplication *)application {
// Called when the application is about to terminate. Save data if appropriate. See also
// applicationDidEnterBackground:.
}
@end


@ -1,7 +0,0 @@
{
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -33,12 +33,16 @@ alias(
 ios_application(
     name = "FaceDetectionCpuApp",
+    app_icons = ["//mediapipe/examples/ios/common:AppIcon"],
     bundle_id = BUNDLE_ID_PREFIX + ".FaceDetectionCpu",
     families = [
         "iphone",
         "ipad",
     ],
-    infoplists = ["Info.plist"],
+    infoplists = [
+        "//mediapipe/examples/ios/common:Info.plist",
+        "Info.plist",
+    ],
     minimum_os_version = MIN_IOS_VERSION,
     provisioning_profile = example_provisioning(),
     deps = [
@ -49,32 +53,13 @@ ios_application(
 objc_library(
     name = "FaceDetectionCpuAppLibrary",
-    srcs = [
-        "AppDelegate.m",
-        "ViewController.mm",
-        "main.m",
-    ],
-    hdrs = [
-        "AppDelegate.h",
-        "ViewController.h",
-    ],
     data = [
-        "Base.lproj/LaunchScreen.storyboard",
-        "Base.lproj/Main.storyboard",
         "//mediapipe/graphs/face_detection:mobile_cpu_binary_graph",
         "//mediapipe/models:face_detection_front.tflite",
         "//mediapipe/models:face_detection_front_labelmap.txt",
     ],
-    sdk_frameworks = [
-        "AVFoundation",
-        "CoreGraphics",
-        "CoreMedia",
-        "UIKit",
-    ],
     deps = [
-        "//mediapipe/objc:mediapipe_framework_ios",
-        "//mediapipe/objc:mediapipe_input_sources_ios",
-        "//mediapipe/objc:mediapipe_layer_renderer",
+        "//mediapipe/examples/ios/common:CommonMediaPipeAppLibrary",
     ] + select({
         "//mediapipe:ios_i386": [],
         "//mediapipe:ios_x86_64": [],


@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="13122.16" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" launchScreen="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="01J-lp-oVM">
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="13104.12"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="EHf-IW-A2E">
<objects>
<viewController id="01J-lp-oVM" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="Ze5-6b-2t3">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="iYj-Kq-Ea1" userLabel="First Responder" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="53" y="375"/>
</scene>
</scenes>
</document>


@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097.2" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r">
<device id="retina4_7" orientation="portrait" appearance="light"/>
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="tne-QT-ifu">
<objects>
<viewController id="BYZ-38-t0r" customClass="ViewController" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<view contentMode="scaleToFill" fixedFrame="YES" translatesAutoresizingMaskIntoConstraints="NO" id="EfB-xq-knP">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" fixedFrame="YES" text="Camera access needed for this demo. Please enable camera access in the Settings app." textAlignment="center" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="emf-N5-sEd">
<rect key="frame" x="57" y="258" width="260" height="151"/>
<autoresizingMask key="autoresizingMask" flexibleMinX="YES" flexibleMaxX="YES" flexibleMinY="YES" flexibleMaxY="YES"/>
<fontDescription key="fontDescription" type="system" pointSize="17"/>
<color key="textColor" white="1" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<nil key="highlightedColor"/>
</label>
</subviews>
<color key="backgroundColor" white="0.0" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<accessibility key="accessibilityConfiguration" label="PreviewDisplayView">
<bool key="isElement" value="YES"/>
</accessibility>
</view>
</subviews>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
<connections>
<outlet property="_liveView" destination="EfB-xq-knP" id="JQp-2n-q9q"/>
<outlet property="_noCameraLabel" destination="emf-N5-sEd" id="91G-3Z-cU3"/>
</connections>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="48.799999999999997" y="20.239880059970016"/>
</scene>
</scenes>
</document>


@ -2,41 +2,13 @@
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
-  <key>NSCameraUsageDescription</key>
-  <string>This app uses the camera to demonstrate live video processing.</string>
-  <key>CFBundleDevelopmentRegion</key>
-  <string>en</string>
-  <key>CFBundleExecutable</key>
-  <string>$(EXECUTABLE_NAME)</string>
-  <key>CFBundleIdentifier</key>
-  <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
-  <key>CFBundleInfoDictionaryVersion</key>
-  <string>6.0</string>
-  <key>CFBundleName</key>
-  <string>$(PRODUCT_NAME)</string>
-  <key>CFBundlePackageType</key>
-  <string>APPL</string>
-  <key>CFBundleShortVersionString</key>
-  <string>1.0</string>
-  <key>CFBundleVersion</key>
-  <string>1</string>
-  <key>LSRequiresIPhoneOS</key>
-  <true/>
-  <key>UILaunchStoryboardName</key>
-  <string>LaunchScreen</string>
-  <key>UIMainStoryboardFile</key>
-  <string>Main</string>
-  <key>UIRequiredDeviceCapabilities</key>
-  <array>
-    <string>armv7</string>
-  </array>
-  <key>UISupportedInterfaceOrientations</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
-  <key>UISupportedInterfaceOrientations~ipad</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
+  <key>CameraPosition</key>
+  <string>front</string>
+  <key>GraphOutputStream</key>
+  <string>output_video</string>
+  <key>GraphInputStream</key>
+  <string>input_video</string>
+  <key>GraphName</key>
+  <string>mobile_cpu</string>
 </dict>
 </plist>
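Editor's note: the replacement Info.plist above no longer carries the standard bundle keys (those now come from the shared //mediapipe/examples/ios/common:Info.plist); it only parameterizes the demo. A sketch of how such keys can be read at runtime, shown purely to illustrate the mechanism and not taken from the actual common example code:

// Hypothetical reader for the per-example keys declared above.
// -objectForInfoDictionaryKey: looks the value up in the app's merged Info.plist.
NSString* graphName = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphName"];
NSString* inputStream = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphInputStream"];
NSString* outputStream = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"GraphOutputStream"];
BOOL useFrontCamera = [[[NSBundle mainBundle] objectForInfoDictionaryKey:@"CameraPosition"]
                          isEqualToString:@"front"];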


@ -1,19 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@end


@ -1,178 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "ViewController.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
static NSString* const kGraphName = @"mobile_cpu";
static const char* kInputStream = "input_video";
static const char* kOutputStream = "output_video";
static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
@interface ViewController () <MPPGraphDelegate, MPPInputSourceDelegate>
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in viewWillAppear: and
// sent video frames on _videoQueue.
@property(nonatomic) MPPGraph* mediapipeGraph;
@end
@implementation ViewController {
/// Handles camera access via AVCaptureSession library.
MPPCameraInputSource* _cameraSource;
/// Inform the user when camera is unavailable.
IBOutlet UILabel* _noCameraLabel;
/// Display the camera preview frames.
IBOutlet UIView* _liveView;
/// Render frames in a layer.
MPPLayerRenderer* _renderer;
/// Process camera frames on this queue.
dispatch_queue_t _videoQueue;
}
#pragma mark - Cleanup methods
- (void)dealloc {
self.mediapipeGraph.delegate = nil;
[self.mediapipeGraph cancel];
// Ignore errors since we're cleaning up.
[self.mediapipeGraph closeAllInputStreamsWithError:nil];
[self.mediapipeGraph waitUntilDoneWithError:nil];
}
#pragma mark - MediaPipe graph methods
+ (MPPGraph*)loadGraphFromResource:(NSString*)resource {
// Load the graph config resource.
NSError* configLoadError = nil;
NSBundle* bundle = [NSBundle bundleForClass:[self class]];
if (!resource || resource.length == 0) {
return nil;
}
NSURL* graphURL = [bundle URLForResource:resource withExtension:@"binarypb"];
NSData* data = [NSData dataWithContentsOfURL:graphURL options:0 error:&configLoadError];
if (!data) {
NSLog(@"Failed to load MediaPipe graph config: %@", configLoadError);
return nil;
}
// Parse the graph config resource into mediapipe::CalculatorGraphConfig proto object.
mediapipe::CalculatorGraphConfig config;
config.ParseFromArray(data.bytes, data.length);
// Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
[newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
return newGraph;
}
#pragma mark - UIViewController methods
- (void)viewDidLoad {
[super viewDidLoad];
_renderer = [[MPPLayerRenderer alloc] init];
_renderer.layer.frame = _liveView.layer.bounds;
[_liveView.layer addSublayer:_renderer.layer];
_renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop;
dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class(
DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0);
_videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute);
_cameraSource = [[MPPCameraInputSource alloc] init];
[_cameraSource setDelegate:self queue:_videoQueue];
_cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
_cameraSource.cameraPosition = AVCaptureDevicePositionFront;
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
// When using the front camera, mirror the input for a more natural look.
_cameraSource.videoMirrored = YES;
self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
self.mediapipeGraph.delegate = self;
// Set maxFramesInFlight to a small value to avoid memory contention for real-time processing.
self.mediapipeGraph.maxFramesInFlight = 2;
}
// In this application, there is only one ViewController which has no navigation to other view
// controllers, and there is only one View with live display showing the result of running the
// MediaPipe graph on the live video feed. If more view controllers are needed later, the graph
// setup/teardown and camera start/stop logic should be updated appropriately in response to the
// appearance/disappearance of this ViewController, as viewWillAppear: can be invoked multiple times
// depending on the application navigation flow in that case.
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
[self startGraphAndCamera];
dispatch_async(dispatch_get_main_queue(), ^{
[_noCameraLabel setHidden:YES];
});
}
}];
}
- (void)startGraphAndCamera {
// Start running self.mediapipeGraph.
NSError* error;
if (![self.mediapipeGraph startWithError:&error]) {
NSLog(@"Failed to start graph: %@", error);
}
// Start fetching frames from the camera.
dispatch_async(_videoQueue, ^{
[_cameraSource start];
});
}
#pragma mark - MPPGraphDelegate methods
// Receives CVPixelBufferRef from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
fromStream:(const std::string&)streamName {
if (streamName == kOutputStream) {
// Display the captured image on the screen.
CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_main_queue(), ^{
[_renderer renderPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);
});
}
}
#pragma mark - MPPInputSourceDelegate methods
// Must be invoked on _videoQueue.
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
timestamp:(CMTime)timestamp
fromSource:(MPPInputSource*)source {
if (source != _cameraSource) {
NSLog(@"Unknown source: %@", source);
return;
}
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:kInputStream
packetType:MPPPacketTypePixelBuffer];
}
@end


@ -1,22 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
#import "AppDelegate.h"
int main(int argc, char * argv[]) {
@autoreleasepool {
return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
}
}


@ -1,59 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "AppDelegate.h"
@interface AppDelegate ()
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
return YES;
}
- (void)applicationWillResignActive:(UIApplication *)application {
// Sent when the application is about to move from active to inactive state. This can occur for
// certain types of temporary interruptions (such as an incoming phone call or SMS message) or
// when the user quits the application and it begins the transition to the background state. Use
// this method to pause ongoing tasks, disable timers, and invalidate graphics rendering
// callbacks. Games should use this method to pause the game.
}
- (void)applicationDidEnterBackground:(UIApplication *)application {
// Use this method to release shared resources, save user data, invalidate timers, and store
// enough application state information to restore your application to its current state in case
// it is terminated later. If your application supports background execution, this method is
// called instead of applicationWillTerminate: when the user quits.
}
- (void)applicationWillEnterForeground:(UIApplication *)application {
// Called as part of the transition from the background to the active state; here you can undo
// many of the changes made on entering the background.
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
// Restart any tasks that were paused (or not yet started) while the application was inactive. If
// the application was previously in the background, optionally refresh the user interface.
}
- (void)applicationWillTerminate:(UIApplication *)application {
// Called when the application is about to terminate. Save data if appropriate. See also
// applicationDidEnterBackground:.
}
@end


@ -1,99 +0,0 @@
{
"images" : [
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "3x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "83.5x83.5",
"scale" : "2x"
},
{
"idiom" : "ios-marketing",
"size" : "1024x1024",
"scale" : "1x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -1,7 +0,0 @@
{
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -33,12 +33,16 @@ alias(
 ios_application(
     name = "FaceDetectionGpuApp",
+    app_icons = ["//mediapipe/examples/ios/common:AppIcon"],
     bundle_id = BUNDLE_ID_PREFIX + ".FaceDetectionGpu",
     families = [
         "iphone",
         "ipad",
     ],
-    infoplists = ["Info.plist"],
+    infoplists = [
+        "//mediapipe/examples/ios/common:Info.plist",
+        "Info.plist",
+    ],
     minimum_os_version = MIN_IOS_VERSION,
     provisioning_profile = example_provisioning(),
     deps = [
@ -49,32 +53,13 @@ ios_application(
 objc_library(
     name = "FaceDetectionGpuAppLibrary",
-    srcs = [
-        "AppDelegate.m",
-        "ViewController.mm",
-        "main.m",
-    ],
-    hdrs = [
-        "AppDelegate.h",
-        "ViewController.h",
-    ],
     data = [
-        "Base.lproj/LaunchScreen.storyboard",
-        "Base.lproj/Main.storyboard",
         "//mediapipe/graphs/face_detection:mobile_gpu_binary_graph",
         "//mediapipe/models:face_detection_front.tflite",
         "//mediapipe/models:face_detection_front_labelmap.txt",
     ],
-    sdk_frameworks = [
-        "AVFoundation",
-        "CoreGraphics",
-        "CoreMedia",
-        "UIKit",
-    ],
     deps = [
-        "//mediapipe/objc:mediapipe_framework_ios",
-        "//mediapipe/objc:mediapipe_input_sources_ios",
-        "//mediapipe/objc:mediapipe_layer_renderer",
+        "//mediapipe/examples/ios/common:CommonMediaPipeAppLibrary",
     ] + select({
         "//mediapipe:ios_i386": [],
         "//mediapipe:ios_x86_64": [],


@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="13122.16" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" launchScreen="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="01J-lp-oVM">
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="13104.12"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="EHf-IW-A2E">
<objects>
<viewController id="01J-lp-oVM" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="Ze5-6b-2t3">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="iYj-Kq-Ea1" userLabel="First Responder" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="53" y="375"/>
</scene>
</scenes>
</document>


@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097.2" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r">
<device id="retina4_7" orientation="portrait" appearance="light"/>
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="tne-QT-ifu">
<objects>
<viewController id="BYZ-38-t0r" customClass="ViewController" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<view contentMode="scaleToFill" fixedFrame="YES" translatesAutoresizingMaskIntoConstraints="NO" id="EfB-xq-knP">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" fixedFrame="YES" text="Camera access needed for this demo. Please enable camera access in the Settings app." textAlignment="center" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="emf-N5-sEd">
<rect key="frame" x="57" y="258" width="260" height="151"/>
<autoresizingMask key="autoresizingMask" flexibleMinX="YES" flexibleMaxX="YES" flexibleMinY="YES" flexibleMaxY="YES"/>
<fontDescription key="fontDescription" type="system" pointSize="17"/>
<color key="textColor" white="1" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<nil key="highlightedColor"/>
</label>
</subviews>
<color key="backgroundColor" white="0.0" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<accessibility key="accessibilityConfiguration" label="PreviewDisplayView">
<bool key="isElement" value="YES"/>
</accessibility>
</view>
</subviews>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
<connections>
<outlet property="_liveView" destination="EfB-xq-knP" id="JQp-2n-q9q"/>
<outlet property="_noCameraLabel" destination="emf-N5-sEd" id="91G-3Z-cU3"/>
</connections>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="48.799999999999997" y="20.239880059970016"/>
</scene>
</scenes>
</document>


@ -2,41 +2,13 @@
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
-  <key>NSCameraUsageDescription</key>
-  <string>This app uses the camera to demonstrate live video processing.</string>
-  <key>CFBundleDevelopmentRegion</key>
-  <string>en</string>
-  <key>CFBundleExecutable</key>
-  <string>$(EXECUTABLE_NAME)</string>
-  <key>CFBundleIdentifier</key>
-  <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
-  <key>CFBundleInfoDictionaryVersion</key>
-  <string>6.0</string>
-  <key>CFBundleName</key>
-  <string>$(PRODUCT_NAME)</string>
-  <key>CFBundlePackageType</key>
-  <string>APPL</string>
-  <key>CFBundleShortVersionString</key>
-  <string>1.0</string>
-  <key>CFBundleVersion</key>
-  <string>1</string>
-  <key>LSRequiresIPhoneOS</key>
-  <true/>
-  <key>UILaunchStoryboardName</key>
-  <string>LaunchScreen</string>
-  <key>UIMainStoryboardFile</key>
-  <string>Main</string>
-  <key>UIRequiredDeviceCapabilities</key>
-  <array>
-    <string>armv7</string>
-  </array>
-  <key>UISupportedInterfaceOrientations</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
-  <key>UISupportedInterfaceOrientations~ipad</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
+  <key>CameraPosition</key>
+  <string>front</string>
+  <key>GraphOutputStream</key>
+  <string>output_video</string>
+  <key>GraphInputStream</key>
+  <string>input_video</string>
+  <key>GraphName</key>
+  <string>mobile_gpu</string>
 </dict>
 </plist>


@ -1,19 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@end


@ -1,178 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "ViewController.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
static NSString* const kGraphName = @"mobile_gpu";
static const char* kInputStream = "input_video";
static const char* kOutputStream = "output_video";
static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
@interface ViewController () <MPPGraphDelegate, MPPInputSourceDelegate>
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in viewWillAppear: and
// sent video frames on _videoQueue.
@property(nonatomic) MPPGraph* mediapipeGraph;
@end
@implementation ViewController {
/// Handles camera access via AVCaptureSession library.
MPPCameraInputSource* _cameraSource;
/// Inform the user when camera is unavailable.
IBOutlet UILabel* _noCameraLabel;
/// Display the camera preview frames.
IBOutlet UIView* _liveView;
/// Render frames in a layer.
MPPLayerRenderer* _renderer;
/// Process camera frames on this queue.
dispatch_queue_t _videoQueue;
}
#pragma mark - Cleanup methods
- (void)dealloc {
self.mediapipeGraph.delegate = nil;
[self.mediapipeGraph cancel];
// Ignore errors since we're cleaning up.
[self.mediapipeGraph closeAllInputStreamsWithError:nil];
[self.mediapipeGraph waitUntilDoneWithError:nil];
}
#pragma mark - MediaPipe graph methods
+ (MPPGraph*)loadGraphFromResource:(NSString*)resource {
// Load the graph config resource.
NSError* configLoadError = nil;
NSBundle* bundle = [NSBundle bundleForClass:[self class]];
if (!resource || resource.length == 0) {
return nil;
}
NSURL* graphURL = [bundle URLForResource:resource withExtension:@"binarypb"];
NSData* data = [NSData dataWithContentsOfURL:graphURL options:0 error:&configLoadError];
if (!data) {
NSLog(@"Failed to load MediaPipe graph config: %@", configLoadError);
return nil;
}
// Parse the graph config resource into mediapipe::CalculatorGraphConfig proto object.
mediapipe::CalculatorGraphConfig config;
config.ParseFromArray(data.bytes, data.length);
// Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
[newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
return newGraph;
}
#pragma mark - UIViewController methods
- (void)viewDidLoad {
[super viewDidLoad];
_renderer = [[MPPLayerRenderer alloc] init];
_renderer.layer.frame = _liveView.layer.bounds;
[_liveView.layer addSublayer:_renderer.layer];
_renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop;
dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class(
DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0);
_videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute);
_cameraSource = [[MPPCameraInputSource alloc] init];
[_cameraSource setDelegate:self queue:_videoQueue];
_cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
_cameraSource.cameraPosition = AVCaptureDevicePositionFront;
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
// When using the front camera, mirror the input for a more natural look.
_cameraSource.videoMirrored = YES;
self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
self.mediapipeGraph.delegate = self;
// Set maxFramesInFlight to a small value to avoid memory contention for real-time processing.
self.mediapipeGraph.maxFramesInFlight = 2;
}
// In this application, there is only one ViewController which has no navigation to other view
// controllers, and there is only one View with live display showing the result of running the
// MediaPipe graph on the live video feed. If more view controllers are needed later, the graph
// setup/teardown and camera start/stop logic should be updated appropriately in response to the
// appearance/disappearance of this ViewController, as viewWillAppear: can be invoked multiple times
// depending on the application navigation flow in that case.
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
[self startGraphAndCamera];
dispatch_async(dispatch_get_main_queue(), ^{
[_noCameraLabel setHidden:YES];
});
}
}];
}
- (void)startGraphAndCamera {
// Start running self.mediapipeGraph.
NSError* error;
if (![self.mediapipeGraph startWithError:&error]) {
NSLog(@"Failed to start graph: %@", error);
}
// Start fetching frames from the camera.
dispatch_async(_videoQueue, ^{
[_cameraSource start];
});
}
#pragma mark - MPPGraphDelegate methods
// Receives CVPixelBufferRef from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
fromStream:(const std::string&)streamName {
if (streamName == kOutputStream) {
// Display the captured image on the screen.
CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_main_queue(), ^{
[_renderer renderPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);
});
}
}
#pragma mark - MPPInputSourceDelegate methods
// Must be invoked on _videoQueue.
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
timestamp:(CMTime)timestamp
fromSource:(MPPInputSource*)source {
if (source != _cameraSource) {
NSLog(@"Unknown source: %@", source);
return;
}
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:kInputStream
packetType:MPPPacketTypePixelBuffer];
}
@end
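Editor's note: with maxFramesInFlight capped at 2 in the controller above, frames that arrive while the graph is saturated are not enqueued. A small sketch of how the send in -processVideoFrame:timestamp:fromSource: could log that case, under the assumption that sendPixelBuffer:intoStream:packetType: reports acceptance via its BOOL return value (check mediapipe/objc/MPPGraph.h before relying on this):

// Assumes the BOOL return value indicates whether the frame was accepted.
BOOL sent = [self.mediapipeGraph sendPixelBuffer:imageBuffer
                                      intoStream:kInputStream
                                      packetType:MPPPacketTypePixelBuffer];
if (!sent) {
  NSLog(@"Frame dropped: the graph already has maxFramesInFlight frames in flight.");
}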


@ -1,22 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
#import "AppDelegate.h"
int main(int argc, char * argv[]) {
@autoreleasepool {
return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
}
}


@ -1,99 +0,0 @@
{
"images" : [
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "3x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "83.5x83.5",
"scale" : "2x"
},
{
"idiom" : "ios-marketing",
"size" : "1024x1024",
"scale" : "1x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -1,7 +0,0 @@
{
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -33,12 +33,16 @@ alias(
 ios_application(
     name = "FaceMeshGpuApp",
+    app_icons = ["//mediapipe/examples/ios/common:AppIcon"],
     bundle_id = BUNDLE_ID_PREFIX + ".FaceMeshGpu",
     families = [
         "iphone",
         "ipad",
     ],
-    infoplists = ["Info.plist"],
+    infoplists = [
+        "//mediapipe/examples/ios/common:Info.plist",
+        "Info.plist",
+    ],
     minimum_os_version = MIN_IOS_VERSION,
     provisioning_profile = example_provisioning(),
     deps = [
@ -50,31 +54,18 @@ ios_application(
 objc_library(
     name = "FaceMeshGpuAppLibrary",
     srcs = [
-        "AppDelegate.m",
-        "ViewController.mm",
-        "main.m",
+        "FaceMeshGpuViewController.mm",
     ],
     hdrs = [
-        "AppDelegate.h",
-        "ViewController.h",
+        "FaceMeshGpuViewController.h",
     ],
     data = [
-        "Base.lproj/LaunchScreen.storyboard",
-        "Base.lproj/Main.storyboard",
         "//mediapipe/graphs/face_mesh:face_mesh_mobile_gpu_binary_graph",
         "//mediapipe/modules/face_detection:face_detection_front.tflite",
         "//mediapipe/modules/face_landmark:face_landmark.tflite",
     ],
-    sdk_frameworks = [
-        "AVFoundation",
-        "CoreGraphics",
-        "CoreMedia",
-        "UIKit",
-    ],
     deps = [
-        "//mediapipe/objc:mediapipe_framework_ios",
-        "//mediapipe/objc:mediapipe_input_sources_ios",
-        "//mediapipe/objc:mediapipe_layer_renderer",
+        "//mediapipe/examples/ios/common:CommonMediaPipeAppLibrary",
     ] + select({
         "//mediapipe:ios_i386": [],
         "//mediapipe:ios_x86_64": [],


@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="13122.16" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" launchScreen="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="01J-lp-oVM">
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="13104.12"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="EHf-IW-A2E">
<objects>
<viewController id="01J-lp-oVM" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="Ze5-6b-2t3">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="iYj-Kq-Ea1" userLabel="First Responder" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="53" y="375"/>
</scene>
</scenes>
</document>


@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097.2" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r">
<device id="retina4_7" orientation="portrait" appearance="light"/>
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="tne-QT-ifu">
<objects>
<viewController id="BYZ-38-t0r" customClass="ViewController" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<view contentMode="scaleToFill" fixedFrame="YES" translatesAutoresizingMaskIntoConstraints="NO" id="EfB-xq-knP">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" fixedFrame="YES" text="Camera access needed for this demo. Please enable camera access in the Settings app." textAlignment="center" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="emf-N5-sEd">
<rect key="frame" x="57" y="258" width="260" height="151"/>
<autoresizingMask key="autoresizingMask" flexibleMinX="YES" flexibleMaxX="YES" flexibleMinY="YES" flexibleMaxY="YES"/>
<fontDescription key="fontDescription" type="system" pointSize="17"/>
<color key="textColor" white="1" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<nil key="highlightedColor"/>
</label>
</subviews>
<color key="backgroundColor" white="0.0" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<accessibility key="accessibilityConfiguration" label="PreviewDisplayView">
<bool key="isElement" value="YES"/>
</accessibility>
</view>
</subviews>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
<connections>
<outlet property="_liveView" destination="EfB-xq-knP" id="JQp-2n-q9q"/>
<outlet property="_noCameraLabel" destination="emf-N5-sEd" id="91G-3Z-cU3"/>
</connections>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="48.799999999999997" y="20.239880059970016"/>
</scene>
</scenes>
</document>


@ -14,8 +14,8 @@
 #import <UIKit/UIKit.h>
-@interface AppDelegate : UIResponder <UIApplicationDelegate>
-@property(strong, nonatomic) UIWindow *window;
+#import "mediapipe/examples/ios/common/CommonViewController.h"
+@interface FaceMeshGpuViewController : CommonViewController
 @end


@ -0,0 +1,65 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "FaceMeshGpuViewController.h"
#include "mediapipe/framework/formats/landmark.pb.h"
static NSString* const kGraphName = @"face_mesh_mobile_gpu";
static const char* kNumFacesInputSidePacket = "num_faces";
static const char* kLandmarksOutputStream = "multi_face_landmarks";
// Max number of faces to detect/process.
static const int kNumFaces = 1;
@implementation FaceMeshGpuViewController
#pragma mark - UIViewController methods
- (void)viewDidLoad {
[super viewDidLoad];
[self.mediapipeGraph setSidePacket:(mediapipe::MakePacket<int>(kNumFaces))
named:kNumFacesInputSidePacket];
[self.mediapipeGraph addFrameOutputStream:kLandmarksOutputStream
outputPacketType:MPPPacketTypeRaw];
}
#pragma mark - MPPGraphDelegate methods
// Receives a raw packet from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPacket:(const ::mediapipe::Packet&)packet
fromStream:(const std::string&)streamName {
if (streamName == kLandmarksOutputStream) {
if (packet.IsEmpty()) {
NSLog(@"[TS:%lld] No face landmarks", packet.Timestamp().Value());
return;
}
const auto& multi_face_landmarks = packet.Get<std::vector<::mediapipe::NormalizedLandmarkList>>();
NSLog(@"[TS:%lld] Number of face instances with landmarks: %lu", packet.Timestamp().Value(),
multi_face_landmarks.size());
for (int face_index = 0; face_index < multi_face_landmarks.size(); ++face_index) {
const auto& landmarks = multi_face_landmarks[face_index];
NSLog(@"\tNumber of landmarks for face[%d]: %d", face_index, landmarks.landmark_size());
for (int i = 0; i < landmarks.landmark_size(); ++i) {
NSLog(@"\t\tLandmark[%d]: (%f, %f, %f)", i, landmarks.landmark(i).x(),
landmarks.landmark(i).y(), landmarks.landmark(i).z());
}
}
}
}
@end
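Editor's note: the landmark callback above only logs the values. To use them for an overlay, the normalized coordinates (0.0 to 1.0, relative to the processed image) have to be scaled into the target view's coordinate space. A minimal sketch, assuming the view and the processed frames share the same aspect ratio (otherwise the FillAndCrop scaling used by the renderer has to be accounted for); the helper below is hypothetical and not part of this file:

// Hypothetical helper: maps a normalized landmark into a view's coordinate space.
// MediaPipe's normalized y axis already points down, matching UIKit.
static CGPoint PointForLandmark(const mediapipe::NormalizedLandmark& landmark,
                                CGSize viewSize) {
  return CGPointMake(landmark.x() * viewSize.width,
                     landmark.y() * viewSize.height);
}

It could be applied inside the loop above as, for example, PointForLandmark(landmarks.landmark(i), self.view.bounds.size).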


@ -2,41 +2,15 @@
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
-  <key>NSCameraUsageDescription</key>
-  <string>This app uses the camera to demonstrate live video processing.</string>
-  <key>CFBundleDevelopmentRegion</key>
-  <string>en</string>
-  <key>CFBundleExecutable</key>
-  <string>$(EXECUTABLE_NAME)</string>
-  <key>CFBundleIdentifier</key>
-  <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
-  <key>CFBundleInfoDictionaryVersion</key>
-  <string>6.0</string>
-  <key>CFBundleName</key>
-  <string>$(PRODUCT_NAME)</string>
-  <key>CFBundlePackageType</key>
-  <string>APPL</string>
-  <key>CFBundleShortVersionString</key>
-  <string>1.0</string>
-  <key>CFBundleVersion</key>
-  <string>1</string>
-  <key>LSRequiresIPhoneOS</key>
-  <true/>
-  <key>UILaunchStoryboardName</key>
-  <string>LaunchScreen</string>
-  <key>UIMainStoryboardFile</key>
-  <string>Main</string>
-  <key>UIRequiredDeviceCapabilities</key>
-  <array>
-    <string>armv7</string>
-  </array>
-  <key>UISupportedInterfaceOrientations</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
-  <key>UISupportedInterfaceOrientations~ipad</key>
-  <array>
-    <string>UIInterfaceOrientationPortrait</string>
-  </array>
+  <key>CameraPosition</key>
+  <string>front</string>
+  <key>MainViewController</key>
+  <string>FaceMeshGpuViewController</string>
+  <key>GraphOutputStream</key>
+  <string>output_video</string>
+  <key>GraphInputStream</key>
+  <string>input_video</string>
+  <key>GraphName</key>
+  <string>face_mesh_mobile_gpu</string>
 </dict>
 </plist>
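Editor's note: unlike the face-detection plists, this one also names a MainViewController. A sketch of how a class named this way could be resolved at runtime, shown only to illustrate the mechanism; the actual lookup lives in the shared example code and may differ (for instance, by instantiating from a storyboard):

// Hypothetical lookup of the view controller subclass named in Info.plist above.
NSString* controllerName =
    [[NSBundle mainBundle] objectForInfoDictionaryKey:@"MainViewController"];
Class controllerClass = NSClassFromString(controllerName);
UIViewController* controller = [[controllerClass alloc] init];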


@ -1,19 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@end


@ -1,210 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "ViewController.h"
#import "mediapipe/objc/MPPCameraInputSource.h"
#import "mediapipe/objc/MPPGraph.h"
#import "mediapipe/objc/MPPLayerRenderer.h"
#include "mediapipe/framework/formats/landmark.pb.h"
static NSString* const kGraphName = @"face_mesh_mobile_gpu";
static const char* kInputStream = "input_video";
static const char* kNumFacesInputSidePacket = "num_faces";
static const char* kOutputStream = "output_video";
static const char* kLandmarksOutputStream = "multi_face_landmarks";
static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
// Max number of faces to detect/process.
static const int kNumFaces = 1;
@interface ViewController () <MPPGraphDelegate, MPPInputSourceDelegate>
// The MediaPipe graph currently in use. Initialized in viewDidLoad, started in viewWillAppear: and
// sent video frames on _videoQueue.
@property(nonatomic) MPPGraph* mediapipeGraph;
@end
@implementation ViewController {
/// Handles camera access via AVCaptureSession library.
MPPCameraInputSource* _cameraSource;
/// Inform the user when camera is unavailable.
IBOutlet UILabel* _noCameraLabel;
/// Display the camera preview frames.
IBOutlet UIView* _liveView;
/// Render frames in a layer.
MPPLayerRenderer* _renderer;
/// Process camera frames on this queue.
dispatch_queue_t _videoQueue;
}
#pragma mark - Cleanup methods
- (void)dealloc {
self.mediapipeGraph.delegate = nil;
[self.mediapipeGraph cancel];
// Ignore errors since we're cleaning up.
[self.mediapipeGraph closeAllInputStreamsWithError:nil];
[self.mediapipeGraph waitUntilDoneWithError:nil];
}
#pragma mark - MediaPipe graph methods
+ (MPPGraph*)loadGraphFromResource:(NSString*)resource {
// Load the graph config resource.
NSError* configLoadError = nil;
NSBundle* bundle = [NSBundle bundleForClass:[self class]];
if (!resource || resource.length == 0) {
return nil;
}
NSURL* graphURL = [bundle URLForResource:resource withExtension:@"binarypb"];
NSData* data = [NSData dataWithContentsOfURL:graphURL options:0 error:&configLoadError];
if (!data) {
NSLog(@"Failed to load MediaPipe graph config: %@", configLoadError);
return nil;
}
// Parse the graph config resource into mediapipe::CalculatorGraphConfig proto object.
mediapipe::CalculatorGraphConfig config;
config.ParseFromArray(data.bytes, data.length);
// Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
[newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
[newGraph addFrameOutputStream:kLandmarksOutputStream outputPacketType:MPPPacketTypeRaw];
[newGraph setSidePacket:(mediapipe::MakePacket<int>(kNumFaces)) named:kNumFacesInputSidePacket];
return newGraph;
}
#pragma mark - UIViewController methods
- (void)viewDidLoad {
[super viewDidLoad];
_renderer = [[MPPLayerRenderer alloc] init];
_renderer.layer.frame = _liveView.layer.bounds;
[_liveView.layer addSublayer:_renderer.layer];
_renderer.frameScaleMode = MPPFrameScaleModeFillAndCrop;
dispatch_queue_attr_t qosAttribute = dispatch_queue_attr_make_with_qos_class(
DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INTERACTIVE, /*relative_priority=*/0);
_videoQueue = dispatch_queue_create(kVideoQueueLabel, qosAttribute);
_cameraSource = [[MPPCameraInputSource alloc] init];
[_cameraSource setDelegate:self queue:_videoQueue];
_cameraSource.sessionPreset = AVCaptureSessionPresetHigh;
_cameraSource.cameraPosition = AVCaptureDevicePositionFront;
// The frame's native format is rotated with respect to the portrait orientation.
_cameraSource.orientation = AVCaptureVideoOrientationPortrait;
// When using the front camera, mirror the input for a more natural look.
_cameraSource.videoMirrored = YES;
self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
self.mediapipeGraph.delegate = self;
// Set maxFramesInFlight to a small value to avoid memory contention for real-time processing.
self.mediapipeGraph.maxFramesInFlight = 2;
}
// In this application, there is only one ViewController which has no navigation to other view
// controllers, and there is only one View with live display showing the result of running the
// MediaPipe graph on the live video feed. If more view controllers are needed later, the graph
// setup/teardown and camera start/stop logic should be updated appropriately in response to the
// appearance/disappearance of this ViewController, as viewWillAppear: can be invoked multiple times
// depending on the application navigation flow in that case.
- (void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
[_cameraSource requestCameraAccessWithCompletionHandler:^void(BOOL granted) {
if (granted) {
[self startGraphAndCamera];
dispatch_async(dispatch_get_main_queue(), ^{
_noCameraLabel.hidden = YES;
});
}
}];
}
- (void)startGraphAndCamera {
// Start running self.mediapipeGraph.
NSError* error;
if (![self.mediapipeGraph startWithError:&error]) {
NSLog(@"Failed to start graph: %@", error);
}
// Start fetching frames from the camera.
dispatch_async(_videoQueue, ^{
[_cameraSource start];
});
}
#pragma mark - MPPGraphDelegate methods
// Receives CVPixelBufferRef from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
fromStream:(const std::string&)streamName {
if (streamName == kOutputStream) {
// Display the captured image on the screen.
CVPixelBufferRetain(pixelBuffer);
dispatch_async(dispatch_get_main_queue(), ^{
[_renderer renderPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);
});
}
}
// Receives a raw packet from the MediaPipe graph. Invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
didOutputPacket:(const ::mediapipe::Packet&)packet
fromStream:(const std::string&)streamName {
if (streamName == kLandmarksOutputStream) {
if (packet.IsEmpty()) {
NSLog(@"[TS:%lld] No face landmarks", packet.Timestamp().Value());
return;
}
const auto& multi_face_landmarks = packet.Get<std::vector<::mediapipe::NormalizedLandmarkList>>();
NSLog(@"[TS:%lld] Number of face instances with landmarks: %lu", packet.Timestamp().Value(),
multi_face_landmarks.size());
for (int face_index = 0; face_index < multi_face_landmarks.size(); ++face_index) {
const auto& landmarks = multi_face_landmarks[face_index];
NSLog(@"\tNumber of landmarks for face[%d]: %d", face_index, landmarks.landmark_size());
for (int i = 0; i < landmarks.landmark_size(); ++i) {
NSLog(@"\t\tLandmark[%d]: (%f, %f, %f)", i, landmarks.landmark(i).x(),
landmarks.landmark(i).y(), landmarks.landmark(i).z());
}
}
}
}
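// Illustrative helper, not part of the original example: landmark coordinates are normalized
// to [0, 1] relative to the processed image, so drawing custom overlays requires scaling them
// into view coordinates. This minimal sketch scales by the view's bounds and ignores both the
// z value and any aspect-ratio cropping applied by the renderer's FillAndCrop scale mode.
- (CGPoint)viewPointForLandmark:(const ::mediapipe::NormalizedLandmark&)landmark
                         inView:(UIView*)view {
  return CGPointMake(landmark.x() * CGRectGetWidth(view.bounds),
                     landmark.y() * CGRectGetHeight(view.bounds));
}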
#pragma mark - MPPInputSourceDelegate methods
// Must be invoked on _videoQueue.
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
timestamp:(CMTime)timestamp
fromSource:(MPPInputSource*)source {
if (source != _cameraSource) {
NSLog(@"Unknown source: %@", source);
return;
}
[self.mediapipeGraph sendPixelBuffer:imageBuffer
intoStream:kInputStream
packetType:MPPPacketTypePixelBuffer];
}
@end


@ -1,22 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
#import "AppDelegate.h"
int main(int argc, char * argv[]) {
@autoreleasepool {
return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
}
}


@ -1,21 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import <UIKit/UIKit.h>
@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property(strong, nonatomic) UIWindow *window;
@end


@ -1,59 +0,0 @@
// Copyright 2019 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#import "AppDelegate.h"
@interface AppDelegate ()
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
return YES;
}
- (void)applicationWillResignActive:(UIApplication *)application {
// Sent when the application is about to move from active to inactive state. This can occur for
// certain types of temporary interruptions (such as an incoming phone call or SMS message) or
// when the user quits the application and it begins the transition to the background state. Use
// this method to pause ongoing tasks, disable timers, and invalidate graphics rendering
// callbacks. Games should use this method to pause the game.
}
- (void)applicationDidEnterBackground:(UIApplication *)application {
// Use this method to release shared resources, save user data, invalidate timers, and store
// enough application state information to restore your application to its current state in case
// it is terminated later. If your application supports background execution, this method is
// called instead of applicationWillTerminate: when the user quits.
}
- (void)applicationWillEnterForeground:(UIApplication *)application {
// Called as part of the transition from the background to the active state; here you can undo
// many of the changes made on entering the background.
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
// Restart any tasks that were paused (or not yet started) while the application was inactive. If
// the application was previously in the background, optionally refresh the user interface.
}
- (void)applicationWillTerminate:(UIApplication *)application {
// Called when the application is about to terminate. Save data if appropriate. See also
// applicationDidEnterBackground:.
}
@end


@ -1,99 +0,0 @@
{
"images" : [
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "20x20",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "29x29",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "40x40",
"scale" : "3x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "2x"
},
{
"idiom" : "iphone",
"size" : "60x60",
"scale" : "3x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "20x20",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "29x29",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "40x40",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "1x"
},
{
"idiom" : "ipad",
"size" : "76x76",
"scale" : "2x"
},
{
"idiom" : "ipad",
"size" : "83.5x83.5",
"scale" : "2x"
},
{
"idiom" : "ios-marketing",
"size" : "1024x1024",
"scale" : "1x"
}
],
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -1,7 +0,0 @@
{
"info" : {
"version" : 1,
"author" : "xcode"
}
}


@ -33,12 +33,16 @@ alias(
 ios_application(
     name = "HandDetectionGpuApp",
+    app_icons = ["//mediapipe/examples/ios/common:AppIcon"],
     bundle_id = BUNDLE_ID_PREFIX + ".HandDetectionGpu",
     families = [
         "iphone",
         "ipad",
     ],
-    infoplists = ["Info.plist"],
+    infoplists = [
+        "//mediapipe/examples/ios/common:Info.plist",
+        "Info.plist",
+    ],
     minimum_os_version = MIN_IOS_VERSION,
     provisioning_profile = example_provisioning(),
     deps = [
@ -49,32 +53,13 @@ ios_application(
 objc_library(
     name = "HandDetectionGpuAppLibrary",
-    srcs = [
-        "AppDelegate.m",
-        "ViewController.mm",
-        "main.m",
-    ],
-    hdrs = [
-        "AppDelegate.h",
-        "ViewController.h",
-    ],
     data = [
-        "Base.lproj/LaunchScreen.storyboard",
-        "Base.lproj/Main.storyboard",
         "//mediapipe/graphs/hand_tracking:hand_detection_mobile_gpu_binary_graph",
         "//mediapipe/models:palm_detection.tflite",
        "//mediapipe/models:palm_detection_labelmap.txt",
     ],
-    sdk_frameworks = [
-        "AVFoundation",
-        "CoreGraphics",
-        "CoreMedia",
-        "UIKit",
-    ],
     deps = [
-        "//mediapipe/objc:mediapipe_framework_ios",
-        "//mediapipe/objc:mediapipe_input_sources_ios",
-        "//mediapipe/objc:mediapipe_layer_renderer",
+        "//mediapipe/examples/ios/common:CommonMediaPipeAppLibrary",
     ] + select({
         "//mediapipe:ios_i386": [],
         "//mediapipe:ios_x86_64": [],


@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="13122.16" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" launchScreen="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="01J-lp-oVM">
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="13104.12"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="EHf-IW-A2E">
<objects>
<viewController id="01J-lp-oVM" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="Ze5-6b-2t3">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="iYj-Kq-Ea1" userLabel="First Responder" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="53" y="375"/>
</scene>
</scenes>
</document>


@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="16097.2" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="BYZ-38-t0r">
<device id="retina4_7" orientation="portrait" appearance="light"/>
<dependencies>
<plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="16087"/>
<capability name="Safe area layout guides" minToolsVersion="9.0"/>
<capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/>
</dependencies>
<scenes>
<!--View Controller-->
<scene sceneID="tne-QT-ifu">
<objects>
<viewController id="BYZ-38-t0r" customClass="ViewController" sceneMemberID="viewController">
<view key="view" contentMode="scaleToFill" id="8bC-Xf-vdC">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<view contentMode="scaleToFill" fixedFrame="YES" translatesAutoresizingMaskIntoConstraints="NO" id="EfB-xq-knP">
<rect key="frame" x="0.0" y="0.0" width="375" height="667"/>
<autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/>
<subviews>
<label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" fixedFrame="YES" text="Camera access needed for this demo. Please enable camera access in the Settings app." textAlignment="center" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="emf-N5-sEd">
<rect key="frame" x="57" y="258" width="260" height="151"/>
<autoresizingMask key="autoresizingMask" flexibleMinX="YES" flexibleMaxX="YES" flexibleMinY="YES" flexibleMaxY="YES"/>
<fontDescription key="fontDescription" type="system" pointSize="17"/>
<color key="textColor" white="1" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<nil key="highlightedColor"/>
</label>
</subviews>
<color key="backgroundColor" white="0.0" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/>
<accessibility key="accessibilityConfiguration" label="PreviewDisplayView">
<bool key="isElement" value="YES"/>
</accessibility>
</view>
</subviews>
<color key="backgroundColor" red="1" green="1" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/>
<viewLayoutGuide key="safeArea" id="6Tk-OE-BBY"/>
</view>
<connections>
<outlet property="_liveView" destination="EfB-xq-knP" id="JQp-2n-q9q"/>
<outlet property="_noCameraLabel" destination="emf-N5-sEd" id="91G-3Z-cU3"/>
</connections>
</viewController>
<placeholder placeholderIdentifier="IBFirstResponder" id="dkx-z0-nzr" sceneMemberID="firstResponder"/>
</objects>
<point key="canvasLocation" x="48.799999999999997" y="20.239880059970016"/>
</scene>
</scenes>
</document>

Some files were not shown because too many files have changed in this diff.