Merged ios-ml-image with master
This commit is contained in:
commit 0e944cb764
@@ -1,27 +0,0 @@
---
name: "Build/Installation Issue"
about: Use this template for build/installation issues
labels: type:build/install

---
<em>Please make sure that this is a build/installation issue and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) documentation before raising any issues.</em>

**System information** (Please provide as much relevant information as possible)
- OS Platform and Distribution (e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4):
- Compiler version (e.g. gcc/g++ 8, Apple clang version 12.0.0):
- Programming Language and version (e.g. C++ 14, Python 3.6, Java):
- Installed using virtualenv? pip? Conda? (if Python):
- [MediaPipe version](https://github.com/google/mediapipe/releases):
- Bazel version:
- XCode and Tulsi versions (if iOS):
- Android SDK and NDK versions (if Android):
- Android [AAR](https://google.github.io/mediapipe/getting_started/android_archive_library.html) (if Android):
- OpenCV version (if running on desktop):

**Describe the problem**:

**[Provide the exact sequence of commands / steps that you executed before running into the problem](https://google.github.io/mediapipe/getting_started/getting_started.html):**

**Complete Logs:**
Include Complete Log information or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:
25 .github/ISSUE_TEMPLATE/11-tasks-issue.md vendored
@@ -1,25 +0,0 @@
---
name: "Tasks Issue"
about: Use this template for assistance with using MediaPipe Tasks (developers.google.com/mediapipe/solutions) to deploy on-device ML solutions (e.g. gesture recognition etc.) on supported platforms.
labels: type:support

---
<em>Please make sure that this is a [Tasks](https://developers.google.com/mediapipe/solutions) issue.</em>

**System information** (Please provide as much relevant information as possible)
- Have I written custom code (as opposed to using a stock example script provided in MediaPipe):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
- MediaPipe Tasks SDK version:
- Task name (e.g. Object detection, Gesture recognition etc.):
- Programming Language and version (e.g. C++, Python, Java):

**Describe the expected behavior:**

**Standalone code you may have used to try to get what you need:**

If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem:

**Other info / Complete Logs:**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:
25 .github/ISSUE_TEMPLATE/12-model-maker-issue.md vendored
@@ -1,25 +0,0 @@
---
name: "Model Maker Issue"
about: Use this template for assistance with using MediaPipe Model Maker (developers.google.com/mediapipe/solutions) to create custom on-device ML solutions.
labels: type:support

---
<em>Please make sure that this is a [Model Maker](https://developers.google.com/mediapipe/solutions) issue.</em>

**System information** (Please provide as much relevant information as possible)
- Have I written custom code (as opposed to using a stock example script provided in MediaPipe):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version (e.g. 3.8):
- [MediaPipe Model Maker version](https://pypi.org/project/mediapipe-model-maker/):
- Task name (e.g. Image classification, Gesture recognition etc.):

**Describe the expected behavior:**

**Standalone code you may have used to try to get what you need:**

If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem:

**Other info / Complete Logs:**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:
26 .github/ISSUE_TEMPLATE/13-solution-issue.md vendored
@@ -1,26 +0,0 @@
---
name: "Solution (legacy) Issue"
about: Use this template for assistance with a specific MediaPipe solution (google.github.io/mediapipe/solutions) such as "Pose", including inference model usage/training, solution-specific calculators etc.
labels: type:support

---
<em>Please make sure that this is a [solution](https://google.github.io/mediapipe/solutions/solutions.html) issue.</em>

**System information** (Please provide as much relevant information as possible)
- Have I written custom code (as opposed to using a stock example script provided in MediaPipe):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
- [MediaPipe version](https://github.com/google/mediapipe/releases):
- Bazel version:
- Solution (e.g. FaceMesh, Pose, Holistic):
- Programming Language and version (e.g. C++, Python, Java):

**Describe the expected behavior:**

**Standalone code you may have used to try to get what you need:**

If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/repo link/any notebook:

**Other info / Complete Logs:**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:
19 .github/ISSUE_TEMPLATE/14-studio-issue.md vendored
@@ -1,19 +0,0 @@
---
name: "Studio Issue"
about: Use this template for assistance with the MediaPipe Studio application.
labels: type:support

---
<em>Please make sure that this is a MediaPipe Studio issue.</em>

**System information** (Please provide as much relevant information as possible)
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
- Browser and Version:
- Any microphone or camera hardware:
- URL that shows the problem:

**Describe the expected behavior:**

**Other info / Complete Logs:**
Include any JS console logs that would be helpful to diagnose the problem.
Large logs and files should be attached:
51 .github/ISSUE_TEMPLATE/20-documentation-issue.md vendored
@@ -1,51 +0,0 @@
---
name: "Documentation Issue"
about: Use this template for documentation related issues
labels: type:docs

---
Thank you for submitting a MediaPipe documentation issue.
The MediaPipe docs are open source! To get involved, read the documentation Contributor Guide

## URL(s) with the issue:

Please provide a link to the documentation entry, for example: https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md#models

## Description of issue (what needs changing):

Kinds of documentation problems:

### Clear description

For example, why should someone use this method? How is it useful?

### Correct links

Is the link to the source code correct?

### Parameters defined

Are all parameters defined and formatted correctly?

### Returns defined

Are return values defined?

### Raises listed and defined

Are the errors defined? For example,

### Usage example

Is there a usage example?

See the API guide on how to write testable usage examples.

### Request visuals, if applicable

Are there currently visuals? If not, will it clarify the content?

### Submit a pull request?

Are you planning to also submit a pull request to fix the issue? See the docs
https://github.com/google/mediapipe/blob/master/CONTRIBUTING.md
32 .github/ISSUE_TEMPLATE/30-bug-issue.md vendored
@@ -1,32 +0,0 @@
---
name: "Bug Issue"
about: Use this template for reporting a bug
labels: type:bug

---
<em>Please make sure that this is a bug and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) and FAQ documentation before raising any issues.</em>

**System information** (Please provide as much relevant information as possible)

- Have I written custom code (as opposed to using a stock example script provided in MediaPipe):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- Browser and version (e.g. Google Chrome, Safari) if the issue happens in a browser:
- Programming Language and version (e.g. C++, Python, Java):
- [MediaPipe version](https://github.com/google/mediapipe/releases):
- Bazel version (if compiling from source):
- Solution (e.g. FaceMesh, Pose, Holistic):
- Android Studio, NDK, SDK versions (if issue is related to building in an Android environment):
- Xcode & Tulsi version (if issue is related to building for iOS):

**Describe the current behavior:**

**Describe the expected behavior:**

**Standalone code to reproduce the issue:**
Provide a reproducible test case that is the bare minimum necessary to replicate the problem. If possible, please share a link to Colab/repo link/any notebook:

**Other info / Complete Logs:**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached:
24 .github/ISSUE_TEMPLATE/40-feature-request.md vendored
@@ -1,24 +0,0 @@
---
name: "Feature Request"
about: Use this template for raising a feature request
labels: type:feature

---
<em>Please make sure that this is a feature request.</em>

**System information** (Please provide as much relevant information as possible)

- MediaPipe Solution (you are using):
- Programming language: C++/TypeScript/Python/Objective-C/Android Java
- Are you willing to contribute it (Yes/No):

**Describe the feature and the current behavior/state:**

**Will this change the current API? How?**

**Who will benefit from this feature?**

**Please specify the use cases for this feature:**

**Any other info:**
73 .github/ISSUE_TEMPLATE/Documentation_issue_template.yaml vendored Normal file
@@ -0,0 +1,73 @@
name: Documentation issue
description: Use this template for documentation related issues. If this doesn’t look right, choose a different type.
labels: 'type:doc-bug'
body:
  - type: markdown
    id: link
    attributes:
      value: Thank you for submitting a MediaPipe documentation issue. The MediaPipe docs are open source! To get involved, read the documentation Contributor Guide
  - type: markdown
    id: url
    attributes:
      value: URL(s) with the issue. Please provide a link to the documentation entry, for example https://github.com/google/mediapipe/blob/master/docs/solutions/face_mesh.md#models
  - type: input
    id: description
    attributes:
      label: Description of issue (what needs changing)
      description: Kinds of documentation problems
  - type: input
    id: clear_desc
    attributes:
      label: Clear description
      description: For example, why should someone use this method? How is it useful?
    validations:
      required: true
  - type: input
    id: link
    attributes:
      label: Correct links
      description: Is the link to the source code correct?
    validations:
      required: false
  - type: input
    id: parameter
    attributes:
      label: Parameters defined
      description: Are all parameters defined and formatted correctly?
    validations:
      required: false
  - type: input
    id: returns
    attributes:
      label: Returns defined
      description: Are return values defined?
    validations:
      required: false
  - type: input
    id: raises
    attributes:
      label: Raises listed and defined
      description: Are the errors defined? For example,
    validations:
      required: false
  - type: input
    id: usage
    attributes:
      label: Usage example
      description: Is there a usage example? See the API guide on how to write testable usage examples.
    validations:
      required: false
  - type: input
    id: visual
    attributes:
      label: Request visuals, if applicable
      description: Are there currently visuals? If not, will it clarify the content?
    validations:
      required: false
  - type: input
    id: pull
    attributes:
      label: Submit a pull request?
      description: Are you planning to also submit a pull request to fix the issue? See the [docs](https://github.com/google/mediapipe/blob/master/CONTRIBUTING.md)
    validations:
      required: false
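The YAML templates added in this commit follow GitHub's issue-forms schema: a top-level name/description/labels header plus a `body` list of typed elements, where `markdown` elements carry an `attributes.value` and `input`/`dropdown`/`textarea` elements carry an `attributes.label` (dropdowns additionally need `attributes.options`). As a minimal, stdlib-only sketch of those structural rules (the function name and error messages are illustrative, not part of this commit; the dict stands in for what a YAML loader would return):

```python
# Structural sanity check for a GitHub issue-form definition, given as a
# plain dict (the shape yaml.safe_load would produce from these files).
def validate_issue_form(form: dict) -> list[str]:
    errors = []
    # The form header needs name, description, and a body list.
    for key in ("name", "description", "body"):
        if key not in form:
            errors.append(f"missing top-level key: {key}")
    for i, element in enumerate(form.get("body", [])):
        etype = element.get("type")
        if etype is None:
            errors.append(f"body[{i}]: missing 'type'")
            continue
        attrs = element.get("attributes", {})
        # markdown blocks carry a 'value'; form fields carry a 'label'.
        if etype == "markdown" and "value" not in attrs:
            errors.append(f"body[{i}]: markdown element needs attributes.value")
        if etype in ("input", "dropdown", "textarea") and "label" not in attrs:
            errors.append(f"body[{i}]: {etype} element needs attributes.label")
        if etype == "dropdown" and not attrs.get("options"):
            errors.append(f"body[{i}]: dropdown element needs attributes.options")
    return errors

form = {
    "name": "Documentation issue",
    "description": "Use this template for documentation related issues.",
    "body": [
        {"type": "markdown", "attributes": {"value": "Thank you!"}},
        {"type": "input", "attributes": {"label": "Clear description"},
         "validations": {"required": True}},
    ],
}
print(validate_issue_form(form))  # → []
```

Running a check like this in CI would catch the most common template regression: a field added without a label, or a dropdown without options, which GitHub silently rejects at render time.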
80 .github/ISSUE_TEMPLATE/Solution(Legacy_issue_template).yaml vendored Normal file
@@ -0,0 +1,80 @@
name: Solution(Legacy) Issue
description: Use this template for assistance with a specific MediaPipe solution (google.github.io/mediapipe/solutions) such as "Pose", including inference model usage/training, solution-specific calculators etc.
labels: 'type:support'
body:
  - type: markdown
    id: linkmodel
    attributes:
      value: Please make sure that this is a [solution](https://google.github.io/mediapipe/solutions/solutions.html) issue.
  - type: dropdown
    id: customcode_model
    attributes:
      label: Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: input
    id: os_model
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: false
  - type: input
    id: mediapipe_version
    attributes:
      label: MediaPipe version
    validations:
      required: false
  - type: input
    id: bazel_version
    attributes:
      label: Bazel version
    validations:
      required: false
  - type: input
    id: solution
    attributes:
      label: Solution
      placeholder: e.g. FaceMesh, Pose, Holistic
    validations:
      required: false
  - type: input
    id: programminglang
    attributes:
      label: Programming Language and version
      placeholder: e.g. C++, Python, Java
    validations:
      required: false
  - type: textarea
    id: current_model
    attributes:
      label: Describe the actual behavior
      render: shell
    validations:
      required: false
  - type: textarea
    id: expected_model
    attributes:
      label: Describe the expected behavior
      render: shell
    validations:
      required: false
  - type: textarea
    id: what-happened_model
    attributes:
      label: Standalone code/steps you may have used to try to get what you need
      description: If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem
      render: shell
    validations:
      required: false
  - type: textarea
    id: other_info
    attributes:
      label: Other info / Complete Logs
      description: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached
      render: shell
    validations:
      required: false
112 .github/ISSUE_TEMPLATE/bug_issue_template.yaml vendored Normal file
@@ -0,0 +1,112 @@
name: Bug Issues
description: Use this template for reporting a bug. If this doesn’t look right, choose a different type.
labels: 'type:bug'
body:
  - type: markdown
    id: link
    attributes:
      value: Please make sure that this is a bug and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) and FAQ documentation before raising any issues.
  - type: dropdown
    id: customcode_model
    attributes:
      label: Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: input
    id: os
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: true
  - type: input
    id: mobile_device
    attributes:
      label: Mobile device if the issue happens on mobile device
      placeholder: e.g. iPhone 8, Pixel 2, Samsung Galaxy
    validations:
      required: false
  - type: input
    id: browser_version
    attributes:
      label: Browser and version if the issue happens on browser
      placeholder: e.g. Google Chrome 109.0.5414.119, Safari 16.3
    validations:
      required: false
  - type: input
    id: programminglang
    attributes:
      label: Programming Language and version
      placeholder: e.g. C++, Python, Java
    validations:
      required: true
  - type: input
    id: mediapipever
    attributes:
      label: MediaPipe version
      placeholder: e.g. 0.8.11, 0.9.1
    validations:
      required: false
  - type: input
    id: bazelver
    attributes:
      label: Bazel version
      placeholder: e.g. 5.0, 5.1
    validations:
      required: false
  - type: input
    id: solution
    attributes:
      label: Solution
      placeholder: e.g. FaceMesh, Pose, Holistic
    validations:
      required: true
  - type: input
    id: sdkndkversion
    attributes:
      label: Android Studio, NDK, SDK versions (if issue is related to building in Android environment)
    validations:
      required: false
  - type: input
    id: xcode_ver
    attributes:
      label: Xcode & Tulsi version (if issue is related to building for iOS)
    validations:
      required: false
  - type: textarea
    id: current_model
    attributes:
      label: Describe the actual behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: expected_model
    attributes:
      label: Describe the expected behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: what-happened_model
    attributes:
      label: Standalone code/steps you may have used to try to get what you need
      description: If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem
      render: shell
    validations:
      required: true
  - type: textarea
    id: other_info
    attributes:
      label: Other info / Complete Logs
      description: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached
      render: shell
    validations:
      required: false
109 .github/ISSUE_TEMPLATE/build.install_issue_template.yaml vendored Normal file
@@ -0,0 +1,109 @@
name: Build/Install Issue
description: Use this template to report a build/install issue
labels: 'type:build/install'
body:
  - type: markdown
    id: link
    attributes:
      value: Please make sure that this is a build/installation issue and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) documentation before raising any issues.
  - type: input
    id: os
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: true
  - type: input
    id: compilerversion
    attributes:
      label: Compiler version
      placeholder: e.g. gcc/g++ 8, Apple clang version 12.0.0
    validations:
      required: false
  - type: input
    id: programminglang
    attributes:
      label: Programming Language and version
      placeholder: e.g. C++ 14, Python 3.6, Java
    validations:
      required: true
  - type: input
    id: virtualenv
    attributes:
      label: Installed using virtualenv? pip? Conda? (if Python)
    validations:
      required: false
  - type: input
    id: mediapipever
    attributes:
      label: MediaPipe version
      placeholder: e.g. 0.8.11, 0.9.1
    validations:
      required: false
  - type: input
    id: bazelver
    attributes:
      label: Bazel version
      placeholder: e.g. 5.0, 5.1
    validations:
      required: false
  - type: input
    id: xcodeversion
    attributes:
      label: XCode and Tulsi versions (if iOS)
    validations:
      required: false
  - type: input
    id: sdkndkversion
    attributes:
      label: Android SDK and NDK versions (if Android)
    validations:
      required: false
  - type: dropdown
    id: androidaar
    attributes:
      label: Android AAR (if Android)
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: input
    id: opencvversion
    attributes:
      label: OpenCV version (if running on desktop)
    validations:
      required: false
  - type: textarea
    id: what-happened
    attributes:
      label: Describe the problem
      description: Provide the exact sequence of commands / steps that you executed before running into the [problem](https://google.github.io/mediapipe/getting_started/getting_started.html)
      placeholder: Tell us what you see!
      render: shell
    validations:
      required: true
  - type: textarea
    id: code-to-reproduce
    attributes:
      label: Complete Logs
      description: Include Complete Log information or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached
      placeholder: Tell us what you see!
      render: shell
    validations:
      required: true
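This commit migrates legacy markdown templates (a `---` front-matter block with `name`/`about`/`labels`) to issue-form YAML files whose headers carry `name`/`description`/`labels`. A minimal sketch of that front-matter-to-header mapping, assuming the layout shown in the deleted files (the helper name and the empty `body: []` stub are illustrative, not part of the commit):

```python
import re

# A legacy markdown template's front matter, as in the deleted files.
LEGACY = """\
---
name: "Build/Installation Issue"
about: Use this template for build/installation issues
labels: type:build/install

---
"""

def front_matter_to_form_header(text: str) -> str:
    # Grab everything between the opening and closing '---' fences
    # (assumes well-formed front matter, as in the templates above).
    match = re.search(r"^---\n(.*?)\n---", text, re.S)
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            # Split on the first colon only, so values like
            # 'type:build/install' survive intact.
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    # Map legacy keys onto the issue-form header: name -> name,
    # about -> description, labels -> labels (single-quoted).
    return "\n".join([
        f"name: {fields['name']}",
        f"description: {fields['about']}",
        f"labels: '{fields['labels']}'",
        "body: []",
    ])

print(front_matter_to_form_header(LEGACY))
```

The actual commit goes further than this mechanical mapping, of course: it also turns each free-text bullet of the old templates into a typed `input`, `dropdown`, or `textarea` element.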
64 .github/ISSUE_TEMPLATE/feature_request_issue_template.yaml vendored Normal file
@@ -0,0 +1,64 @@
name: Feature Request Issues
description: Use this template for raising a feature request. If this doesn’t look right, choose a different type.
labels: 'type:feature'
body:
  - type: markdown
    id: linkmodel
    attributes:
      value: Please make sure that this is a feature request.
  - type: input
    id: solution
    attributes:
      label: MediaPipe Solution (you are using)
    validations:
      required: false
  - type: input
    id: pgmlang
    attributes:
      label: Programming language
      placeholder: C++/TypeScript/Python/Objective-C/Android Java
    validations:
      required: false
  - type: dropdown
    id: willingcon
    attributes:
      label: Are you willing to contribute it
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: textarea
    id: behaviour
    attributes:
      label: Describe the feature and the current behavior/state
      render: shell
    validations:
      required: true
  - type: textarea
    id: api_change
    attributes:
      label: Will this change the current API? How?
      render: shell
    validations:
      required: false
  - type: textarea
    id: benifit
    attributes:
      label: Who will benefit from this feature?
    validations:
      required: false
  - type: textarea
    id: use_case
    attributes:
      label: Please specify the use cases for this feature
      render: shell
    validations:
      required: true
  - type: textarea
    id: info_other
    attributes:
      label: Any other info
      render: shell
    validations:
      required: false
73 .github/ISSUE_TEMPLATE/model_maker_issue_template.yaml vendored Normal file
@@ -0,0 +1,73 @@
name: Model Maker Issues
description: Use this template for assistance with using MediaPipe Model Maker (developers.google.com/mediapipe/solutions) to create custom on-device ML solutions.
labels: 'type:modelmaker'
body:
  - type: markdown
    id: linkmodel
    attributes:
      value: Please make sure that this is a [Model Maker](https://developers.google.com/mediapipe/solutions) issue.
  - type: dropdown
    id: customcode_model
    attributes:
      label: Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: input
    id: os_model
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: true
  - type: input
    id: pythonver
    attributes:
      label: Python Version
      placeholder: e.g. 3.7, 3.8
    validations:
      required: true
  - type: input
    id: modelmakerver
    attributes:
      label: MediaPipe Model Maker version
    validations:
      required: false
  - type: input
    id: taskname
    attributes:
      label: Task name (e.g. Image classification, Gesture recognition etc.)
    validations:
      required: true
  - type: textarea
    id: current_model
    attributes:
      label: Describe the actual behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: expected_model
    attributes:
      label: Describe the expected behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: what-happened_model
    attributes:
      label: Standalone code/steps you may have used to try to get what you need
      description: If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem
      render: shell
    validations:
      required: true
  - type: textarea
    id: other_info
    attributes:
      label: Other info / Complete Logs
      description: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached
      render: shell
    validations:
      required: false
63 .github/ISSUE_TEMPLATE/studio_issue_template.yaml vendored Normal file
@@ -0,0 +1,63 @@
name: Studio Issues
description: Use this template for assistance with the MediaPipe Studio application. If this doesn’t look right, choose a different type.
labels: 'type:support'
body:
  - type: markdown
    id: linkmodel
    attributes:
      value: Please make sure that this is a MediaPipe Studio issue.
  - type: input
    id: os_model
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: false
  - type: input
    id: browserver
    attributes:
      label: Browser and Version
    validations:
      required: false
  - type: input
    id: hardware
    attributes:
      label: Any microphone or camera hardware
    validations:
      required: false
  - type: input
    id: url
    attributes:
      label: URL that shows the problem
    validations:
      required: false
  - type: textarea
    id: current_model
    attributes:
      label: Describe the actual behavior
      render: shell
    validations:
      required: false
  - type: textarea
    id: expected_model
    attributes:
      label: Describe the expected behavior
      render: shell
    validations:
      required: false
  - type: textarea
    id: what-happened_model
    attributes:
      label: Standalone code/steps you may have used to try to get what you need
      description: If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, GitHub repo link or anything that we can use to reproduce the problem
      render: shell
    validations:
      required: false
  - type: textarea
    id: other_info
    attributes:
      label: Other info / Complete Logs
      description: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached
      render: shell
    validations:
      required: false
72 .github/ISSUE_TEMPLATE/task_issue_template.yaml vendored Normal file
@@ -0,0 +1,72 @@
name: Task Issue
description: Use this template for assistance with using MediaPipe Tasks (developers.google.com/mediapipe/solutions) to deploy on-device ML solutions (e.g. gesture recognition etc.) on supported platforms.
labels: 'type:task'
body:
  - type: markdown
    id: linkmodel
    attributes:
      value: Please make sure that this is a [Tasks](https://developers.google.com/mediapipe/solutions) issue.
  - type: dropdown
    id: customcode_model
    attributes:
      label: Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
      options:
        - 'Yes'
        - 'No'
    validations:
      required: false
  - type: input
    id: os_model
    attributes:
      label: OS Platform and Distribution
      placeholder: e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4
    validations:
      required: true
  - type: input
    id: task-sdk-version
    attributes:
      label: MediaPipe Tasks SDK version
    validations:
      required: false
  - type: input
    id: taskname
    attributes:
      label: Task name (e.g. Image classification, Gesture recognition etc.)
    validations:
      required: true
  - type: input
    id: programminglang
    attributes:
      label: Programming Language and version (e.g. C++, Python, Java)
    validations:
      required: true
  - type: textarea
    id: current_model
    attributes:
      label: Describe the actual behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: expected_model
    attributes:
      label: Describe the expected behavior
      render: shell
    validations:
      required: true
  - type: textarea
    id: what-happened_model
    attributes:
      label: Standalone code/steps you may have used to try to get what you need
      description: If there is a problem, provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab, a GitHub repo, or anything else that we can use to reproduce the problem.
      render: shell
    validations:
      required: true
  - type: textarea
    id: other_info
    attributes:
      label: Other info / Complete Logs
      description: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
      render: shell
    validations:
      required: false
13 WORKSPACE
@@ -10,12 +10,11 @@ bind(
 http_archive(
     name = "bazel_skylib",
     type = "tar.gz",
-    sha256 = "74d544d96f4a5bb630d465ca8bbcfe231e3594e5aae57e1edbf17a6eb3ca2506",
     urls = [
-        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.0.3/bazel-skylib-1.0.3.tar.gz",
-        "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.0.3/bazel-skylib-1.0.3.tar.gz",
+        "https://storage.googleapis.com/mirror.tensorflow.org/github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
+        "https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
     ],
+    sha256 = "1c531376ac7e5a180e0237938a2536de0c54d93f5c278634818e0efc952dd56c",
 )
 load("@bazel_skylib//:workspace.bzl", "bazel_skylib_workspace")
 bazel_skylib_workspace()

@@ -455,9 +454,9 @@ http_archive(
 )

 # TensorFlow repo should always go after the other external dependencies.
-# TF on 2022-08-10.
-_TENSORFLOW_GIT_COMMIT = "af1d5bc4fbb66d9e6cc1cf89503014a99233583b"
-_TENSORFLOW_SHA256 = "f85a5443264fc58a12d136ca6a30774b5bc25ceaf7d114d97f252351b3c3a2cb"
+# TF on 2023-02-02.
+_TENSORFLOW_GIT_COMMIT = "581840e12c7762a3deef66b25a549218ca1e3983"
+_TENSORFLOW_SHA256 = "27f8f51e34b5065ac5411332eb4ad02f1d954257036d4863810d0c394d044bc9"
 http_archive(
     name = "org_tensorflow",
     urls = [
@@ -54,6 +54,25 @@ used for its improved inference speed. Please refer to the
 [model cards](./models.md#face_detection) for details. Default to `0` if not
 specified.

+Note: Not available for JavaScript (use "model" instead).
+
+#### model
+
+A string value to indicate which model should be used. Use "short" to select a
+short-range model that works best for faces within 2 meters from the camera,
+and "full" for a full-range model best for faces within 5 meters. For the
+full-range option, a sparse model is used for its improved inference speed.
+Please refer to the model cards for details. Default to empty string.
+
+Note: Valid only for the JavaScript solution.
+
+#### selfie_mode
+
+A boolean value to indicate whether to flip the images/video frames
+horizontally or not. Default to `false`.
+
+Note: Valid only for the JavaScript solution.
+
 #### min_detection_confidence

 Minimum confidence value (`[0.0, 1.0]`) from the face detection model for the

@@ -146,9 +165,9 @@ Please first see general [introduction](../getting_started/javascript.md) on
 MediaPipe in JavaScript, then learn more in the companion [web demo](#resources)
 and the following usage example.

-Supported configuration options:
-
-* [modelSelection](#model_selection)
+Supported face detection options:
+
+* [selfieMode](#selfie_mode)
+* [model](#model)
+* [minDetectionConfidence](#min_detection_confidence)

 ```html

@@ -176,6 +195,7 @@ Supported configuration options:
 const videoElement = document.getElementsByClassName('input_video')[0];
 const canvasElement = document.getElementsByClassName('output_canvas')[0];
 const canvasCtx = canvasElement.getContext('2d');
+const drawingUtils = window;

 function onResults(results) {
   // Draw the overlays.

@@ -199,7 +219,7 @@ const faceDetection = new FaceDetection({locateFile: (file) => {
   return `https://cdn.jsdelivr.net/npm/@mediapipe/face_detection@0.0/${file}`;
 }});
 faceDetection.setOptions({
-  modelSelection: 0,
+  model: 'short',
   minDetectionConfidence: 0.5
 });
 faceDetection.onResults(onResults);
@@ -203,6 +203,7 @@ class AudioToTensorCalculator : public Node {
   std::unique_ptr<audio_dsp::QResampler<float>> resampler_;
   Matrix sample_buffer_;
   int processed_buffer_cols_ = 0;
+  double gain_ = 1.0;

   // The internal state of the FFT library.
   PFFFT_Setup* fft_state_ = nullptr;

@@ -278,7 +279,9 @@ absl::Status AudioToTensorCalculator::Open(CalculatorContext* cc) {
   padding_samples_after_ = options.padding_samples_after();
   dft_tensor_format_ = options.dft_tensor_format();
   flush_mode_ = options.flush_mode();
+
+  if (options.has_volume_gain_db()) {
+    gain_ = pow(10, options.volume_gain_db() / 20.0);
+  }
   RET_CHECK(kAudioSampleRateIn(cc).IsConnected() ^
             !kAudioIn(cc).Header().IsEmpty())
       << "Must either specify the time series header of the \"AUDIO\" stream "

@@ -344,6 +347,10 @@ absl::Status AudioToTensorCalculator::Process(CalculatorContext* cc) {
   const Matrix& input = channels_match ? input_frame
                                        // Mono mixdown.
                                        : input_frame.colwise().mean();
+  if (gain_ != 1.0) {
+    return stream_mode_ ? ProcessStreamingData(cc, input * gain_)
+                        : ProcessNonStreamingData(cc, input * gain_);
+  }
   return stream_mode_ ? ProcessStreamingData(cc, input)
                       : ProcessNonStreamingData(cc, input);
 }
@@ -81,4 +81,8 @@ message AudioToTensorCalculatorOptions {
     WITH_DC_AND_NYQUIST = 3;
   }
   optional DftTensorFormat dft_tensor_format = 11 [default = WITH_NYQUIST];
+
+  // The volume gain, measured in dB.
+  // Scale the input audio amplitude by 10^(volume_gain_db/20).
+  optional double volume_gain_db = 12;
 }
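The dB-to-linear conversion this option uses (scale amplitude by 10^(volume_gain_db/20), applied to the possibly mono-mixed samples) can be sketched in plain Python; the function name and sample values are illustrative, not part of the calculator's API.

```python
def apply_volume_gain(samples, volume_gain_db):
    """Scale audio amplitudes by 10^(volume_gain_db / 20), per the proto comment."""
    gain = 10.0 ** (volume_gain_db / 20.0)
    return [s * gain for s in samples]

# 20 dB corresponds to a 10x amplitude scale; 0 dB leaves samples unchanged.
louder = apply_volume_gain([0.1, -0.2, 0.3], 20.0)
```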
@@ -167,6 +167,7 @@ cc_test(
     "//mediapipe/framework:calculator_framework",
     "//mediapipe/framework:calculator_runner",
     "//mediapipe/framework/formats:detection_cc_proto",
+    "//mediapipe/framework/formats:location_data_cc_proto",
     "//mediapipe/framework/port:gtest_main",
     "//mediapipe/framework/port:parse_text_proto",
   ],

@@ -413,6 +414,7 @@ cc_library(
     ":filter_detections_calculator_cc_proto",
     "//mediapipe/framework:calculator_framework",
     "//mediapipe/framework/formats:detection_cc_proto",
+    "//mediapipe/framework/formats:location_data_cc_proto",
     "//mediapipe/framework/port:status",
     "@com_google_absl//absl/memory",
   ],
@@ -21,11 +21,13 @@
 #include "mediapipe/calculators/util/filter_detections_calculator.pb.h"
 #include "mediapipe/framework/calculator_framework.h"
 #include "mediapipe/framework/formats/detection.pb.h"
+#include "mediapipe/framework/formats/location_data.pb.h"
 #include "mediapipe/framework/port/status.h"

 namespace mediapipe {

 const char kInputDetectionsTag[] = "INPUT_DETECTIONS";
+const char kImageSizeTag[] = "IMAGE_SIZE";  // <width, height>
 const char kOutputDetectionsTag[] = "OUTPUT_DETECTIONS";

 //

@@ -41,6 +43,10 @@ class FilterDetectionsCalculator : public CalculatorBase {
     cc->Inputs().Tag(kInputDetectionsTag).Set<std::vector<Detection>>();
     cc->Outputs().Tag(kOutputDetectionsTag).Set<std::vector<Detection>>();

+    if (cc->Inputs().HasTag(kImageSizeTag)) {
+      cc->Inputs().Tag(kImageSizeTag).Set<std::pair<int, int>>();
+    }
+
     return absl::OkStatus();
   }

@@ -48,21 +54,51 @@ class FilterDetectionsCalculator : public CalculatorBase {
     cc->SetOffset(TimestampDiff(0));
     options_ = cc->Options<mediapipe::FilterDetectionsCalculatorOptions>();

+    if (options_.has_min_pixel_size() || options_.has_max_pixel_size()) {
+      RET_CHECK(cc->Inputs().HasTag(kImageSizeTag));
+    }
+
     return absl::OkStatus();
   }

   absl::Status Process(CalculatorContext* cc) final {
     const auto& input_detections =
         cc->Inputs().Tag(kInputDetectionsTag).Get<std::vector<Detection>>();

     auto output_detections = absl::make_unique<std::vector<Detection>>();

+    int image_width = 0;
+    int image_height = 0;
+    if (cc->Inputs().HasTag(kImageSizeTag)) {
+      std::tie(image_width, image_height) =
+          cc->Inputs().Tag(kImageSizeTag).Get<std::pair<int, int>>();
+    }
+
     for (const Detection& detection : input_detections) {
-      RET_CHECK_GT(detection.score_size(), 0);
-      // Note: only score at index 0 supported.
-      if (detection.score(0) >= options_.min_score()) {
-        output_detections->push_back(detection);
+      if (options_.has_min_score()) {
+        RET_CHECK_GT(detection.score_size(), 0);
+        // Note: only score at index 0 supported.
+        if (detection.score(0) < options_.min_score()) {
+          continue;
+        }
       }
+      // Matches rect_size in
+      // mediapipe/calculators/util/rect_to_render_scale_calculator.cc
+      const float rect_size =
+          std::max(detection.location_data().relative_bounding_box().width() *
+                       image_width,
+                   detection.location_data().relative_bounding_box().height() *
+                       image_height);
+      if (options_.has_min_pixel_size()) {
+        if (rect_size < options_.min_pixel_size()) {
+          continue;
+        }
+      }
+      if (options_.has_max_pixel_size()) {
+        if (rect_size > options_.max_pixel_size()) {
+          continue;
+        }
+      }
+      output_detections->push_back(detection);
     }

     cc->Outputs()
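The per-detection checks this change introduces can be sketched in Python (the function and dict keys are illustrative): a detection passes when its first score meets min_score and its bounding-box size in pixels, max(width * image_width, height * image_height), falls within [min_pixel_size, max_pixel_size].

```python
def filter_detections(detections, image_size=None, min_score=None,
                      min_pixel_size=None, max_pixel_size=None):
    """Mirror of FilterDetectionsCalculator's per-detection filtering.

    Each detection is a dict {"score": float, "w": float, "h": float},
    where w/h are relative bounding-box sizes; image_size is (width, height).
    """
    width, height = image_size if image_size else (0, 0)
    kept = []
    for d in detections:
        if min_score is not None and d["score"] < min_score:
            continue
        # Same definition as rect_size in rect_to_render_scale_calculator.cc.
        rect_size = max(d["w"] * width, d["h"] * height)
        if min_pixel_size is not None and rect_size < min_pixel_size:
            continue
        if max_pixel_size is not None and rect_size > max_pixel_size:
            continue
        kept.append(d)
    return kept
```

With a 100x100 image and min_pixel_size of 50, a 0.5 x 0.49 box (rect_size 50) is kept while a 0.49 x 0.49 box (rect_size 49) is dropped, matching the new unit test's expectations.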
@@ -25,4 +25,10 @@ message FilterDetectionsCalculatorOptions {

   // Detections lower than this score get filtered out.
   optional float min_score = 1;
+
+  // Detections smaller than this size *in pixels* get filtered out.
+  optional float min_pixel_size = 2;
+
+  // Detections larger than this size *in pixels* get filtered out.
+  optional float max_pixel_size = 3;
 }
@@ -17,6 +17,7 @@
 #include "mediapipe/framework/calculator_framework.h"
 #include "mediapipe/framework/calculator_runner.h"
 #include "mediapipe/framework/formats/detection.pb.h"
+#include "mediapipe/framework/formats/location_data.pb.h"
 #include "mediapipe/framework/port/gmock.h"
 #include "mediapipe/framework/port/gtest.h"
 #include "mediapipe/framework/port/parse_text_proto.h"

@@ -27,8 +28,8 @@ namespace {

 using ::testing::ElementsAre;

-absl::Status RunGraph(std::vector<Detection>& input_detections,
-                      std::vector<Detection>* output_detections) {
+absl::Status RunScoreGraph(std::vector<Detection>& input_detections,
+                           std::vector<Detection>* output_detections) {
   CalculatorRunner runner(R"pb(
     calculator: "FilterDetectionsCalculator"
     input_stream: "INPUT_DETECTIONS:input_detections"

@@ -53,7 +54,7 @@ absl::Status RunScoreGraph(std::vector<Detection>& input_detections,
   return absl::OkStatus();
 }

-TEST(FilterDetectionsCalculatorTest, TestFilterDetections) {
+TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsScore) {
   std::vector<Detection> input_detections;
   Detection d1, d2;
   d1.add_score(0.2);

@@ -62,12 +63,12 @@ TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsScore) {
   input_detections.push_back(d2);

   std::vector<Detection> output_detections;
-  MP_EXPECT_OK(RunGraph(input_detections, &output_detections));
+  MP_EXPECT_OK(RunScoreGraph(input_detections, &output_detections));

   EXPECT_THAT(output_detections, ElementsAre(mediapipe::EqualsProto(d2)));
 }

-TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsMultiple) {
+TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsScoreMultiple) {
   std::vector<Detection> input_detections;
   Detection d1, d2, d3, d4;
   d1.add_score(0.3);

@@ -80,7 +81,7 @@ TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsScoreMultiple) {
   input_detections.push_back(d4);

   std::vector<Detection> output_detections;
-  MP_EXPECT_OK(RunGraph(input_detections, &output_detections));
+  MP_EXPECT_OK(RunScoreGraph(input_detections, &output_detections));

   EXPECT_THAT(output_detections, ElementsAre(mediapipe::EqualsProto(d3),
                                              mediapipe::EqualsProto(d4)));

@@ -90,10 +91,69 @@ TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsEmpty) {
   std::vector<Detection> input_detections;

   std::vector<Detection> output_detections;
-  MP_EXPECT_OK(RunGraph(input_detections, &output_detections));
+  MP_EXPECT_OK(RunScoreGraph(input_detections, &output_detections));

   EXPECT_EQ(output_detections.size(), 0);
 }

+absl::Status RunSizeGraph(std::vector<Detection>& input_detections,
+                          std::pair<int, int> image_dimensions,
+                          std::vector<Detection>* output_detections) {
+  CalculatorRunner runner(R"pb(
+    calculator: "FilterDetectionsCalculator"
+    input_stream: "INPUT_DETECTIONS:input_detections"
+    input_stream: "IMAGE_SIZE:image_dimensions"
+    output_stream: "OUTPUT_DETECTIONS:output_detections"
+    options {
+      [mediapipe.FilterDetectionsCalculatorOptions.ext] { min_pixel_size: 50 }
+    }
+  )pb");
+
+  const Timestamp input_timestamp = Timestamp(0);
+  runner.MutableInputs()
+      ->Tag("INPUT_DETECTIONS")
+      .packets.push_back(MakePacket<std::vector<Detection>>(input_detections)
+                             .At(input_timestamp));
+  runner.MutableInputs()
+      ->Tag("IMAGE_SIZE")
+      .packets.push_back(MakePacket<std::pair<int, int>>(image_dimensions)
+                             .At(input_timestamp));
+  MP_RETURN_IF_ERROR(runner.Run()) << "Calculator run failed.";
+
+  const std::vector<Packet>& output_packets =
+      runner.Outputs().Tag("OUTPUT_DETECTIONS").packets;
+  RET_CHECK_EQ(output_packets.size(), 1);
+
+  *output_detections = output_packets[0].Get<std::vector<Detection>>();
+  return absl::OkStatus();
+}
+
+TEST(FilterDetectionsCalculatorTest, TestFilterDetectionsMinSize) {
+  std::vector<Detection> input_detections;
+  Detection d1, d2, d3, d4, d5;
+  d1.mutable_location_data()->mutable_relative_bounding_box()->set_height(0.5);
+  d1.mutable_location_data()->mutable_relative_bounding_box()->set_width(0.49);
+  d2.mutable_location_data()->mutable_relative_bounding_box()->set_height(0.4);
+  d2.mutable_location_data()->mutable_relative_bounding_box()->set_width(0.4);
+  d3.mutable_location_data()->mutable_relative_bounding_box()->set_height(0.49);
+  d3.mutable_location_data()->mutable_relative_bounding_box()->set_width(0.5);
+  d4.mutable_location_data()->mutable_relative_bounding_box()->set_height(0.49);
+  d4.mutable_location_data()->mutable_relative_bounding_box()->set_width(0.49);
+  d5.mutable_location_data()->mutable_relative_bounding_box()->set_height(0.5);
+  d5.mutable_location_data()->mutable_relative_bounding_box()->set_width(0.5);
+  input_detections.push_back(d1);
+  input_detections.push_back(d2);
+  input_detections.push_back(d3);
+  input_detections.push_back(d4);
+  input_detections.push_back(d5);
+
+  std::vector<Detection> output_detections;
+  MP_EXPECT_OK(RunSizeGraph(input_detections, {100, 100}, &output_detections));
+
+  EXPECT_THAT(output_detections, ElementsAre(mediapipe::EqualsProto(d1),
+                                             mediapipe::EqualsProto(d3),
+                                             mediapipe::EqualsProto(d5)));
+}
+
 }  // namespace
 }  // namespace mediapipe
@@ -53,14 +53,10 @@
 #include "mediapipe/framework/port/status.h"
 #include "mediapipe/framework/scheduler.h"
 #include "mediapipe/framework/thread_pool_executor.pb.h"
+#include "mediapipe/gpu/gpu_service.h"

 namespace mediapipe {

-#if !MEDIAPIPE_DISABLE_GPU
-class GpuResources;
-struct GpuSharedData;
-#endif  // !MEDIAPIPE_DISABLE_GPU
-
 typedef absl::StatusOr<OutputStreamPoller> StatusOrPoller;

 // The class representing a DAG of calculator nodes.
@@ -251,13 +251,8 @@ TEST_F(TextClassifierTest, BertLongPositive) {
   TextClassifierResult expected;
   std::vector<Category> categories;

-  // Predicted scores are slightly different across platforms.
-#ifdef __APPLE__
-  categories.push_back(
-      {/*index=*/1, /*score=*/0.974181, /*category_name=*/"positive"});
-  categories.push_back(
-      {/*index=*/0, /*score=*/0.025819, /*category_name=*/"negative"});
-#elif defined _WIN32
+  // Predicted scores are slightly different on Windows.
+#ifdef _WIN32
   categories.push_back(
       {/*index=*/1, /*score=*/0.976686, /*category_name=*/"positive"});
   categories.push_back(

@@ -267,7 +262,7 @@ TEST_F(TextClassifierTest, BertLongPositive) {
       {/*index=*/1, /*score=*/0.985889, /*category_name=*/"positive"});
   categories.push_back(
       {/*index=*/0, /*score=*/0.014112, /*category_name=*/"negative"});
-#endif  // __APPLE__
+#endif  // _WIN32

   expected.classifications.emplace_back(
       Classifications{/*categories=*/categories,
@@ -84,8 +84,8 @@ TEST_P(HandednessToMatrixCalculatorTest, OutputsCorrectResult) {
 INSTANTIATE_TEST_CASE_P(
     HandednessToMatrixCalculatorTests, HandednessToMatrixCalculatorTest,
     testing::ValuesIn<HandednessToMatrixCalculatorTestCase>(
-        {{.test_name = "TestWithRightHand", .handedness = 0.01f},
-         {.test_name = "TestWithLeftHand", .handedness = 0.99f}}),
+        {{/* test_name= */ "TestWithRightHand", /* handedness= */ 0.01f},
+         {/* test_name= */ "TestWithLeftHand", /* handedness= */ 0.99f}}),
     [](const testing::TestParamInfo<
        HandednessToMatrixCalculatorTest::ParamType>& info) {
       return info.param.test_name;
33 mediapipe/tasks/ios/components/utils/BUILD Normal file
@@ -0,0 +1,33 @@
# Copyright 2023 The MediaPipe Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

package(default_visibility = ["//mediapipe/tasks:internal"])

licenses(["notice"])

objc_library(
    name = "MPPCosineSimilarity",
    srcs = ["sources/MPPCosineSimilarity.mm"],
    hdrs = ["sources/MPPCosineSimilarity.h"],
    copts = [
        "-ObjC++",
        "-std=c++17",
        "-x objective-c++",
    ],
    deps = [
        "//mediapipe/tasks/ios/common:MPPCommon",
        "//mediapipe/tasks/ios/common/utils:MPPCommonUtils",
        "//mediapipe/tasks/ios/components/containers:MPPEmbedding",
    ],
)
@@ -0,0 +1,48 @@
// Copyright 2023 The MediaPipe Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#import <Foundation/Foundation.h>

#import "mediapipe/tasks/ios/components/containers/sources/MPPEmbedding.h"

NS_ASSUME_NONNULL_BEGIN

/** Utility class for computing cosine similarity between `MPPEmbedding` objects. */
NS_SWIFT_NAME(CosineSimilarity)

@interface MPPCosineSimilarity : NSObject

- (instancetype)init NS_UNAVAILABLE;

+ (instancetype)new NS_UNAVAILABLE;

/** Utility function to compute [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
 * between two `MPPEmbedding` objects.
 *
 * @param embedding1 One of the two `MPPEmbedding`s between which cosine similarity is to be
 * computed.
 * @param embedding2 One of the two `MPPEmbedding`s between which cosine similarity is to be
 * computed.
 * @param error An optional error parameter populated when there is an error in calculating the
 * cosine similarity between the two embeddings.
 *
 * @return An `NSNumber` which holds the cosine similarity of type `double`.
 */
+ (nullable NSNumber *)computeBetweenEmbedding1:(MPPEmbedding *)embedding1
                                  andEmbedding2:(MPPEmbedding *)embedding2
                                          error:(NSError **)error;

@end

NS_ASSUME_NONNULL_END
@@ -0,0 +1,88 @@
// Copyright 2023 The MediaPipe Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#import "mediapipe/tasks/ios/components/utils/sources/MPPCosineSimilarity.h"

#include <math.h>

#import "mediapipe/tasks/ios/common/sources/MPPCommon.h"
#import "mediapipe/tasks/ios/common/utils/sources/MPPCommonUtils.h"

@implementation MPPCosineSimilarity

+ (nullable NSNumber *)computeBetweenVector1:(NSArray<NSNumber *> *)u
                                  andVector2:(NSArray<NSNumber *> *)v
                                     isFloat:(BOOL)isFloat
                                       error:(NSError **)error {
  if (u.count != v.count) {
    [MPPCommonUtils
        createCustomError:error
                 withCode:MPPTasksErrorCodeInvalidArgumentError
              description:[NSString stringWithFormat:@"Cannot compute cosine similarity between "
                                                     @"embeddings of different sizes (%lu vs %lu).",
                                                     static_cast<u_long>(u.count),
                                                     static_cast<u_long>(v.count)]];
    return nil;
  }

  __block double dotProduct = 0.0;
  __block double normU = 0.0;
  __block double normV = 0.0;

  [u enumerateObjectsUsingBlock:^(NSNumber *num, NSUInteger idx, BOOL *stop) {
    double uVal = 0.0;
    double vVal = 0.0;

    if (isFloat) {
      uVal = num.floatValue;
      vVal = v[idx].floatValue;
    } else {
      uVal = num.charValue;
      vVal = v[idx].charValue;
    }

    dotProduct += uVal * vVal;
    normU += uVal * uVal;
    normV += vVal * vVal;
  }];

  return [NSNumber numberWithDouble:dotProduct / sqrt(normU * normV)];
}

+ (nullable NSNumber *)computeBetweenEmbedding1:(MPPEmbedding *)embedding1
                                  andEmbedding2:(MPPEmbedding *)embedding2
                                          error:(NSError **)error {
  if (embedding1.floatEmbedding && embedding2.floatEmbedding) {
    return [MPPCosineSimilarity computeBetweenVector1:embedding1.floatEmbedding
                                           andVector2:embedding2.floatEmbedding
                                              isFloat:YES
                                                error:error];
  }

  if (embedding1.quantizedEmbedding && embedding2.quantizedEmbedding) {
    return [MPPCosineSimilarity computeBetweenVector1:embedding1.quantizedEmbedding
                                           andVector2:embedding2.quantizedEmbedding
                                              isFloat:NO
                                                error:error];
  }

  [MPPCommonUtils
      createCustomError:error
               withCode:MPPTasksErrorCodeInvalidArgumentError
            description:
                @"Cannot compute cosine similarity between quantized and float embeddings."];
  return nil;
}

@end
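The formula the utility implements, dot(u, v) / sqrt(|u|^2 * |v|^2) after an equal-length check, can be sketched in a few lines of Python (the function name is illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    if len(u) != len(v):
        raise ValueError(
            f"Cannot compute cosine similarity between embeddings of "
            f"different sizes ({len(u)} vs {len(v)}).")
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u)
    norm_v = sum(b * b for b in v)
    return dot / math.sqrt(norm_u * norm_v)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
```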
80 mediapipe/tasks/ios/test/text/text_embedder/BUILD Normal file
@@ -0,0 +1,80 @@
load(
    "@build_bazel_rules_apple//apple:ios.bzl",
    "ios_unit_test",
)
load(
    "@build_bazel_rules_swift//swift:swift.bzl",
    "swift_library",
)
load(
    "//mediapipe/tasks:ios/ios.bzl",
    "MPP_TASK_MINIMUM_OS_VERSION",
)
load(
    "@org_tensorflow//tensorflow/lite:special_rules.bzl",
    "tflite_ios_lab_runner",
)

package(default_visibility = ["//mediapipe/tasks:internal"])

licenses(["notice"])

# Default tags for filtering iOS targets. Targets are restricted to Apple platforms.
TFL_DEFAULT_TAGS = [
    "apple",
]

# Following sanitizer tests are not supported by iOS test targets.
TFL_DISABLED_SANITIZER_TAGS = [
    "noasan",
    "nomsan",
    "notsan",
]

objc_library(
    name = "MPPTextEmbedderObjcTestLibrary",
    testonly = 1,
    srcs = ["MPPTextEmbedderTests.m"],
    data = [
        "//mediapipe/tasks/testdata/text:mobilebert_embedding_model",
        "//mediapipe/tasks/testdata/text:regex_embedding_with_metadata",
    ],
    deps = [
        "//mediapipe/tasks/ios/common:MPPCommon",
        "//mediapipe/tasks/ios/text/text_embedder:MPPTextEmbedder",
    ],
)

ios_unit_test(
    name = "MPPTextEmbedderObjcTest",
    minimum_os_version = MPP_TASK_MINIMUM_OS_VERSION,
    runner = tflite_ios_lab_runner("IOS_LATEST"),
    deps = [
        ":MPPTextEmbedderObjcTestLibrary",
    ],
)

swift_library(
    name = "MPPTextEmbedderSwiftTestLibrary",
    testonly = 1,
    srcs = ["TextEmbedderTests.swift"],
    data = [
        "//mediapipe/tasks/testdata/text:mobilebert_embedding_model",
        "//mediapipe/tasks/testdata/text:regex_embedding_with_metadata",
    ],
    tags = TFL_DEFAULT_TAGS,
    deps = [
        "//mediapipe/tasks/ios/common:MPPCommon",
        "//mediapipe/tasks/ios/text/text_embedder:MPPTextEmbedder",
    ],
)

ios_unit_test(
    name = "MPPTextEmbedderSwiftTest",
    minimum_os_version = MPP_TASK_MINIMUM_OS_VERSION,
    runner = tflite_ios_lab_runner("IOS_LATEST"),
    tags = TFL_DEFAULT_TAGS + TFL_DISABLED_SANITIZER_TAGS,
    deps = [
        ":MPPTextEmbedderSwiftTestLibrary",
    ],
)
@ -0,0 +1,246 @@
|
|||
// Copyright 2023 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#import <XCTest/XCTest.h>

#import "mediapipe/tasks/ios/common/sources/MPPCommon.h"
#import "mediapipe/tasks/ios/text/text_embedder/sources/MPPTextEmbedder.h"

static NSString *const kBertTextEmbedderModelName = @"mobilebert_embedding_with_metadata";
static NSString *const kRegexTextEmbedderModelName = @"regex_one_embedding_with_metadata";
static NSString *const kText1 = @"it's a charming and often affecting journey";
static NSString *const kText2 = @"what a great and fantastic trip";
static NSString *const kExpectedErrorDomain = @"com.google.mediapipe.tasks";
static const float kFloatDiffTolerance = 1e-4;
static const float kSimilarityDiffTolerance = 1e-4;

#define AssertEqualErrors(error, expectedError)                                               \
  XCTAssertNotNil(error);                                                                     \
  XCTAssertEqualObjects(error.domain, expectedError.domain);                                  \
  XCTAssertEqual(error.code, expectedError.code);                                             \
  XCTAssertNotEqual(                                                                          \
      [error.localizedDescription rangeOfString:expectedError.localizedDescription].location, \
      NSNotFound)

#define AssertTextEmbedderResultHasOneEmbedding(textEmbedderResult) \
  XCTAssertNotNil(textEmbedderResult);                              \
  XCTAssertNotNil(textEmbedderResult.embeddingResult);              \
  XCTAssertEqual(textEmbedderResult.embeddingResult.embeddings.count, 1);

#define AssertEmbeddingType(embedding, quantized)  \
  if (quantized) {                                 \
    XCTAssertNil(embedding.floatEmbedding);        \
    XCTAssertNotNil(embedding.quantizedEmbedding); \
  } else {                                         \
    XCTAssertNotNil(embedding.floatEmbedding);     \
    XCTAssertNil(embedding.quantizedEmbedding);    \
  }

#define AssertEmbeddingHasExpectedValues(embedding, expectedLength, expectedFirstValue, quantize) \
  XCTAssertEqual(embedding.count, expectedLength);                                                \
  if (quantize) {                                                                                 \
    XCTAssertEqual(embedding[0].charValue, expectedFirstValue);                                   \
  } else {                                                                                        \
    XCTAssertEqualWithAccuracy(embedding[0].floatValue, expectedFirstValue, kFloatDiffTolerance); \
  }

@interface MPPTextEmbedderTests : XCTestCase
@end

@implementation MPPTextEmbedderTests

- (NSString *)filePathWithName:(NSString *)fileName extension:(NSString *)extension {
  return [[NSBundle bundleForClass:self.class] pathForResource:fileName ofType:extension];
}

- (MPPTextEmbedder *)textEmbedderFromModelFileWithName:(NSString *)modelName {
  NSString *modelPath = [self filePathWithName:modelName extension:@"tflite"];

  NSError *error = nil;
  MPPTextEmbedder *textEmbedder = [[MPPTextEmbedder alloc] initWithModelPath:modelPath
                                                                       error:&error];
  XCTAssertNotNil(textEmbedder);

  return textEmbedder;
}

- (MPPTextEmbedderOptions *)textEmbedderOptionsWithModelName:(NSString *)modelName {
  NSString *modelPath = [self filePathWithName:modelName extension:@"tflite"];
  MPPTextEmbedderOptions *textEmbedderOptions = [[MPPTextEmbedderOptions alloc] init];
  textEmbedderOptions.baseOptions.modelAssetPath = modelPath;

  return textEmbedderOptions;
}

- (MPPEmbedding *)assertFloatEmbeddingResultsOfEmbedText:(NSString *)text
                                       usingTextEmbedder:(MPPTextEmbedder *)textEmbedder
                                                hasCount:(NSUInteger)embeddingCount
                                              firstValue:(float)firstValue {
  MPPTextEmbedderResult *embedderResult = [textEmbedder embedText:text error:nil];
  AssertTextEmbedderResultHasOneEmbedding(embedderResult);

  AssertEmbeddingType(embedderResult.embeddingResult.embeddings[0],  // embedding
                      NO                                             // quantized
  );

  AssertEmbeddingHasExpectedValues(
      embedderResult.embeddingResult.embeddings[0].floatEmbedding,  // embedding
      embeddingCount,                                               // expectedLength
      firstValue,                                                   // expectedFirstValue
      NO                                                            // quantize
  );

  return embedderResult.embeddingResult.embeddings[0];
}

- (MPPEmbedding *)assertQuantizedEmbeddingResultsOfEmbedText:(NSString *)text
                                           usingTextEmbedder:(MPPTextEmbedder *)textEmbedder
                                                    hasCount:(NSUInteger)embeddingCount
                                                  firstValue:(char)firstValue {
  MPPTextEmbedderResult *embedderResult = [textEmbedder embedText:text error:nil];
  AssertTextEmbedderResultHasOneEmbedding(embedderResult);

  AssertEmbeddingType(embedderResult.embeddingResult.embeddings[0],  // embedding
                      YES                                            // quantized
  );

  AssertEmbeddingHasExpectedValues(
      embedderResult.embeddingResult.embeddings[0].quantizedEmbedding,  // embedding
      embeddingCount,                                                   // expectedLength
      firstValue,                                                       // expectedFirstValue
      YES                                                               // quantize
  );

  return embedderResult.embeddingResult.embeddings[0];
}

- (void)testCreateTextEmbedderFailsWithMissingModelPath {
  NSString *modelPath = [self filePathWithName:@"" extension:@""];

  NSError *error = nil;
  MPPTextEmbedder *textEmbedder = [[MPPTextEmbedder alloc] initWithModelPath:modelPath
                                                                       error:&error];
  XCTAssertNil(textEmbedder);

  NSError *expectedError = [NSError
      errorWithDomain:kExpectedErrorDomain
                 code:MPPTasksErrorCodeInvalidArgumentError
             userInfo:@{
               NSLocalizedDescriptionKey :
                   @"INVALID_ARGUMENT: ExternalFile must specify at least one of 'file_content', "
                   @"'file_name', 'file_pointer_meta' or 'file_descriptor_meta'."
             }];
  AssertEqualErrors(error,         // error
                    expectedError  // expectedError
  );
}

- (void)testEmbedWithBertSucceeds {
  MPPTextEmbedder *textEmbedder =
      [self textEmbedderFromModelFileWithName:kBertTextEmbedderModelName];

  MPPEmbedding *embedding1 = [self assertFloatEmbeddingResultsOfEmbedText:kText1
                                                        usingTextEmbedder:textEmbedder
                                                                 hasCount:512
                                                               firstValue:21.214869f];

  MPPEmbedding *embedding2 = [self assertFloatEmbeddingResultsOfEmbedText:kText2
                                                        usingTextEmbedder:textEmbedder
                                                                 hasCount:512
                                                               firstValue:22.626251f];
  NSNumber *cosineSimilarity = [MPPTextEmbedder cosineSimilarityBetweenEmbedding1:embedding1
                                                                    andEmbedding2:embedding2
                                                                            error:nil];

  XCTAssertEqualWithAccuracy(cosineSimilarity.doubleValue, 0.971417490189,
                             kSimilarityDiffTolerance);
}

- (void)testEmbedWithRegexSucceeds {
  MPPTextEmbedder *textEmbedder =
      [self textEmbedderFromModelFileWithName:kRegexTextEmbedderModelName];

  MPPEmbedding *embedding1 = [self assertFloatEmbeddingResultsOfEmbedText:kText1
                                                        usingTextEmbedder:textEmbedder
                                                                 hasCount:16
                                                               firstValue:0.030935612f];

  MPPEmbedding *embedding2 = [self assertFloatEmbeddingResultsOfEmbedText:kText2
                                                        usingTextEmbedder:textEmbedder
                                                                 hasCount:16
                                                               firstValue:0.0312863f];

  NSNumber *cosineSimilarity = [MPPTextEmbedder cosineSimilarityBetweenEmbedding1:embedding1
                                                                    andEmbedding2:embedding2
                                                                            error:nil];

  XCTAssertEqualWithAccuracy(cosineSimilarity.doubleValue, 0.999937f, kSimilarityDiffTolerance);
}

- (void)testEmbedWithBertAndDifferentThemesSucceeds {
  MPPTextEmbedder *textEmbedder =
      [self textEmbedderFromModelFileWithName:kBertTextEmbedderModelName];

  MPPEmbedding *embedding1 = [self
      assertFloatEmbeddingResultsOfEmbedText:
          @"When you go to this restaurant, they hold the pancake upside-down before they "
          @"hand it to you. It's a great gimmick."
                           usingTextEmbedder:textEmbedder
                                    hasCount:512
                                  firstValue:43.1663];

  MPPEmbedding *embedding2 = [self
      assertFloatEmbeddingResultsOfEmbedText:
          @"Let's make a plan to steal the declaration of independence."
                           usingTextEmbedder:textEmbedder
                                    hasCount:512
                                  firstValue:48.0898];

  NSNumber *cosineSimilarity = [MPPTextEmbedder cosineSimilarityBetweenEmbedding1:embedding1
                                                                    andEmbedding2:embedding2
                                                                            error:nil];

  // TODO: The similarity should likely be lower.
  XCTAssertEqualWithAccuracy(cosineSimilarity.doubleValue, 0.98151f, kSimilarityDiffTolerance);
}

- (void)testEmbedWithQuantizeSucceeds {
  MPPTextEmbedderOptions *options =
      [self textEmbedderOptionsWithModelName:kBertTextEmbedderModelName];
  options.quantize = YES;

  MPPTextEmbedder *textEmbedder = [[MPPTextEmbedder alloc] initWithOptions:options error:nil];
  XCTAssertNotNil(textEmbedder);

  MPPEmbedding *embedding1 = [self
      assertQuantizedEmbeddingResultsOfEmbedText:@"it's a charming and often affecting journey"
                               usingTextEmbedder:textEmbedder
                                        hasCount:512
                                      firstValue:127];

  MPPEmbedding *embedding2 =
      [self assertQuantizedEmbeddingResultsOfEmbedText:@"what a great and fantastic trip"
                                     usingTextEmbedder:textEmbedder
                                              hasCount:512
                                            firstValue:127];
  NSNumber *cosineSimilarity = [MPPTextEmbedder cosineSimilarityBetweenEmbedding1:embedding1
                                                                    andEmbedding2:embedding2
                                                                            error:nil];
  XCTAssertEqualWithAccuracy(cosineSimilarity.doubleValue, 0.88164f, kSimilarityDiffTolerance);
}

@end
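The similarity assertions above all go through `cosineSimilarityBetweenEmbedding1:andEmbedding2:`. As an illustrative sketch of the underlying math (not MediaPipe's actual implementation), cosine similarity is the dot product of the two embedding vectors divided by the product of their norms:

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (||u|| * ||v||): 1.0 for vectors pointing the same way,
    # 0.0 for orthogonal ones.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for the 512-dimensional BERT outputs above.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # 0.0 (orthogonal)
```

This is why the two semantically close review sentences (`kText1`, `kText2`) score around 0.97 rather than exactly 1.0.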

@@ -0,0 +1,121 @@
// Copyright 2023 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import MPPCommon
import XCTest

@testable import MPPTextEmbedder

/// These tests are only for validating the Swift function signatures of the TextEmbedder.
/// Objective C tests of the TextEmbedder provide more coverage with unit tests for
/// different models and text embedder options. They can be found here:
/// /mediapipe/tasks/ios/test/text/text_embedder/MPPTextEmbedderTests.m

class TextEmbedderTests: XCTestCase {

  static let bundle = Bundle(for: TextEmbedderTests.self)

  static let bertModelPath = bundle.path(
    forResource: "mobilebert_embedding_with_metadata",
    ofType: "tflite")

  static let text1 = "it's a charming and often affecting journey"

  static let text2 = "what a great and fantastic trip"

  static let floatDiffTolerance: Float = 1e-4

  static let doubleDiffTolerance: Double = 1e-4

  func assertEqualErrorDescriptions(
    _ error: Error, expectedLocalizedDescription: String
  ) {
    XCTAssertEqual(
      error.localizedDescription,
      expectedLocalizedDescription)
  }

  func assertTextEmbedderResultHasOneEmbedding(
    _ textEmbedderResult: TextEmbedderResult
  ) {
    XCTAssertEqual(textEmbedderResult.embeddingResult.embeddings.count, 1)
  }

  func assertEmbeddingIsFloat(
    _ embedding: Embedding
  ) {
    XCTAssertNil(embedding.quantizedEmbedding)
    XCTAssertNotNil(embedding.floatEmbedding)
  }

  func assertEmbedding(
    _ floatEmbedding: [NSNumber],
    hasCount embeddingCount: Int,
    hasFirstValue firstValue: Float
  ) {
    XCTAssertEqual(floatEmbedding.count, embeddingCount)
    XCTAssertEqual(
      floatEmbedding[0].floatValue,
      firstValue,
      accuracy: TextEmbedderTests.floatDiffTolerance)
  }

  func assertFloatEmbeddingResultsForEmbed(
    text: String,
    using textEmbedder: TextEmbedder,
    hasCount embeddingCount: Int,
    hasFirstValue firstValue: Float
  ) throws -> Embedding {
    let textEmbedderResult = try XCTUnwrap(
      textEmbedder.embed(text: text))
    assertTextEmbedderResultHasOneEmbedding(textEmbedderResult)
    assertEmbeddingIsFloat(textEmbedderResult.embeddingResult.embeddings[0])
    assertEmbedding(
      textEmbedderResult.embeddingResult.embeddings[0].floatEmbedding!,
      hasCount: embeddingCount,
      hasFirstValue: firstValue)

    return textEmbedderResult.embeddingResult.embeddings[0]
  }

  func testEmbedWithBertSucceeds() throws {
    let modelPath = try XCTUnwrap(TextEmbedderTests.bertModelPath)
    let textEmbedder = try XCTUnwrap(TextEmbedder(modelPath: modelPath))

    let embedding1 = try assertFloatEmbeddingResultsForEmbed(
      text: TextEmbedderTests.text1,
      using: textEmbedder,
      hasCount: 512,
      hasFirstValue: 21.214869)

    let embedding2 = try assertFloatEmbeddingResultsForEmbed(
      text: TextEmbedderTests.text2,
      using: textEmbedder,
      hasCount: 512,
      hasFirstValue: 22.626251)

    let cosineSimilarity = try XCTUnwrap(
      TextEmbedder.cosineSimilarity(
        embedding1: embedding1,
        embedding2: embedding2))

    XCTAssertEqual(
      cosineSimilarity.doubleValue,
      0.97141,
      accuracy: TextEmbedderTests.doubleDiffTolerance)
  }
}
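`testEmbedWithQuantizeSucceeds` in the Objective-C tests above asserts a first value of 127 once `options.quantize = YES`, i.e. the embedding comes back as signed bytes instead of floats. A toy sketch of such scalar quantization (our own illustrative scheme, not MediaPipe's exact formula):

```python
def quantize_embedding(values):
    # Scale so the largest magnitude maps to 127, then clamp to the int8 range.
    peak = max(abs(v) for v in values) or 1.0
    return [max(-128, min(127, round(v / peak * 127))) for v in values]

print(quantize_embedding([0.5, -0.25, 1.0]))  # [64, -32, 127]
```

The dominant component saturating at 127 matches what both quantized test sentences assert for their first value.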

@@ -49,6 +49,7 @@ objc_library(
         "//mediapipe/tasks/cc/text/text_embedder:text_embedder_graph",
         "//mediapipe/tasks/ios/common/utils:MPPCommonUtils",
         "//mediapipe/tasks/ios/common/utils:NSStringHelpers",
+        "//mediapipe/tasks/ios/components/utils:MPPCosineSimilarity",
         "//mediapipe/tasks/ios/core:MPPTaskInfo",
         "//mediapipe/tasks/ios/core:MPPTaskOptions",
         "//mediapipe/tasks/ios/core:MPPTextPacketCreator",
@@ -86,6 +86,24 @@ NS_SWIFT_NAME(TextEmbedder)
 
 - (instancetype)init NS_UNAVAILABLE;
 
+/**
+ * Utility function to compute [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
+ * between two `MPPEmbedding` objects.
+ *
+ * @param embedding1 One of the two `MPPEmbedding`s between which cosine similarity is to be
+ * computed.
+ * @param embedding2 One of the two `MPPEmbedding`s between which cosine similarity is to be
+ * computed.
+ * @param error An optional error parameter populated when there is an error in calculating cosine
+ * similarity between two embeddings.
+ *
+ * @return An `NSNumber` which holds the cosine similarity of type `double`.
+ */
++ (nullable NSNumber *)cosineSimilarityBetweenEmbedding1:(MPPEmbedding *)embedding1
+                                           andEmbedding2:(MPPEmbedding *)embedding2
+                                                   error:(NSError **)error
+    NS_SWIFT_NAME(cosineSimilarity(embedding1:embedding2:));
+
 + (instancetype)new NS_UNAVAILABLE;
 
 @end

@@ -16,6 +16,7 @@
 #import "mediapipe/tasks/ios/common/utils/sources/MPPCommonUtils.h"
 #import "mediapipe/tasks/ios/common/utils/sources/NSString+Helpers.h"
+#import "mediapipe/tasks/ios/components/utils/sources/MPPCosineSimilarity.h"
 #import "mediapipe/tasks/ios/core/sources/MPPTaskInfo.h"
 #import "mediapipe/tasks/ios/core/sources/MPPTextPacketCreator.h"
 #import "mediapipe/tasks/ios/text/core/sources/MPPTextTaskRunner.h"

@@ -93,4 +94,12 @@ static NSString *const kTaskGraphName = @"mediapipe.tasks.text.text_embedder.Tex
                   .value()[kEmbeddingsOutStreamName.cppString]];
 }
 
++ (nullable NSNumber *)cosineSimilarityBetweenEmbedding1:(MPPEmbedding *)embedding1
+                                           andEmbedding2:(MPPEmbedding *)embedding2
+                                                   error:(NSError **)error {
+  return [MPPCosineSimilarity computeBetweenEmbedding1:embedding1
+                                         andEmbedding2:embedding2
+                                                 error:error];
+}
+
 @end

@@ -41,10 +41,10 @@ NS_SWIFT_NAME(MPImage)
 /**
  * The display orientation of the image. If `imageSourceType` is `MPPImageSourceTypeImage`, the
- * default value is `image.imageOrientation`; otherwise the default value is `UIImageOrientationUp`.
- * If the `MPPImage` is being used as input for any MediaPipe vision tasks and is set to any
- * orientation other than `UIImageOrientationUp`, inference will be performed on a rotated copy of
- * the image according to the orientation.
+ * default value is `image.imageOrientation`; otherwise the default value is
+ * `UIImageOrientationUp`. If the `MPPImage` is being used as input for any MediaPipe vision tasks
+ * and is set to any orientation other than `UIImageOrientationUp`, inference will be performed on
+ * a rotated copy of the image according to the orientation.
  */
 @property(nonatomic, readonly) UIImageOrientation orientation;

@@ -63,9 +63,9 @@ NS_SWIFT_NAME(MPImage)
 /**
  * Initializes an `MPPImage` object with the given `UIImage`.
  * The orientation of the newly created `MPPImage` will be `UIImageOrientationUp`.
- * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be performed
- * on the it without any rotation. To create an `MPPImage` with a different orientation, please
- * use `[MPPImage initWithImage:orientation:error:]`.
+ * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be
+ * performed on it without any rotation. To create an `MPPImage` with a different orientation,
+ * please use `[MPPImage initWithImage:orientation:error:]`.
  *
  * @param image The image to use as the source. Its `CGImage` property must not be `NULL`.
  * @param error An optional error parameter populated when there is an error in initializing the

@@ -84,12 +84,12 @@ NS_SWIFT_NAME(MPImage)
  *
  * @param image The image to use as the source. Its `CGImage` property must not be `NULL`.
  * @param orientation The display orientation of the image. This will be stored in the property
  * `orientation`. `MPPImage`.
  * @param error An optional error parameter populated when there is an error in initializing the
  * `MPPImage`.
  *
  * @return A new `MPPImage` instance with the given image as the source. `nil` if the given
  * `image` is `nil` or invalid.
  */
 - (nullable instancetype)initWithUIImage:(UIImage *)image
                              orientation:(UIImageOrientation)orientation

@@ -99,14 +99,14 @@ NS_SWIFT_NAME(MPImage)
  * Initializes an `MPPImage` object with the given pixel buffer.
  *
  * The orientation of the newly created `MPPImage` will be `UIImageOrientationUp`.
- * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be performed
- * on the it without any rotation. To create an `MPPImage` with a different orientation, please
- * use `[MPPImage initWithPixelBuffer:orientation:error:]`.
+ * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be
+ * performed on it without any rotation. To create an `MPPImage` with a different
+ * orientation, please use `[MPPImage initWithPixelBuffer:orientation:error:]`.
  *
  * @param pixelBuffer The pixel buffer to use as the source. It will be retained by the new
  * `MPPImage` instance for the duration of its lifecycle.
  * @param error An optional error parameter populated when there is an error in initializing the
  * `MPPImage`.
  *
  * @return A new `MPPImage` instance with the given pixel buffer as the source. `nil` if the
  * given pixel buffer is `nil` or invalid.

@@ -123,7 +123,7 @@ NS_SWIFT_NAME(MPImage)
  * `MPPImage` instance for the duration of its lifecycle.
  * @param orientation The display orientation of the image.
  * @param error An optional error parameter populated when there is an error in initializing the
  * `MPPImage`.
  *
  * @return A new `MPPImage` instance with the given orientation and pixel buffer as the source.
  * `nil` if the given pixel buffer is `nil` or invalid.

@@ -136,16 +136,16 @@ NS_SWIFT_NAME(MPImage)
  * Initializes an `MPPImage` object with the given sample buffer.
  *
  * The orientation of the newly created `MPPImage` will be `UIImageOrientationUp`.
- * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be performed
- * on the it without any rotation. To create an `MPPImage` with a different orientation, please
- * use `[MPPImage initWithSampleBuffer:orientation:error:]`.
+ * Hence, if this image is used as input for any MediaPipe vision tasks, inference will be
+ * performed on it without any rotation. To create an `MPPImage` with a different orientation,
+ * please use `[MPPImage initWithSampleBuffer:orientation:error:]`.
  *
  * @param sampleBuffer The sample buffer to use as the source. It will be retained by the new
  * `MPPImage` instance for the duration of its lifecycle. The sample buffer must be based on
- * a pixel buffer (not compressed data). In practice, it should be the video output of the camera on
- * an iOS device, not other arbitrary types of `CMSampleBuffer`s.
+ * a pixel buffer (not compressed data). In practice, it should be the video output of the
+ * camera on an iOS device, not other arbitrary types of `CMSampleBuffer`s.
  * @return A new `MPPImage` instance with the given sample buffer as the source. `nil` if the
  * given sample buffer is `nil` or invalid.
  */
 - (nullable instancetype)initWithSampleBuffer:(CMSampleBufferRef)sampleBuffer
                                         error:(NSError **)error;

@@ -158,11 +158,11 @@ NS_SWIFT_NAME(MPImage)
  *
  * @param sampleBuffer The sample buffer to use as the source. It will be retained by the new
  * `MPPImage` instance for the duration of its lifecycle. The sample buffer must be based on
- * a pixel buffer (not compressed data). In practice, it should be the video output of the camera on
- * an iOS device, not other arbitrary types of `CMSampleBuffer`s.
+ * a pixel buffer (not compressed data). In practice, it should be the video output of the
+ * camera on an iOS device, not other arbitrary types of `CMSampleBuffer`s.
  * @param orientation The display orientation of the image.
  * @return A new `MPPImage` instance with the given orientation and sample buffer as the source.
  * `nil` if the given sample buffer is `nil` or invalid.
  */
 - (nullable instancetype)initWithSampleBuffer:(CMSampleBufferRef)sampleBuffer
                                   orientation:(UIImageOrientation)orientation

@@ -13,3 +13,7 @@
 # limitations under the License.
 
 licenses(["notice"])
+
+exports_files([
+    "version_script.lds",
+])
@@ -34,12 +34,17 @@ android_library(
 # The native library of all MediaPipe audio tasks.
 cc_binary(
     name = "libmediapipe_tasks_audio_jni.so",
+    linkopts = [
+        "-Wl,--no-undefined",
+        "-Wl,--version-script,$(location //mediapipe/tasks/java:version_script.lds)",
+    ],
     linkshared = 1,
     linkstatic = 1,
     deps = [
         "//mediapipe/java/com/google/mediapipe/framework/jni:mediapipe_framework_jni",
         "//mediapipe/tasks/cc/audio/audio_classifier:audio_classifier_graph",
         "//mediapipe/tasks/cc/audio/audio_embedder:audio_embedder_graph",
+        "//mediapipe/tasks/java:version_script.lds",
         "//mediapipe/tasks/java/com/google/mediapipe/tasks/core/jni:model_resources_cache_jni",
     ],
 )
@@ -19,12 +19,17 @@ package(default_visibility = ["//visibility:public"])
 # The native library of all MediaPipe text tasks.
 cc_binary(
     name = "libmediapipe_tasks_text_jni.so",
+    linkopts = [
+        "-Wl,--no-undefined",
+        "-Wl,--version-script,$(location //mediapipe/tasks/java:version_script.lds)",
+    ],
     linkshared = 1,
     linkstatic = 1,
     deps = [
         "//mediapipe/java/com/google/mediapipe/framework/jni:mediapipe_framework_jni",
         "//mediapipe/tasks/cc/text/text_classifier:text_classifier_graph",
         "//mediapipe/tasks/cc/text/text_embedder:text_embedder_graph",
+        "//mediapipe/tasks/java:version_script.lds",
         "//mediapipe/tasks/java/com/google/mediapipe/tasks/core/jni:model_resources_cache_jni",
     ],
 )
@@ -36,6 +36,10 @@ android_library(
 # The native library of all MediaPipe vision tasks.
 cc_binary(
     name = "libmediapipe_tasks_vision_jni.so",
+    linkopts = [
+        "-Wl,--no-undefined",
+        "-Wl,--version-script,$(location //mediapipe/tasks/java:version_script.lds)",
+    ],
     linkshared = 1,
     linkstatic = 1,
     deps = [
@@ -46,6 +50,7 @@ cc_binary(
         "//mediapipe/tasks/cc/vision/image_embedder:image_embedder_graph",
         "//mediapipe/tasks/cc/vision/image_segmenter:image_segmenter_graph",
         "//mediapipe/tasks/cc/vision/object_detector:object_detector_graph",
+        "//mediapipe/tasks/java:version_script.lds",
         "//mediapipe/tasks/java/com/google/mediapipe/tasks/core/jni:model_resources_cache_jni",
     ],
 )
mediapipe/tasks/java/version_script.lds (24 lines, new file)

@@ -0,0 +1,24 @@
VERS_1.0 {
  # Export JNI and native C symbols.
  global:
    Java_com_google_mediapipe_framework_AndroidAssetUtil*;
    Java_com_google_mediapipe_framework_AndroidPacketCreator*;
    Java_com_google_mediapipe_framework_Graph_nativeAddMultiStreamCallback;
    Java_com_google_mediapipe_framework_Graph_nativeAddPacketToInputStream;
    Java_com_google_mediapipe_framework_Graph_nativeCloseAllPacketSources;
    Java_com_google_mediapipe_framework_Graph_nativeCreateGraph;
    Java_com_google_mediapipe_framework_Graph_nativeLoadBinaryGraph*;
    Java_com_google_mediapipe_framework_Graph_nativeMovePacketToInputStream;
    Java_com_google_mediapipe_framework_Graph_nativeReleaseGraph;
    Java_com_google_mediapipe_framework_Graph_nativeStartRunningGraph;
    Java_com_google_mediapipe_framework_Graph_nativeWaitUntilGraphDone;
    Java_com_google_mediapipe_framework_Graph_nativeWaitUntilGraphIdle;
    Java_com_google_mediapipe_framework_PacketCreator*;
    Java_com_google_mediapipe_framework_PacketGetter*;
    Java_com_google_mediapipe_framework_Packet*;
    Java_com_google_mediapipe_tasks_core_ModelResourcesCache*;

  # Hide everything else.
  local:
    *;
};
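The version script above keeps only the JNI entry points visible (the `global:` patterns, some with a trailing `*` wildcard) and hides every other symbol via `local: *`, which is what lets the `-Wl,--version-script` linkopts below shrink the task `.so` files. The pattern matching is glob-like; a rough illustrative sketch using Python's `fnmatch` (the symbol names below are invented, and ld's actual matching rules differ in detail):

```python
from fnmatch import fnmatchcase

# A couple of `global:` patterns from the script above.
EXPORTED_PATTERNS = [
    "Java_com_google_mediapipe_framework_PacketGetter*",
    "Java_com_google_mediapipe_framework_Graph_nativeCreateGraph",
]

def is_exported(symbol):
    # Visible if any `global:` pattern matches; otherwise `local: *` hides it.
    return any(fnmatchcase(symbol, pattern) for pattern in EXPORTED_PATTERNS)

print(is_exported("Java_com_google_mediapipe_framework_PacketGetter_nativeGetInt32"))  # True
print(is_exported("internal_helper_function"))  # False
```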
@@ -1,7 +1,7 @@
-diff --git a/tensorflow/core/lib/monitoring/percentile_sampler.cc b/tensorflow/core/lib/monitoring/percentile_sampler.cc
+diff --git a/tensorflow/tsl/lib/monitoring/percentile_sampler.cc b/tensorflow/tsl/lib/monitoring/percentile_sampler.cc
 index b7c22ae77ba..d0ba7b48b4b 100644
---- a/tensorflow/core/lib/monitoring/percentile_sampler.cc
-+++ b/tensorflow/core/lib/monitoring/percentile_sampler.cc
+--- a/tensorflow/tsl/lib/monitoring/percentile_sampler.cc
++++ b/tensorflow/tsl/lib/monitoring/percentile_sampler.cc
 @@ -29,7 +29,8 @@ namespace monitoring {
  void PercentileSamplerCell::Add(double sample) {
    uint64 nstime = EnvTime::NowNanos();
@@ -23,18 +23,6 @@ index b7c22ae77ba..d0ba7b48b4b 100644
      pct_samples.points.push_back(pct);
    }
  }
-diff --git a/tensorflow/core/platform/test.h b/tensorflow/core/platform/test.h
-index b598b6ee1e4..51c013a2d62 100644
---- a/tensorflow/core/platform/test.h
-+++ b/tensorflow/core/platform/test.h
-@@ -40,7 +40,6 @@ limitations under the License.
- // better error messages, more maintainable tests and more test coverage.
- #if !defined(PLATFORM_GOOGLE) && !defined(PLATFORM_GOOGLE_ANDROID) && \
-     !defined(PLATFORM_CHROMIUMOS)
--#include <gmock/gmock-generated-matchers.h>
- #include <gmock/gmock-matchers.h>
- #include <gmock/gmock-more-matchers.h>
- #endif
 diff --git a/third_party/eigen3/eigen_archive.BUILD b/third_party/eigen3/eigen_archive.BUILD
 index 5514f774c35..1a38f76f4e9 100644
 --- a/third_party/eigen3/eigen_archive.BUILD
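The `third_party/wasm_files.bzl` update that follows bumps each pinned artifact: Bazel's `http_file` downloads the URL and fails unless the payload's SHA-256 digest matches the `sha256` attribute. A minimal sketch of that integrity check, using local bytes in place of a real download:

```python
import hashlib

def check_sha256(payload: bytes, expected_hex: str) -> None:
    # The same check http_file performs: hash the payload, compare to the pin.
    actual = hashlib.sha256(payload).hexdigest()
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: got {actual}, want {expected_hex}")

payload = b"wasm bytes"
pin = hashlib.sha256(payload).hexdigest()
check_sha256(payload, pin)  # passes silently; a stale pin would raise
```

This is why every URL bump below must come with a matching `sha256` bump: the old digests would reject the regenerated files.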
third_party/wasm_files.bzl (48 lines, vendored)

@@ -12,72 +12,72 @@ def wasm_files():
 http_file(
     name = "com_google_mediapipe_wasm_audio_wasm_internal_js",
-    sha256 = "d4d205d08e3e1b09662a9a358d0107e8a8023827ba9b6982a3777bb6c040f936",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_internal.js?generation=1673996821002628"],
+    sha256 = "65139435bd64ff2f7791145e3b84b90200ba97edf78ea2a0feff7964dd9f5b9a",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_internal.js?generation=1675786135168186"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_audio_wasm_internal_wasm",
-    sha256 = "1b2ffe82b0a25d20188237a724a7cad68d068818a7738f91c69c782314f55965",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_internal.wasm?generation=1673996823772372"],
+    sha256 = "b0aa60df4388ae2adee9ddf8e1f37932518266e088ecd531756e16d147ef5f7b",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_internal.wasm?generation=1675786138391747"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_audio_wasm_nosimd_internal_js",
-    sha256 = "1f367c2d667628b178251aec7fd464327351570edac4549450b11fb82f5f0fd4",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_nosimd_internal.js?generation=1673996826132845"],
+    sha256 = "5e5d4975f5bf74b0d5f5601954ea221d73c4ee4f845e331a43244896ce0423de",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_nosimd_internal.js?generation=1675786141452578"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_audio_wasm_nosimd_internal_wasm",
-    sha256 = "35c6ad888c06025dba1f9c8edb70e6c7be7e94e45dc2c0236a2fcfe61991dc44",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_nosimd_internal.wasm?generation=1673996828935550"],
+    sha256 = "c2aed5747c85431b5c4f44947811bf19ca964a60ac3d2aab33e15612840da0a9",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/audio_wasm_nosimd_internal.wasm?generation=1675786144663772"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_text_wasm_internal_js",
-    sha256 = "68c0134e0b3cb986c3526cd645f74cc5a1f6ab19292276ca7d3558b89801e205",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_internal.js?generation=1673996831356232"],
+    sha256 = "14f408878d72139c81dafea6ca4ee4301d84ba5651ead9ac170f253dd3b0b6cd",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_internal.js?generation=1675786147103241"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_text_wasm_internal_wasm",
-    sha256 = "df82bb192ea852dc1bcc8f9f28fbd8c3d6b219dc4fec2b2a92451678d98ee1f0",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_internal.wasm?generation=1673996834657078"],
+    sha256 = "9807d302c5d020c2f49d1132ab9d9c717bcb9a18a01efa1b7993de1e9cab193b",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_internal.wasm?generation=1675786150390358"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_text_wasm_nosimd_internal_js",
-    sha256 = "de1a4aabefb2e42ae4fee68b7e762e328623a163257a7ddc72365fc2502bd090",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_nosimd_internal.js?generation=1673996837104551"],
+    sha256 = "5f25b455c989c80c86c4b4941118af8a4a82518eaebdb3d019bea674761160f9",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_nosimd_internal.js?generation=1675786153084313"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_text_wasm_nosimd_internal_wasm",
-    sha256 = "828dd1e73fa9478a97a62539117f92b813833ab35d37a986c466df15a8cfdc7b",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_nosimd_internal.wasm?generation=1673996840120504"],
+    sha256 = "ec66757749832ddf5e7d8754a002f19bc4f0ce7539fc86be502afda376cc2e47",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/text_wasm_nosimd_internal.wasm?generation=1675786156694332"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_vision_wasm_internal_js",
-    sha256 = "c146b68523c256d41132230e811fc224dafb6a0bce6fc318c29dad37dfac06de",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_internal.js?generation=1673996842448396"],
+    sha256 = "97783273ec64885e1e0c56152d3b87ea487f66be3a1dfa9d87d4550d01d852cc",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_internal.js?generation=1675786159557943"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_vision_wasm_internal_wasm",
-    sha256 = "8dbccaaf944ef1251cf78190450ab7074abea233e18ebb37d2c2ce0f18d14a0c",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_internal.wasm?generation=1673996845499070"],
+    sha256 = "f164caa065d57661cac31c36ebe1d3879d2618a9badee950312c682b7b5422d9",
+    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_internal.wasm?generation=1675786162975564"],
 )
 
 http_file(
     name = "com_google_mediapipe_wasm_vision_wasm_nosimd_internal_js",
-    sha256 = "705f9e3c2c62d12903ea2cadc22d2c328bc890f96fffc47b51f989471196ecea",
-    urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_nosimd_internal.js?generation=1673996847915731"],
+    sha256 = "781d0c8e49d8c231ca5ae9b70effc57c067936c56d4eea4f8e5c5fb68865e17f",
|
||||
)
|
||||
|
||||
http_file(
|
||||
name = "com_google_mediapipe_wasm_vision_wasm_nosimd_internal_wasm",
|
||||
sha256 = "c7ff6a7d8dc22380e2e8457a15a51b6bc1e70c6262fecca25825f54ecc593d1f",
|
||||
urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_nosimd_internal.wasm?generation=1673996850980344"],
|
||||
sha256 = "7636c15555e9ba715afd6f0c64d7150ba39a82fc1fca659799d05cdbaccfe396",
|
||||
urls = ["https://storage.googleapis.com/mediapipe-assets/wasm/vision_wasm_nosimd_internal.wasm?generation=1675786169149137"],
|
||||
)
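Each `http_file` entry above pins a downloaded WASM asset to a SHA-256 checksum, so Bazel rejects the fetch if the served file ever changes. When one of these assets is updated (as in this diff), the replacement checksum has to be computed over the new file before the WORKSPACE entry is edited. A minimal sketch of that computation in Python (the function name and chunk size are illustrative, not from the repo):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, streaming in 1 MiB chunks.

    Streaming avoids loading multi-megabyte .wasm files into memory at once.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting hex string is what goes into the `sha256 = "..."` attribute; the same value can be cross-checked with `sha256sum <file>` on the command line.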