diff --git a/docs/getting_started/faq.md b/docs/getting_started/faq.md
index 835b3429a..b7c24e6ec 100644
--- a/docs/getting_started/faq.md
+++ b/docs/getting_started/faq.md
@@ -60,7 +60,7 @@
 The second approach allows up to [`max_in_flight`] invocations of the
 packets from [`CalculatorBase::Process`] are automatically ordered by timestamp
 before they are passed along to downstream calculators.
 
-With either aproach, you must be aware that the calculator running in parallel
+With either approach, you must be aware that the calculator running in parallel
 cannot maintain internal state in the same way as a normal sequential
 calculator.
diff --git a/docs/getting_started/help.md b/docs/getting_started/help.md
index e483e5a16..3ba052741 100644
--- a/docs/getting_started/help.md
+++ b/docs/getting_started/help.md
@@ -38,8 +38,8 @@ If you open a GitHub issue, here is our policy:
 - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
 - **Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device**:
 - **Bazel version**:
-- **Android Studio, NDK, SDK versions (if issue is related to building in mobile dev enviroment)**:
-- **Xcode & Tulsi version (if issue is related to building in mobile dev enviroment)**:
+- **Android Studio, NDK, SDK versions (if issue is related to building in mobile dev environment)**:
+- **Xcode & Tulsi version (if issue is related to building in mobile dev environment)**:
 - **Exact steps to reproduce**:
 
 ### Describe the problem
diff --git a/docs/getting_started/javascript.md b/docs/getting_started/javascript.md
index 5a8b950c7..79269827b 100644
--- a/docs/getting_started/javascript.md
+++ b/docs/getting_started/javascript.md
@@ -44,7 +44,7 @@ snippets.
 
 | Browser | Platform                | Notes                                  |
 | ------- | ----------------------- | -------------------------------------- |
-| Chrome  | Android / Windows / Mac | Pixel 4 and older unsupported. Fuschia |
+| Chrome  | Android / Windows / Mac | Pixel 4 and older unsupported. Fuchsia |
 |         |                         | unsupported.                           |
 | Chrome  | iOS                     | Camera unavailable in Chrome on iOS.   |
 | Safari  | iPad/iPhone/Mac         | iOS and Safari on iPad / iPhone /      |
diff --git a/docs/getting_started/troubleshooting.md b/docs/getting_started/troubleshooting.md
index 39850c199..0da25497d 100644
--- a/docs/getting_started/troubleshooting.md
+++ b/docs/getting_started/troubleshooting.md
@@ -66,7 +66,7 @@ WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/gith
 ```
 
 usually indicates that Bazel fails to download necessary dependency repositories
-that MediaPipe needs. MedaiPipe has several dependency repositories that are
+that MediaPipe needs. MediaPipe has several dependency repositories that are
 hosted by Google sites. In some regions, you may need to set up a network proxy
 or use a VPN to access those resources. You may also need to append
 `--host_jvm_args "-DsocksProxyHost= -DsocksProxyPort="`
diff --git a/docs/solutions/face_mesh.md b/docs/solutions/face_mesh.md
index 399c6c610..84fbb22a5 100644
--- a/docs/solutions/face_mesh.md
+++ b/docs/solutions/face_mesh.md
@@ -143,7 +143,7 @@ about the model in this [paper](https://arxiv.org/abs/2006.10962).
 The [Face Landmark Model](#face-landmark-model) performs a single-camera face landmark
 detection in the screen coordinate space: the X- and Y- coordinates are
 normalized screen coordinates, while the Z coordinate is relative and is scaled
-as the X coodinate under the
+as the X coordinate under the
 [weak perspective projection camera model](https://en.wikipedia.org/wiki/3D_projection#Weak_perspective_projection).
 This format is well-suited for some applications, however it does not directly
 enable the full spectrum of augmented reality (AR) features like aligning a
diff --git a/docs/solutions/iris.md b/docs/solutions/iris.md
index 23fdf91ab..b8459a0e3 100644
--- a/docs/solutions/iris.md
+++ b/docs/solutions/iris.md
@@ -48,7 +48,7 @@ camera, in real-time, without the need for specialized hardware.
 Through use of iris landmarks, the solution is also able to determine the metric
 distance between the subject and the camera with relative error less than 10%.
 Note that iris tracking does not infer the location at which people are looking, nor does
-it provide any form of identity recognition. With the cross-platfrom capability
+it provide any form of identity recognition. With the cross-platform capability
 of the MediaPipe framework, MediaPipe Iris can run on most modern
 [mobile phones](#mobile), [desktops/laptops](#desktop) and even on the
 [web](#web).
@@ -109,7 +109,7 @@ You can also find more details in this
 ### Iris Landmark Model
 
 The iris model takes an image patch of the eye region and estimates both the eye
-landmarks (along the eyelid) and iris landmarks (along ths iris contour). You
+landmarks (along the eyelid) and iris landmarks (along the iris contour). You
 can find more details in this [paper](https://arxiv.org/abs/2006.11341).
 
 ![iris_tracking_eye_and_iris_landmarks.png](https://mediapipe.dev/images/mobile/iris_tracking_eye_and_iris_landmarks.png) |
diff --git a/docs/solutions/media_sequence.md b/docs/solutions/media_sequence.md
index 589325c37..5c479ea4c 100644
--- a/docs/solutions/media_sequence.md
+++ b/docs/solutions/media_sequence.md
@@ -95,7 +95,7 @@ process new data sets, in the documentation of
 
    MediaSequence uses SequenceExamples as the format of both inputs and
    outputs. Annotations are encoded as inputs in a SequenceExample of metadata
-   that defines the labels and the path to the cooresponding video file. This
+   that defines the labels and the path to the corresponding video file. This
   metadata is passed as input to the C++ `media_sequence_demo` binary, and
   the output is a SequenceExample filled with images and annotations ready
   for model training.
diff --git a/docs/solutions/objectron.md b/docs/solutions/objectron.md
index 9dffc7938..4ffb27bd0 100644
--- a/docs/solutions/objectron.md
+++ b/docs/solutions/objectron.md
@@ -180,7 +180,7 @@ and a
 The detection subgraph performs ML inference only once every few frames to
 reduce computation load, and decodes the output tensor to a FrameAnnotation that
 contains nine keypoints: the 3D bounding box's center and its eight vertices.
-The tracking subgraph runs every frame, using the box traker in
+The tracking subgraph runs every frame, using the box tracker in
 [MediaPipe Box Tracking](./box_tracking.md) to track the 2D box tightly
 enclosing the projection of the 3D bounding box, and lifts the tracked 2D
 keypoints to 3D with
@@ -623,7 +623,7 @@ z_ndc = 1 / Z
 
 ### Pixel Space
 
-In this API we set upper-left coner of an image as the origin of pixel
+In this API we set the upper-left corner of an image as the origin of pixel
 coordinate. One can convert from NDC to pixel space as follows:
 
 ```
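
Reviewer's note (not part of the patch): the final hunk stops at the opening fence of the NDC-to-pixel conversion snippet, which this diff does not include. As a minimal sketch of the conversion the surrounding text describes, assuming the conventional NDC range of [-1, 1] for x and y and the upper-left pixel origin stated in the hunk, the mapping could look like:

```python
# Sketch of the NDC -> pixel-space mapping described around the second
# objectron.md hunk. The exact snippet is truncated in the diff, so this is
# an illustration under assumed conventions, not the file's actual code.

def ndc_to_pixel(x_ndc: float, y_ndc: float,
                 image_width: int, image_height: int) -> tuple[float, float]:
    """Map normalized device coordinates in [-1, 1] to pixel coordinates."""
    x_pixel = (1 + x_ndc) / 2.0 * image_width
    # Flip y: NDC y points up, while pixel y grows downward from the
    # upper-left origin.
    y_pixel = (1 - y_ndc) / 2.0 * image_height
    return x_pixel, y_pixel

# The NDC center (0, 0) maps to the image center.
print(ndc_to_pixel(0.0, 0.0, 640, 480))  # -> (320.0, 240.0)
```

The y-axis flip is the step the upper-left-origin fix in the hunk is about: with that convention, NDC (-1, 1) lands at pixel (0, 0).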