Project import generated by Copybara.

GitOrigin-RevId: 852dfb05d450167899c0dd5ef7c45622a12e865b
MediaPipe Team 2020-02-10 13:27:13 -08:00 committed by Hadon Nash
parent d144e564d8
commit de4fbc10e6
100 changed files with 1664 additions and 628 deletions

View File

@ -1,7 +1,7 @@
![MediaPipe](mediapipe/docs/images/mediapipe_small.png?raw=true "MediaPipe logo")
=======================================================================
[MediaPipe](http://mediapipe.dev) is a framework for building multimodal (e.g., video, audio, any time series data), cross-platform (i.e., Android, iOS, web, edge devices) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
![Real-time Face Detection](mediapipe/docs/images/realtime_face_detection.gif)
@ -9,17 +9,17 @@
## ML Solutions in MediaPipe
* [Face Detection](mediapipe/docs/face_detection_mobile_gpu.md) [[Web Demo]](https://viz.mediapipe.dev/runner/demos/face_detection/face_detection.html)
* [Multi-hand Tracking](mediapipe/docs/multi_hand_tracking_mobile_gpu.md)
* [Hand Tracking](mediapipe/docs/hand_tracking_mobile_gpu.md) [[Web Demo]](https://viz.mediapipe.dev/runner/demos/hand_tracking/hand_tracking.html)
* [Hair Segmentation](mediapipe/docs/hair_segmentation_mobile_gpu.md) [[Web Demo]](https://viz.mediapipe.dev/runner/demos/hair_segmentation/hair_segmentation.html)
* [Object Detection](mediapipe/docs/object_detection_mobile_gpu.md)
* [Object Detection and Tracking](mediapipe/docs/object_tracking_mobile_gpu.md)
* [AutoFlip](mediapipe/docs/autoflip.md)
![hand_tracking](mediapipe/docs/images/mobile/hand_tracking_3d_android_gpu_small.gif)
![multi-hand_tracking](mediapipe/docs/images/mobile/multi_hand_tracking_android_gpu_small.gif)
![face_detection](mediapipe/docs/images/mobile/face_detection_android_gpu_small.gif)
![multi-hand_tracking](mediapipe/docs/images/mobile/multi_hand_tracking_android_gpu_small.gif)
![hand_tracking](mediapipe/docs/images/mobile/hand_tracking_3d_android_gpu_small.gif)
![hair_segmentation](mediapipe/docs/images/mobile/hair_segmentation_android_gpu_small.gif)
![object_tracking](mediapipe/docs/images/mobile/object_tracking_android_gpu_small.gif)
@ -29,6 +29,8 @@ Follow these [instructions](mediapipe/docs/install.md).
## Getting started
See mobile, desktop and Google Coral [examples](mediapipe/docs/examples.md).
Check out some web demos [[Edge detection]](https://viz.mediapipe.dev/runner/demos/edge_detection/edge_detection.html) [[Face detection]](https://viz.mediapipe.dev/runner/demos/face_detection/face_detection.html) [[Hand Tracking]](https://viz.mediapipe.dev/runner/demos/hand_tracking/hand_tracking.html)
## Documentation
[MediaPipe Read-the-Docs](https://mediapipe.readthedocs.io/) or [docs.mediapipe.dev](https://docs.mediapipe.dev)
@ -37,10 +39,12 @@ Check out the [Examples page](https://mediapipe.readthedocs.io/en/latest/example
## Visualizing MediaPipe graphs
A web-based visualizer is hosted on [viz.mediapipe.dev](https://viz.mediapipe.dev/). Please also see instructions [here](mediapipe/docs/visualizer.md).
## Videos
* [YouTube Channel](https://www.youtube.com/channel/UCObqmpuSMx-usADtL_qdMAw)
## Publications
* [Google Developer Blog: MediaPipe on the Web](https://mediapipe.page.link/webdevblog)
* [Google Developer Blog: Object Detection and Tracking using MediaPipe](https://mediapipe.page.link/objecttrackingblog)
* [On-Device, Real-Time Hand Tracking with MediaPipe](https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html)
* [MediaPipe: A Framework for Building Perception Pipelines](https://arxiv.org/abs/1906.08172)
@ -55,6 +59,9 @@ A web-based visualizer is hosted on [viz.mediapipe.dev](https://viz.mediapipe.de
* [Google Industry Workshop at ICIP 2019](http://2019.ieeeicip.org/?action=page4&id=14#Google) [Presentation](https://docs.google.com/presentation/d/e/2PACX-1vRIBBbO_LO9v2YmvbHHEt1cwyqH6EjDxiILjuT0foXy1E7g6uyh4CesB2DkkEwlRDO9_lWfuKMZx98T/pub?start=false&loop=false&delayms=3000&slide=id.g556cc1a659_0_5) on Sept 24 in Taipei, Taiwan
* [Open sourced at CVPR 2019](https://sites.google.com/corp/view/perception-cv4arvr/mediapipe) on June 17~20 in Long Beach, CA
## Community forum
* [Discuss](https://groups.google.com/forum/#!forum/mediapipe) - General community discussion around MediaPipe
## Alpha Disclaimer
MediaPipe is currently in alpha for v0.6. We are still making breaking API changes and expect to get to stable API by v1.0.

View File

@ -78,6 +78,14 @@ http_archive(
],
)
# easyexif
http_archive(
name = "easyexif",
url = "https://github.com/mayanklahiri/easyexif/archive/master.zip",
strip_prefix = "easyexif-master",
build_file = "@//third_party:easyexif.BUILD",
)
# libyuv
http_archive(
name = "libyuv",

View File

@ -86,6 +86,15 @@ proto_library(
],
)
proto_library(
name = "constant_side_packet_calculator_proto",
srcs = ["constant_side_packet_calculator.proto"],
visibility = ["//visibility:public"],
deps = [
"//mediapipe/framework:calculator_proto",
],
)
proto_library(
name = "clip_vector_size_calculator_proto",
srcs = ["clip_vector_size_calculator.proto"],
@ -173,6 +182,14 @@ mediapipe_cc_proto_library(
deps = [":gate_calculator_proto"], deps = [":gate_calculator_proto"],
) )
mediapipe_cc_proto_library(
name = "constant_side_packet_calculator_cc_proto",
srcs = ["constant_side_packet_calculator.proto"],
cc_deps = ["//mediapipe/framework:calculator_cc_proto"],
visibility = ["//visibility:public"],
deps = [":constant_side_packet_calculator_proto"],
)
cc_library(
name = "add_header_calculator",
srcs = ["add_header_calculator.cc"],
@ -960,3 +977,30 @@ cc_test(
"@com_google_absl//absl/memory", "@com_google_absl//absl/memory",
], ],
) )
cc_library(
name = "constant_side_packet_calculator",
srcs = ["constant_side_packet_calculator.cc"],
visibility = ["//visibility:public"],
deps = [
":constant_side_packet_calculator_cc_proto",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework:collection_item_id",
"//mediapipe/framework/port:ret_check",
"//mediapipe/framework/port:status",
],
alwayslink = 1,
)
cc_test(
name = "constant_side_packet_calculator_test",
srcs = ["constant_side_packet_calculator_test.cc"],
deps = [
":constant_side_packet_calculator",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework/port:gtest_main",
"//mediapipe/framework/port:parse_text_proto",
"//mediapipe/framework/port:status",
"@com_google_absl//absl/strings",
],
)

View File

@ -0,0 +1,116 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <string>
#include "mediapipe/calculators/core/constant_side_packet_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/collection_item_id.h"
#include "mediapipe/framework/port/canonical_errors.h"
#include "mediapipe/framework/port/ret_check.h"
#include "mediapipe/framework/port/status.h"
namespace mediapipe {
// Generates an output side packet or multiple output side packets according to
// the specified options.
//
// Example configs:
// node {
// calculator: "ConstantSidePacketCalculator"
// output_side_packet: "PACKET:packet"
// options: {
// [mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
// packet { int_value: 2 }
// }
// }
// }
//
// node {
// calculator: "ConstantSidePacketCalculator"
// output_side_packet: "PACKET:0:int_packet"
// output_side_packet: "PACKET:1:bool_packet"
// options: {
// [mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
// packet { int_value: 2 }
// packet { bool_value: true }
// }
// }
// }
class ConstantSidePacketCalculator : public CalculatorBase {
public:
static ::mediapipe::Status GetContract(CalculatorContract* cc) {
const auto& options = cc->Options().GetExtension(
::mediapipe::ConstantSidePacketCalculatorOptions::ext);
RET_CHECK_EQ(cc->OutputSidePackets().NumEntries(kPacketTag),
options.packet_size())
<< "Number of output side packets has to be same as number of packets "
"configured in options.";
int index = 0;
for (CollectionItemId id = cc->OutputSidePackets().BeginId(kPacketTag);
id != cc->OutputSidePackets().EndId(kPacketTag); ++id, ++index) {
const auto& packet_options = options.packet(index);
auto& packet = cc->OutputSidePackets().Get(id);
if (packet_options.has_int_value()) {
packet.Set<int>();
} else if (packet_options.has_float_value()) {
packet.Set<float>();
} else if (packet_options.has_bool_value()) {
packet.Set<bool>();
} else if (packet_options.has_string_value()) {
packet.Set<std::string>();
} else {
return ::mediapipe::InvalidArgumentError(
"None of supported values were specified in options.");
}
}
return ::mediapipe::OkStatus();
}
::mediapipe::Status Open(CalculatorContext* cc) override {
const auto& options = cc->Options().GetExtension(
::mediapipe::ConstantSidePacketCalculatorOptions::ext);
int index = 0;
for (CollectionItemId id = cc->OutputSidePackets().BeginId(kPacketTag);
id != cc->OutputSidePackets().EndId(kPacketTag); ++id, ++index) {
auto& packet = cc->OutputSidePackets().Get(id);
const auto& packet_options = options.packet(index);
if (packet_options.has_int_value()) {
packet.Set(MakePacket<int>(packet_options.int_value()));
} else if (packet_options.has_float_value()) {
packet.Set(MakePacket<float>(packet_options.float_value()));
} else if (packet_options.has_bool_value()) {
packet.Set(MakePacket<bool>(packet_options.bool_value()));
} else if (packet_options.has_string_value()) {
packet.Set(MakePacket<std::string>(packet_options.string_value()));
} else {
return ::mediapipe::InvalidArgumentError(
"None of supported values were specified in options.");
}
}
return ::mediapipe::OkStatus();
}
::mediapipe::Status Process(CalculatorContext* cc) override {
return ::mediapipe::OkStatus();
}
private:
static constexpr const char* kPacketTag = "PACKET";
};
REGISTER_CALCULATOR(ConstantSidePacketCalculator);
} // namespace mediapipe

View File

@ -0,0 +1,36 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto2";
package mediapipe;
import "mediapipe/framework/calculator.proto";
message ConstantSidePacketCalculatorOptions {
extend CalculatorOptions {
optional ConstantSidePacketCalculatorOptions ext = 291214597;
}
message ConstantSidePacket {
oneof value {
int32 int_value = 1;
float float_value = 2;
bool bool_value = 3;
string string_value = 4;
}
}
repeated ConstantSidePacket packet = 1;
}

View File

@ -0,0 +1,196 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <string>
#include "absl/strings/string_view.h"
#include "absl/strings/substitute.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/port/gmock.h"
#include "mediapipe/framework/port/gtest.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"
#include "mediapipe/framework/port/status_matchers.h"
namespace mediapipe {
template <typename T>
void DoTestSingleSidePacket(absl::string_view packet_spec,
const T& expected_value) {
static constexpr absl::string_view graph_config_template = R"(
node {
calculator: "ConstantSidePacketCalculator"
output_side_packet: "PACKET:packet"
options: {
[mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
packet $0
}
}
}
)";
CalculatorGraphConfig graph_config =
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(
absl::Substitute(graph_config_template, packet_spec));
CalculatorGraph graph;
MP_ASSERT_OK(graph.Initialize(graph_config));
MP_ASSERT_OK(graph.StartRun({}));
MP_ASSERT_OK(graph.WaitUntilIdle());
MP_ASSERT_OK(graph.GetOutputSidePacket("packet"));
auto actual_value =
graph.GetOutputSidePacket("packet").ValueOrDie().template Get<T>();
EXPECT_EQ(actual_value, expected_value);
}
TEST(ConstantSidePacketCalculatorTest, EveryPossibleType) {
DoTestSingleSidePacket("{ int_value: 2 }", 2);
DoTestSingleSidePacket("{ float_value: 6.5f }", 6.5f);
DoTestSingleSidePacket("{ bool_value: true }", true);
DoTestSingleSidePacket<std::string>(R"({ string_value: "str" })", "str");
}
TEST(ConstantSidePacketCalculatorTest, MultiplePackets) {
CalculatorGraphConfig graph_config =
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
node {
calculator: "ConstantSidePacketCalculator"
output_side_packet: "PACKET:0:int_packet"
output_side_packet: "PACKET:1:float_packet"
output_side_packet: "PACKET:2:bool_packet"
output_side_packet: "PACKET:3:string_packet"
output_side_packet: "PACKET:4:another_string_packet"
output_side_packet: "PACKET:5:another_int_packet"
options: {
[mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
packet { int_value: 256 }
packet { float_value: 0.5f }
packet { bool_value: false }
packet { string_value: "string" }
packet { string_value: "another string" }
packet { int_value: 128 }
}
}
}
)");
CalculatorGraph graph;
MP_ASSERT_OK(graph.Initialize(graph_config));
MP_ASSERT_OK(graph.StartRun({}));
MP_ASSERT_OK(graph.WaitUntilIdle());
MP_ASSERT_OK(graph.GetOutputSidePacket("int_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("int_packet").ValueOrDie().Get<int>(),
256);
MP_ASSERT_OK(graph.GetOutputSidePacket("float_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("float_packet").ValueOrDie().Get<float>(),
0.5f);
MP_ASSERT_OK(graph.GetOutputSidePacket("bool_packet"));
EXPECT_FALSE(
graph.GetOutputSidePacket("bool_packet").ValueOrDie().Get<bool>());
MP_ASSERT_OK(graph.GetOutputSidePacket("string_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("string_packet")
.ValueOrDie()
.Get<std::string>(),
"string");
MP_ASSERT_OK(graph.GetOutputSidePacket("another_string_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("another_string_packet")
.ValueOrDie()
.Get<std::string>(),
"another string");
MP_ASSERT_OK(graph.GetOutputSidePacket("another_int_packet"));
EXPECT_EQ(
graph.GetOutputSidePacket("another_int_packet").ValueOrDie().Get<int>(),
128);
}
TEST(ConstantSidePacketCalculatorTest, ProcessingPacketsWithCorrectTagOnly) {
CalculatorGraphConfig graph_config =
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
node {
calculator: "ConstantSidePacketCalculator"
output_side_packet: "PACKET:0:int_packet"
output_side_packet: "no_tag0"
output_side_packet: "PACKET:1:float_packet"
output_side_packet: "INCORRECT_TAG:0:name1"
output_side_packet: "PACKET:2:bool_packet"
output_side_packet: "PACKET:3:string_packet"
output_side_packet: "no_tag2"
output_side_packet: "INCORRECT_TAG:1:name2"
options: {
[mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
packet { int_value: 256 }
packet { float_value: 0.5f }
packet { bool_value: false }
packet { string_value: "string" }
}
}
}
)");
CalculatorGraph graph;
MP_ASSERT_OK(graph.Initialize(graph_config));
MP_ASSERT_OK(graph.StartRun({}));
MP_ASSERT_OK(graph.WaitUntilIdle());
MP_ASSERT_OK(graph.GetOutputSidePacket("int_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("int_packet").ValueOrDie().Get<int>(),
256);
MP_ASSERT_OK(graph.GetOutputSidePacket("float_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("float_packet").ValueOrDie().Get<float>(),
0.5f);
MP_ASSERT_OK(graph.GetOutputSidePacket("bool_packet"));
EXPECT_FALSE(
graph.GetOutputSidePacket("bool_packet").ValueOrDie().Get<bool>());
MP_ASSERT_OK(graph.GetOutputSidePacket("string_packet"));
EXPECT_EQ(graph.GetOutputSidePacket("string_packet")
.ValueOrDie()
.Get<std::string>(),
"string");
}
TEST(ConstantSidePacketCalculatorTest, IncorrectConfig_MoreOptionsThanPackets) {
CalculatorGraphConfig graph_config =
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
node {
calculator: "ConstantSidePacketCalculator"
output_side_packet: "PACKET:int_packet"
options: {
[mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
packet { int_value: 256 }
packet { float_value: 0.5f }
}
}
}
)");
CalculatorGraph graph;
EXPECT_FALSE(graph.Initialize(graph_config).ok());
}
TEST(ConstantSidePacketCalculatorTest, IncorrectConfig_MorePacketsThanOptions) {
CalculatorGraphConfig graph_config =
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
node {
calculator: "ConstantSidePacketCalculator"
output_side_packet: "PACKET:0:int_packet"
output_side_packet: "PACKET:1:float_packet"
options: {
[mediapipe.ConstantSidePacketCalculatorOptions.ext]: {
packet { int_value: 256 }
}
}
}
)");
CalculatorGraph graph;
EXPECT_FALSE(graph.Initialize(graph_config).ok());
}
} // namespace mediapipe

View File

@ -17,6 +17,12 @@
#include <memory>
namespace {
// Reflect an integer against the lower and upper bound of an interval.
int64 ReflectBetween(int64 ts, int64 ts_min, int64 ts_max) {
if (ts < ts_min) return 2 * ts_min - ts - 1;
if (ts >= ts_max) return 2 * ts_max - ts - 1;
return ts;
}
// Creates a secure random number generator for use in ProcessWithJitter.
// If no secure random number generator can be constructed, the jitter
@ -82,6 +88,7 @@ TimestampDiff TimestampDiffFromSeconds(double seconds) {
flush_last_packet_ = resampler_options.flush_last_packet();
jitter_ = resampler_options.jitter();
jitter_with_reflection_ = resampler_options.jitter_with_reflection();
input_data_id_ = cc->Inputs().GetId("DATA", 0);
if (!input_data_id_.IsValid()) {
@ -112,6 +119,8 @@ TimestampDiff TimestampDiffFromSeconds(double seconds) {
<< Timestamp::kTimestampUnitsPerSecond;
frame_time_usec_ = static_cast<int64>(1000000.0 / frame_rate_);
jitter_usec_ = static_cast<int64>(1000000.0 * jitter_ / frame_rate_);
RET_CHECK_LE(jitter_usec_, frame_time_usec_);
video_header_.frame_rate = frame_rate_;
@ -188,12 +197,32 @@ TimestampDiff TimestampDiffFromSeconds(double seconds) {
void PacketResamplerCalculator::InitializeNextOutputTimestampWithJitter() {
next_output_timestamp_min_ = first_timestamp_;
if (jitter_with_reflection_) {
next_output_timestamp_ =
first_timestamp_ + random_->UnbiasedUniform64(frame_time_usec_);
return;
}
next_output_timestamp_ =
first_timestamp_ + frame_time_usec_ * random_->RandFloat();
}
void PacketResamplerCalculator::UpdateNextOutputTimestampWithJitter() {
packet_reservoir_->Clear();
if (jitter_with_reflection_) {
next_output_timestamp_min_ += frame_time_usec_;
Timestamp next_output_timestamp_max_ =
next_output_timestamp_min_ + frame_time_usec_;
next_output_timestamp_ += frame_time_usec_ +
random_->UnbiasedUniform64(2 * jitter_usec_ + 1) -
jitter_usec_;
next_output_timestamp_ = Timestamp(ReflectBetween(
next_output_timestamp_.Value(), next_output_timestamp_min_.Value(),
next_output_timestamp_max_.Value()));
CHECK_GE(next_output_timestamp_, next_output_timestamp_min_);
CHECK_LT(next_output_timestamp_, next_output_timestamp_max_);
return;
}
packet_reservoir_->Disable();
next_output_timestamp_ +=
frame_time_usec_ *
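An illustrative aside on the reflection step (numbers chosen here for clarity; they are not taken from the code): with ts_min = 0 and ts_max = 100, ReflectBetween maps an overshooting candidate of 130 to 2 * 100 - 130 - 1 = 69 and an undershooting candidate of -20 to 2 * 0 - (-20) - 1 = 19, mirroring any jittered timestamp that falls outside [ts_min, ts_max) back into the interval. This is how UpdateNextOutputTimestampWithJitter keeps next_output_timestamp_ inside [next_output_timestamp_min_, next_output_timestamp_min_ + frame_time_usec_).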

View File

@ -49,6 +49,38 @@ class PacketReservoir {
// out of a stream. Given a desired frame rate, packets are going to be
// removed or added to achieve it.
//
// If jitter_ is specified:
// - The first packet is chosen randomly (uniform distribution) among frames
// that correspond to timestamps [0, 1/frame_rate). Let the chosen packet
// correspond to timestamp t.
// - The next packet is chosen randomly (uniform distribution) among frames
// that correspond to [t+(1-jitter)/frame_rate, t+(1+jitter)/frame_rate].
// - if jitter_with_reflection_ is true, the timestamp will be reflected
// against the boundaries of [t_0 + (k-1)/frame_rate, t_0 + k/frame_rate)
// so that its marginal distribution is uniform within this interval.
// In the formula, t_0 is the timestamp of the first sampled
// packet, and the k is the packet index.
// See paper (https://arxiv.org/abs/2002.01147) for details.
// - t is updated and the process is repeated.
// - Note that seed is specified as input side packet for reproducibility of
// the resampling. For Cloud ML Video Intelligence API, the hash of the
// input video should serve this purpose. For YouTube, either video ID or
// content hex ID of the input video should do.
//
// If jitter_ is not specified:
// - The first packet defines the first_timestamp of the output stream,
// so it is always emitted.
// - If more packets are emitted, they will have timestamp equal to
// round(first_timestamp + k * period) , where k is a positive
// integer and the period is defined by the frame rate.
// Example: first_timestamp=0, fps=30, then the output stream
// will have timestamps: 0, 33333, 66667, 100000, etc...
// - The packets selected for the output stream are the ones closer
// to the exact middle point (33333.33, 66666.67 in our previous
// example). In case of ties, later packets are chosen.
// - 'Empty' periods happen when there are no packets for a long time
// (greater than a period). In this case, we send a copy of the last
// packet received before the empty period.
// The jitter feature is disabled by default. To enable it, you need to
// implement CreateSecureRandom(const std::string&).
//
@ -139,7 +171,12 @@ class PacketResamplerCalculator : public CalculatorBase {
// Jitter-related variables.
std::unique_ptr<RandomBase> random_;
double jitter_ = 0.0;
bool jitter_with_reflection_;
int64 jitter_usec_;
Timestamp next_output_timestamp_;
// If jitter_with_reflection_ is true, next_output_timestamp_ will be
// kept within the interval
// [next_output_timestamp_min_, next_output_timestamp_min_ + frame_time_usec_)
Timestamp next_output_timestamp_min_;
// If specified, output timestamps are aligned with base_timestamp.

View File

@ -66,6 +66,7 @@ message PacketResamplerCalculatorOptions {
// pseudo-random number generator does its job and the number of frames is
// sufficiently large, the average frame rate will be close to this value.
optional double jitter = 4;
optional bool jitter_with_reflection = 9 [default = false];
// If specified, output timestamps are aligned with base_timestamp.
// Otherwise, they are aligned with the first input timestamp.
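To show where the new field sits in a graph, here is a hedged configuration sketch, not a config taken from this commit. The `frame_rate` and `jitter` fields are assumed to be the pre-existing options this calculator already reads (`frame_time_usec_` and `jitter_usec_` are derived from them in the .cc change above); the `DATA` stream tag comes from the `GetId("DATA", 0)` call there, while the `SEED` side-packet tag for the required seed (see the header comment about reproducible resampling) is an assumption.

```
node {
  calculator: "PacketResamplerCalculator"
  input_stream: "DATA:input_frames"
  # Seed side packet: the SEED tag is assumed here; a fixed seed makes the
  # jittered sampling reproducible, as the header comment recommends.
  input_side_packet: "SEED:resampling_seed"
  output_stream: "DATA:resampled_frames"
  options: {
    [mediapipe.PacketResamplerCalculatorOptions.ext] {
      frame_rate: 30.0              # assumed existing option: target output rate
      jitter: 0.25                  # existing option (field 4 above)
      jitter_with_reflection: true  # option added in this commit (field 9)
    }
  }
}
```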

View File

@ -332,6 +332,7 @@ cc_library(
cc_library(
name = "image_cropping_calculator",
srcs = ["image_cropping_calculator.cc"],
hdrs = ["image_cropping_calculator.h"],
copts = select({
"//mediapipe:apple": [
"-x objective-c++",
@ -371,6 +372,22 @@ cc_library(
alwayslink = 1,
)
cc_test(
name = "image_cropping_calculator_test",
srcs = ["image_cropping_calculator_test.cc"],
deps = [
":image_cropping_calculator",
":image_cropping_calculator_cc_proto",
"//mediapipe/framework:calculator_framework",
"//mediapipe/framework/formats:rect_cc_proto",
"//mediapipe/framework/port:gtest_main",
"//mediapipe/framework/port:parse_text_proto",
"//mediapipe/framework/port:status",
"//mediapipe/framework/tool:tag_map",
"//mediapipe/framework/tool:tag_map_helper",
],
)
cc_library(
name = "luminance_calculator",
srcs = ["luminance_calculator.cc"],

View File

@ -12,10 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#include "mediapipe/calculators/image/image_cropping_calculator.h"
#include <cmath>
#include "mediapipe/calculators/image/image_cropping_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/image_frame.h" #include "mediapipe/framework/formats/image_frame.h"
#include "mediapipe/framework/formats/image_frame_opencv.h" #include "mediapipe/framework/formats/image_frame_opencv.h"
#include "mediapipe/framework/formats/rect.pb.h" #include "mediapipe/framework/formats/rect.pb.h"
@ -25,7 +25,6 @@
#include "mediapipe/framework/port/status.h" #include "mediapipe/framework/port/status.h"
#if !defined(MEDIAPIPE_DISABLE_GPU) #if !defined(MEDIAPIPE_DISABLE_GPU)
#include "mediapipe/gpu/gl_calculator_helper.h"
#include "mediapipe/gpu/gl_simple_shaders.h" #include "mediapipe/gpu/gl_simple_shaders.h"
#include "mediapipe/gpu/gpu_buffer.h" #include "mediapipe/gpu/gpu_buffer.h"
#include "mediapipe/gpu/shader_util.h" #include "mediapipe/gpu/shader_util.h"
@ -52,62 +51,6 @@ constexpr char kWidthTag[] = "WIDTH";
} // namespace
// Crops the input texture to the given rectangle region. The rectangle can
// be at arbitrary location on the image with rotation. If there's rotation, the
// output texture will have the size of the input rectangle. The rotation should
// be in radian, see rect.proto for detail.
//
// Input:
// One of the following two tags:
// IMAGE - ImageFrame representing the input image.
// IMAGE_GPU - GpuBuffer representing the input image.
// One of the following two tags (optional if WIDTH/HEIGHT is specified):
// RECT - A Rect proto specifying the width/height and location of the
// cropping rectangle.
// NORM_RECT - A NormalizedRect proto specifying the width/height and location
// of the cropping rectangle in normalized coordinates.
// Alternative tags to RECT (optional if RECT/NORM_RECT is specified):
// WIDTH - The desired width of the output cropped image,
// based on image center
// HEIGHT - The desired height of the output cropped image,
// based on image center
//
// Output:
// One of the following two tags:
// IMAGE - Cropped ImageFrame
// IMAGE_GPU - Cropped GpuBuffer.
//
// Note: input_stream values take precedence over options defined in the graph.
//
class ImageCroppingCalculator : public CalculatorBase {
public:
ImageCroppingCalculator() = default;
~ImageCroppingCalculator() override = default;
static ::mediapipe::Status GetContract(CalculatorContract* cc);
::mediapipe::Status Open(CalculatorContext* cc) override;
::mediapipe::Status Process(CalculatorContext* cc) override;
::mediapipe::Status Close(CalculatorContext* cc) override;
private:
::mediapipe::Status RenderCpu(CalculatorContext* cc);
::mediapipe::Status RenderGpu(CalculatorContext* cc);
::mediapipe::Status InitGpu(CalculatorContext* cc);
void GlRender();
void GetOutputDimensions(CalculatorContext* cc, int src_width, int src_height,
int* dst_width, int* dst_height);
mediapipe::ImageCroppingCalculatorOptions options_;
bool use_gpu_ = false;
// Output texture corners (4) after transoformation in normalized coordinates.
float transformed_points_[8];
#if !defined(MEDIAPIPE_DISABLE_GPU)
bool gpu_initialized_ = false;
mediapipe::GlCalculatorHelper gpu_helper_;
GLuint program_ = 0;
#endif // !MEDIAPIPE_DISABLE_GPU
};
REGISTER_CALCULATOR(ImageCroppingCalculator);
::mediapipe::Status ImageCroppingCalculator::GetContract(
@ -132,7 +75,11 @@ REGISTER_CALCULATOR(ImageCroppingCalculator);
}
#endif // !MEDIAPIPE_DISABLE_GPU
RET_CHECK(cc->Inputs().HasTag(kRectTag) ^ cc->Inputs().HasTag(kNormRectTag) ^
(cc->Options<mediapipe::ImageCroppingCalculatorOptions>()
.has_norm_width() &&
cc->Options<mediapipe::ImageCroppingCalculatorOptions>()
.has_norm_height()));
if (cc->Inputs().HasTag(kRectTag)) {
cc->Inputs().Tag(kRectTag).Set<Rect>();
}
@ -222,41 +169,8 @@ REGISTER_CALCULATOR(ImageCroppingCalculator);
const auto& input_img = cc->Inputs().Tag(kImageTag).Get<ImageFrame>();
cv::Mat input_mat = formats::MatView(&input_img);
auto [target_width, target_height, rect_center_x, rect_center_y, rotation] =
GetCropSpecs(cc, input_img.Width(), input_img.Height());
float rotation = 0.0f;
int target_width = input_img.Width();
int target_height = input_img.Height();
if (cc->Inputs().HasTag(kRectTag)) {
const auto& rect = cc->Inputs().Tag(kRectTag).Get<Rect>();
if (rect.width() > 0 && rect.height() > 0 && rect.x_center() >= 0 &&
rect.y_center() >= 0) {
rect_center_x = rect.x_center();
rect_center_y = rect.y_center();
target_width = rect.width();
target_height = rect.height();
rotation = rect.rotation();
}
} else if (cc->Inputs().HasTag(kNormRectTag)) {
const auto& rect = cc->Inputs().Tag(kNormRectTag).Get<NormalizedRect>();
if (rect.width() > 0.0 && rect.height() > 0.0 && rect.x_center() >= 0.0 &&
rect.y_center() >= 0.0) {
rect_center_x = std::round(rect.x_center() * input_img.Width());
rect_center_y = std::round(rect.y_center() * input_img.Height());
target_width = std::round(rect.width() * input_img.Width());
target_height = std::round(rect.height() * input_img.Height());
rotation = rect.rotation();
}
} else {
if (cc->Inputs().HasTag(kWidthTag) && cc->Inputs().HasTag(kHeightTag)) {
target_width = cc->Inputs().Tag(kWidthTag).Get<int>();
target_height = cc->Inputs().Tag(kHeightTag).Get<int>();
} else if (options_.has_width() && options_.has_height()) {
target_width = options_.width();
target_height = options_.height();
}
rotation = options_.rotation();
}
const cv::RotatedRect min_rect(cv::Point2f(rect_center_x, rect_center_y),
cv::Size2f(target_width, target_height),
@ -433,46 +347,8 @@ void ImageCroppingCalculator::GetOutputDimensions(CalculatorContext* cc,
int src_width, int src_height,
int* dst_width,
int* dst_height) {
auto [crop_width, crop_height, x_center, y_center, rotation] =
GetCropSpecs(cc, src_width, src_height);
int crop_height = src_height;
// Get the center of cropping box. Default is the at the center.
int x_center = src_width / 2;
int y_center = src_height / 2;
// Get the rotation of the cropping box.
float rotation = 0.0f;
if (cc->Inputs().HasTag(kRectTag)) {
const auto& rect = cc->Inputs().Tag(kRectTag).Get<Rect>();
// Only use the rect if it is valid.
if (rect.width() > 0 && rect.height() > 0 && rect.x_center() >= 0 &&
rect.y_center() >= 0) {
x_center = rect.x_center();
y_center = rect.y_center();
crop_width = rect.width();
crop_height = rect.height();
rotation = rect.rotation();
}
} else if (cc->Inputs().HasTag(kNormRectTag)) {
const auto& rect = cc->Inputs().Tag(kNormRectTag).Get<NormalizedRect>();
// Only use the rect if it is valid.
if (rect.width() > 0.0 && rect.height() > 0.0 && rect.x_center() >= 0.0 &&
rect.y_center() >= 0.0) {
x_center = std::round(rect.x_center() * src_width);
y_center = std::round(rect.y_center() * src_height);
crop_width = std::round(rect.width() * src_width);
crop_height = std::round(rect.height() * src_height);
rotation = rect.rotation();
}
} else {
if (cc->Inputs().HasTag(kWidthTag) && cc->Inputs().HasTag(kHeightTag)) {
crop_width = cc->Inputs().Tag(kWidthTag).Get<int>();
crop_height = cc->Inputs().Tag(kHeightTag).Get<int>();
} else if (options_.has_width() && options_.has_height()) {
crop_width = options_.width();
crop_height = options_.height();
}
rotation = options_.rotation();
}
const float half_width = crop_width / 2.0f;
const float half_height = crop_height / 2.0f;
@ -508,4 +384,82 @@ void ImageCroppingCalculator::GetOutputDimensions(CalculatorContext* cc,
*dst_height = std::max(1, height);
}
RectSpec ImageCroppingCalculator::GetCropSpecs(const CalculatorContext* cc,
int src_width, int src_height) {
// Get the size of the cropping box.
int crop_width = src_width;
int crop_height = src_height;
// Get the center of cropping box. Default is at the center.
int x_center = src_width / 2;
int y_center = src_height / 2;
// Get the rotation of the cropping box.
float rotation = 0.0f;
// Get the normalized width and height if specified by the inputs or options.
float normalized_width = 0.0f;
float normalized_height = 0.0f;
mediapipe::ImageCroppingCalculatorOptions options =
cc->Options<mediapipe::ImageCroppingCalculatorOptions>();
// width/height, norm_width/norm_height from input streams take precedence.
if (cc->Inputs().HasTag(kRectTag)) {
const auto& rect = cc->Inputs().Tag(kRectTag).Get<Rect>();
// Only use the rect if it is valid.
if (rect.width() > 0 && rect.height() > 0 && rect.x_center() >= 0 &&
rect.y_center() >= 0) {
x_center = rect.x_center();
y_center = rect.y_center();
crop_width = rect.width();
crop_height = rect.height();
rotation = rect.rotation();
}
} else if (cc->Inputs().HasTag(kNormRectTag)) {
const auto& norm_rect =
cc->Inputs().Tag(kNormRectTag).Get<NormalizedRect>();
if (norm_rect.width() > 0.0 && norm_rect.height() > 0.0) {
normalized_width = norm_rect.width();
normalized_height = norm_rect.height();
x_center = std::round(norm_rect.x_center() * src_width);
y_center = std::round(norm_rect.y_center() * src_height);
rotation = norm_rect.rotation();
}
} else if (cc->Inputs().HasTag(kWidthTag) &&
cc->Inputs().HasTag(kHeightTag)) {
crop_width = cc->Inputs().Tag(kWidthTag).Get<int>();
crop_height = cc->Inputs().Tag(kHeightTag).Get<int>();
} else if (options.has_width() && options.has_height()) {
crop_width = options.width();
crop_height = options.height();
} else if (options.has_norm_width() && options.has_norm_height()) {
normalized_width = options.norm_width();
normalized_height = options.norm_height();
}
// Get the crop width and height from the normalized width and height.
if (normalized_width > 0 && normalized_height > 0) {
crop_width = std::round(normalized_width * src_width);
crop_height = std::round(normalized_height * src_height);
}
// Rotation and center values from input streams take precedence, so only
// look at those values in the options if kRectTag and kNormRectTag are not
// present from the inputs.
if (!cc->Inputs().HasTag(kRectTag) && !cc->Inputs().HasTag(kNormRectTag)) {
if (options.has_norm_center_x() && options.has_norm_center_y()) {
x_center = std::round(options.norm_center_x() * src_width);
y_center = std::round(options.norm_center_y() * src_height);
}
if (options.has_rotation()) {
rotation = options.rotation();
}
}
return {
.width = crop_width,
.height = crop_height,
.center_x = x_center,
.center_y = y_center,
.rotation = rotation,
};
}
} // namespace mediapipe

View File

@ -0,0 +1,86 @@
#ifndef MEDIAPIPE_CALCULATORS_IMAGE_IMAGE_CROPPING_CALCULATOR_H_
#define MEDIAPIPE_CALCULATORS_IMAGE_IMAGE_CROPPING_CALCULATOR_H_
#include "mediapipe/calculators/image/image_cropping_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#if !defined(MEDIAPIPE_DISABLE_GPU)
#include "mediapipe/gpu/gl_calculator_helper.h"
#endif // !MEDIAPIPE_DISABLE_GPU
// Crops the input texture to the given rectangle region. The rectangle can
// be at arbitrary location on the image with rotation. If there's rotation, the
// output texture will have the size of the input rectangle. The rotation should
// be in radian, see rect.proto for detail.
//
// Input:
// One of the following two tags:
// IMAGE - ImageFrame representing the input image.
// IMAGE_GPU - GpuBuffer representing the input image.
// One of the following two tags (optional if WIDTH/HEIGHT is specified):
// RECT - A Rect proto specifying the width/height and location of the
// cropping rectangle.
// NORM_RECT - A NormalizedRect proto specifying the width/height and location
// of the cropping rectangle in normalized coordinates.
// Alternative tags to RECT (optional if RECT/NORM_RECT is specified):
// WIDTH - The desired width of the output cropped image,
// based on image center
// HEIGHT - The desired height of the output cropped image,
// based on image center
//
// Output:
// One of the following two tags:
// IMAGE - Cropped ImageFrame
// IMAGE_GPU - Cropped GpuBuffer.
//
// Note: input_stream values take precedence over options defined in the graph.
//
namespace mediapipe {
struct RectSpec {
int width;
int height;
int center_x;
int center_y;
float rotation;
bool operator==(const RectSpec& rect) const {
return (width == rect.width && height == rect.height &&
center_x == rect.center_x && center_y == rect.center_y &&
rotation == rect.rotation);
}
};
class ImageCroppingCalculator : public CalculatorBase {
public:
ImageCroppingCalculator() = default;
~ImageCroppingCalculator() override = default;
static ::mediapipe::Status GetContract(CalculatorContract* cc);
::mediapipe::Status Open(CalculatorContext* cc) override;
::mediapipe::Status Process(CalculatorContext* cc) override;
::mediapipe::Status Close(CalculatorContext* cc) override;
static RectSpec GetCropSpecs(const CalculatorContext* cc, int src_width,
int src_height);
private:
::mediapipe::Status RenderCpu(CalculatorContext* cc);
::mediapipe::Status RenderGpu(CalculatorContext* cc);
::mediapipe::Status InitGpu(CalculatorContext* cc);
void GlRender();
void GetOutputDimensions(CalculatorContext* cc, int src_width, int src_height,
int* dst_width, int* dst_height);
mediapipe::ImageCroppingCalculatorOptions options_;
bool use_gpu_ = false;
// Output texture corners (4) after transformation in normalized coordinates.
float transformed_points_[8];
#if !defined(MEDIAPIPE_DISABLE_GPU)
bool gpu_initialized_ = false;
mediapipe::GlCalculatorHelper gpu_helper_;
GLuint program_ = 0;
#endif // !MEDIAPIPE_DISABLE_GPU
};
} // namespace mediapipe
#endif // MEDIAPIPE_CALCULATORS_IMAGE_IMAGE_CROPPING_CALCULATOR_H_
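To make the documented contract concrete, here is a hedged usage sketch rather than a graph taken from this commit: a CPU node that crops each incoming ImageFrame to a region supplied on a NORM_RECT stream. The IMAGE and NORM_RECT tags are the ones listed in the comment above; the stream names are placeholders.

```
node {
  calculator: "ImageCroppingCalculator"
  input_stream: "IMAGE:input_video"             # ImageFrame to crop (CPU path)
  input_stream: "NORM_RECT:region_of_interest"  # NormalizedRect with crop size, center, rotation
  output_stream: "IMAGE:cropped_video"          # cropped ImageFrame
}
```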

View File

@ -30,4 +30,14 @@ message ImageCroppingCalculatorOptions {
// Rotation angle is counter-clockwise in radian.
optional float rotation = 3 [default = 0.0];
// Normalized width and height of the output rect. Value is within [0, 1].
optional float norm_width = 4;
optional float norm_height = 5;
// Normalized location of the center of the output
// rectangle in image coordinates. Value is within [0, 1].
// The (0, 0) point is at the (top, left) corner.
optional float norm_center_x = 6 [default = 0];
optional float norm_center_y = 7 [default = 0];
}

View File

@ -0,0 +1,216 @@
// Copyright 2020 The MediaPipe Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "mediapipe/calculators/image/image_cropping_calculator.h"
#include <cmath>
#include <memory>
#include "mediapipe/calculators/image/image_cropping_calculator.pb.h"
#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/rect.pb.h"
#include "mediapipe/framework/port/gtest.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status_matchers.h"
#include "mediapipe/framework/tool/tag_map.h"
#include "mediapipe/framework/tool/tag_map_helper.h"
namespace mediapipe {
namespace {
constexpr int input_width = 100;
constexpr int input_height = 100;
constexpr char kRectTag[] = "RECT";
constexpr char kHeightTag[] = "HEIGHT";
constexpr char kWidthTag[] = "WIDTH";
// Test normal case, where norm_width and norm_height in options are set.
TEST(ImageCroppingCalculatorTest, GetCroppingDimensionsNormal) {
auto calculator_node =
ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig::Node>(
R"(
calculator: "ImageCroppingCalculator"
input_stream: "IMAGE_GPU:input_frames"
output_stream: "IMAGE_GPU:cropped_output_frames"
options: {
[mediapipe.ImageCroppingCalculatorOptions.ext] {
norm_width: 0.6
norm_height: 0.6
norm_center_x: 0.5
norm_center_y: 0.5
rotation: 0.3
}
}
)");
auto calculator_state =
CalculatorState("Node", 0, "Calculator", calculator_node, nullptr);
auto cc =
CalculatorContext(&calculator_state, tool::CreateTagMap({}).ValueOrDie(),
tool::CreateTagMap({}).ValueOrDie());
RectSpec expectRect = {
.width = 60,
.height = 60,
.center_x = 50,
.center_y = 50,
.rotation = 0.3,
};
EXPECT_EQ(
ImageCroppingCalculator::GetCropSpecs(&cc, input_width, input_height),
expectRect);
} // TEST
// Test when (width height) + (norm_width norm_height) are set in options.
// width and height should take precedence.
TEST(ImageCroppingCalculatorTest, RedundantSpecInOptions) {
auto calculator_node =
ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig::Node>(
R"(
calculator: "ImageCroppingCalculator"
input_stream: "IMAGE_GPU:input_frames"
output_stream: "IMAGE_GPU:cropped_output_frames"
options: {
[mediapipe.ImageCroppingCalculatorOptions.ext] {
width: 50
height: 50
norm_width: 0.6
norm_height: 0.6
norm_center_x: 0.5
norm_center_y: 0.5
rotation: 0.3
}
}
)");
auto calculator_state =
CalculatorState("Node", 0, "Calculator", calculator_node, nullptr);
auto cc =
CalculatorContext(&calculator_state, tool::CreateTagMap({}).ValueOrDie(),
tool::CreateTagMap({}).ValueOrDie());
RectSpec expectRect = {
.width = 50,
.height = 50,
.center_x = 50,
.center_y = 50,
.rotation = 0.3,
};
EXPECT_EQ(
ImageCroppingCalculator::GetCropSpecs(&cc, input_width, input_height),
expectRect);
} // TEST
// Test when WIDTH HEIGHT are set from input stream,
// and options has norm_width/height set.
// WIDTH HEIGHT from input stream should take precedence.
TEST(ImageCroppingCalculatorTest, RedundantSpectWithInputStream) {
auto calculator_node =
ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig::Node>(
R"(
calculator: "ImageCroppingCalculator"
input_stream: "IMAGE_GPU:input_frames"
input_stream: "WIDTH:crop_width"
input_stream: "HEIGHT:crop_height"
output_stream: "IMAGE_GPU:cropped_output_frames"
options: {
[mediapipe.ImageCroppingCalculatorOptions.ext] {
width: 50
height: 50
norm_width: 0.6
norm_height: 0.6
norm_center_x: 0.5
norm_center_y: 0.5
rotation: 0.3
}
}
)");
auto calculator_state =
CalculatorState("Node", 0, "Calculator", calculator_node, nullptr);
auto inputTags = tool::CreateTagMap({
"HEIGHT:0:crop_height",
"WIDTH:0:crop_width",
})
.ValueOrDie();
auto cc = CalculatorContext(&calculator_state, inputTags,
tool::CreateTagMap({}).ValueOrDie());
auto& inputs = cc.Inputs();
inputs.Tag(kHeightTag).Value() = MakePacket<int>(1);
inputs.Tag(kWidthTag).Value() = MakePacket<int>(1);
RectSpec expectRect = {
.width = 1,
.height = 1,
.center_x = 50,
.center_y = 50,
.rotation = 0.3,
};
EXPECT_EQ(
ImageCroppingCalculator::GetCropSpecs(&cc, input_width, input_height),
expectRect);
} // TEST
// Test when RECT is set from input stream,
// and options has norm_width/height set.
// RECT from input stream should take precedence.
TEST(ImageCroppingCalculatorTest, RedundantSpecWithInputStream) {
auto calculator_node =
ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig::Node>(
R"(
calculator: "ImageCroppingCalculator"
input_stream: "IMAGE_GPU:input_frames"
input_stream: "RECT:rect"
output_stream: "IMAGE_GPU:cropped_output_frames"
options: {
[mediapipe.ImageCroppingCalculatorOptions.ext] {
width: 50
height: 50
norm_width: 0.6
norm_height: 0.6
norm_center_x: 0.5
norm_center_y: 0.5
rotation: 0.3
}
}
)");
auto calculator_state =
CalculatorState("Node", 0, "Calculator", calculator_node, nullptr);
auto inputTags = tool::CreateTagMap({
"RECT:0:rect",
})
.ValueOrDie();
auto cc = CalculatorContext(&calculator_state, inputTags,
tool::CreateTagMap({}).ValueOrDie());
auto& inputs = cc.Inputs();
mediapipe::Rect rect = ParseTextProtoOrDie<mediapipe::Rect>(
R"(
width: 1 height: 1 x_center: 40 y_center: 40 rotation: 0.5
)");
inputs.Tag(kRectTag).Value() = MakePacket<mediapipe::Rect>(rect);
RectSpec expectRect = {
.width = 1,
.height = 1,
.center_x = 40,
.center_y = 40,
.rotation = 0.5,
};
EXPECT_EQ(
ImageCroppingCalculator::GetCropSpecs(&cc, input_width, input_height),
expectRect);
} // TEST
} // namespace
} // namespace mediapipe

View File

@ -21,6 +21,7 @@ filegroup(
"dino.jpg", "dino.jpg",
"dino_quality_50.jpg", "dino_quality_50.jpg",
"dino_quality_80.jpg", "dino_quality_80.jpg",
"front_camera_pixel2.jpg",
],
visibility = ["//visibility:public"],
)

Binary file not shown (after: 6.3 MiB).

View File

@ -46,6 +46,7 @@ void AddTimedBoxProtoToRenderData(
line_annotation->mutable_color()->set_b(options.box_color().b());
line_annotation->set_thickness(options.thickness());
RenderAnnotation::Line* line = line_annotation->mutable_line();
line->set_normalized(true);
line->set_x_start(box_proto.quad().vertices(i * 2));
line->set_y_start(box_proto.quad().vertices(i * 2 + 1));
line->set_x_end(box_proto.quad().vertices(next_corner * 2));

View File

@ -88,7 +88,8 @@ class Tvl1OpticalFlowCalculator : public CalculatorBase {
// cv::DenseOpticalFlow is not thread-safe. Invoking multiple
// DenseOpticalFlow::calc() in parallel may lead to memory corruption or
// memory leak.
std::list<cv::Ptr<cv::DenseOpticalFlow>> tvl1_computers_
ABSL_GUARDED_BY(mutex_);
absl::Mutex mutex_;
};

View File

@ -120,7 +120,7 @@ and do the model inference with the baseline model.
MediaPipe for media processing to prepare video data sets for training a
TensorFlow model.
### AutoFlip - Automatic video cropping
[AutoFlip](./autoflip.md) shows how to use MediaPipe to build an automatic video
cropping pipeline that can convert an input video to arbitrary aspect ratios.
@ -142,6 +142,7 @@ GPU with live video from a webcam.
* [Desktop GPU](./face_detection_desktop.md)
* [Desktop CPU](./face_detection_desktop.md)
### Hand Tracking on Desktop with Webcam
[Hand Tracking on Desktop with Webcam](./hand_tracking_desktop.md) shows how to
@ -184,3 +185,18 @@ EdgeTPU on
[Face Detection on Coral with Webcam](./face_detection_coral_devboard.md) shows
how to use quantized face detection TFlite model accelerated with EdgeTPU on
[Google Coral Dev Board](https://coral.withgoogle.com/products/dev-board).
## Web Browser
Below are samples that can directly be run in your web browser.
See more details in [MediaPipe on the Web](./web.md) and
[Google Developer blog post](https://mediapipe.page.link/webdevblog)
### [Face Detection In Browser](https://viz.mediapipe.dev/demo/face_detection)
### [Hand Detection In Browser](https://viz.mediapipe.dev/demo/hand_detection)
### [Hand Tracking In Browser](https://viz.mediapipe.dev/demo/hand_tracking)
### [Hair Segmentation In Browser](https://viz.mediapipe.dev/demo/hair_segmentation)

View File

@ -18,7 +18,9 @@ Note: Desktop GPU works only on Linux. Mesa drivers need to be installed. Please
see
[step 4 of "Installing on Debian and Ubuntu" in the installation guide](./install.md).
Note: If MediaPipe depends on OpenCV 2, please see the
[known issues with OpenCV 2](./object_detection_desktop.md#known-issues-with-opencv-2)
section.
### TensorFlow Lite Face Detection Demo with Webcam (CPU)
@ -66,7 +68,8 @@ $ GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/face_detection/face_de
--calculator_graph_config_file=mediapipe/graphs/face_detection/face_detection_mobile_gpu.pbtxt
```
Issues running? Please first
[check that your GPU is supported](./gpu.md#desktop-gpu-linux)
#### Graph

View File

@ -15,7 +15,9 @@ Note: Desktop GPU works only on Linux. Mesa drivers need to be installed. Please
see
[step 4 of "Installing on Debian and Ubuntu" in the installation guide](./install.md).
Note: If MediaPipe depends on OpenCV 2, please see the
[known issues with OpenCV 2](./object_detection_desktop.md#known-issues-with-opencv-2)
section.
### TensorFlow Lite Hair Segmentation Demo with Webcam (GPU)
@ -40,7 +42,8 @@ $ GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hair_segmentation/hair
--calculator_graph_config_file=mediapipe/graphs/hair_segmentation/hair_segmentation_mobile_gpu.pbtxt
```
Issues running? Please first
[check that your GPU is supported](./gpu.md#desktop-gpu-linux)
#### Graph

View File

@ -17,7 +17,9 @@ Note: Desktop GPU works only on Linux. Mesa drivers need to be installed. Please
see
[step 4 of "Installing on Debian and Ubuntu" in the installation guide](./install.md).
Note: If MediaPipe depends on OpenCV 2, please see the
[known issues with OpenCV 2](./object_detection_desktop.md#known-issues-with-opencv-2)
section.
### TensorFlow Lite Hand Tracking Demo with Webcam (CPU) ### TensorFlow Lite Hand Tracking Demo with Webcam (CPU)
@@ -61,7 +63,8 @@ $ GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tra
   --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_mobile.pbtxt
 ```
-Issues running? Please first [check that your GPU is supported](gpu.md#desktop-gpu-linux).
+Issues running? Please first
+[check that your GPU is supported](./gpu.md#desktop-gpu-linux)
 #### Graph

(Four binary image files added in this commit; previews not shown. Sizes: 923 B, 4.6 KiB, 58 KiB, and 4.6 KiB.)

@@ -19,7 +19,8 @@ see
 [step 4 of "Installing on Debian and Ubuntu" in the installation guide](./install.md).
 Note: If MediaPipe depends on OpenCV 2, please see the
-[known issues with OpenCV 2](#known-issues-with-opencv-2) section.
+[known issues with OpenCV 2](./object_detection_desktop.md#known-issues-with-opencv-2)
+section.
 ### TensorFlow Lite Multi-Hand Tracking Demo with Webcam (CPU)
@@ -61,7 +62,8 @@ $ GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/multi_hand_tracking/mu
   --calculator_graph_config_file=mediapipe/graphs/hand_tracking/multi_hand_tracking_mobile.pbtxt
 ```
-Issues running? Please first [check that your GPU is supported](gpu.md#desktop-gpu-linux).
+Issues running? Please first
+[check that your GPU is supported](./gpu.md#desktop-gpu-linux)
 #### Graph


@@ -7,6 +7,8 @@ in the browser client-side. The official API is under construction, but the core
 technology has been proven effective, and we can already show interactive
 cross-platform demos using your live webcam.
+[For more details, read this Google Developer blog post](https://mediapipe.page.link/webdevblog)
 ![image](images/web_effect.gif) ![image](images/web_segmentation.gif)
 ### Hand Tracking (with and without SIMD support)
@@ -21,6 +23,3 @@ support. Below are two different versions of the
 1. WebAssembly MVP [demo](https://mediapipe.page.link/cds-ht) running around 5-8 frames per second on Desktop Chrome
 2. WebAssembly SIMD [demo](https://mediapipe.page.link/cds-ht-simd) running around 15-18 frames per second on *Canary* Chrome for Desktop, which must additionally be launched with the option `--js-flags="--experimental-wasm-simd"`
-NOTE: This page is a work-in-progress. More to come soon!


@@ -40,7 +40,7 @@ cc_library(
 # Demos
 cc_binary(
-    name = "object_detection_cpu",
+    name = "object_detection_tpu",
     deps = [
         "//mediapipe/examples/coral:demo_run_graph_main",
         "//mediapipe/graphs/object_detection:desktop_tflite_calculators",
@@ -48,7+48,7 @@ cc_binary(
 )
 cc_binary(
-    name = "face_detection_cpu",
+    name = "face_detection_tpu",
     deps = [
         "//mediapipe/examples/coral:demo_run_graph_main",
         "//mediapipe/graphs/face_detection:desktop_tflite_calculators",


@@ -19,13 +19,13 @@ Docker container for building MediaPipe applications that run on Edge TPU.
 * (on coral device) prepare MediaPipe
     cd ~
-    sudo apt-get install git
+    sudo apt-get install -y git
     git clone https://github.com/google/mediapipe.git
     mkdir mediapipe/bazel-bin
 * (on coral device) install opencv 3.2
-    sudo apt-get update && apt-get install -y libopencv-dev
+    sudo apt-get update && sudo apt-get install -y libopencv-dev
 * (on coral device) find all opencv libs
@@ -78,7 +78,7 @@ Docker container for building MediaPipe applications that run on Edge TPU.
     return NULL;
 * Edit /edgetpu/libedgetpu/BUILD
   to add this build target
@@ -90,9 +90,9 @@ Docker container for building MediaPipe applications that run on Edge TPU.
     visibility = ["//visibility:public"],
 )
-* Edit *tflite_inference_calculator.cc* BUILD rules:
+* Edit /mediapipe/mediapipe/calculators/tflite/BUILD to change rules for *tflite_inference_calculator.cc*
-    sed -i 's/\":tflite_inference_calculator_cc_proto\",/\":tflite_inference_calculator_cc_proto\",\n\t\"@edgetpu\/\/:header\",\n\t\"@libedgetpu\/\/:lib\",/g' mediapipe/calculators/tflite/BUILD
+    sed -i 's/\":tflite_inference_calculator_cc_proto\",/\":tflite_inference_calculator_cc_proto\",\n\t\"@edgetpu\/\/:header\",\n\t\"@libedgetpu\/\/:lib\",/g' /mediapipe/mediapipe/calculators/tflite/BUILD
 The above command should add
@@ -105,37 +105,37 @@ Docker container for building MediaPipe applications that run on Edge TPU.
 * Object detection demo
-    bazel build -c opt --crosstool_top=@crosstool//:toolchains --compiler=gcc --cpu=aarch64 --define MEDIAPIPE_DISABLE_GPU=1 --copt -DMEDIAPIPE_EDGE_TPU --copt=-flax-vector-conversions mediapipe/examples/coral:object_detection_cpu
+    bazel build -c opt --crosstool_top=@crosstool//:toolchains --compiler=gcc --cpu=aarch64 --define MEDIAPIPE_DISABLE_GPU=1 --copt -DMEDIAPIPE_EDGE_TPU --copt=-flax-vector-conversions mediapipe/examples/coral:object_detection_tpu
-  Copy object_detection_cpu binary to the MediaPipe checkout on the coral device
+  Copy object_detection_tpu binary to the MediaPipe checkout on the coral device
     # outside docker env, open new terminal on host machine #
     docker ps
-    docker cp <container-id>:/mediapipe/bazel-bin/mediapipe/examples/coral/object_detection_cpu /tmp/.
+    docker cp <container-id>:/mediapipe/bazel-bin/mediapipe/examples/coral/object_detection_tpu /tmp/.
-    mdt push /tmp/object_detection_cpu /home/mendel/mediapipe/bazel-bin/.
+    mdt push /tmp/object_detection_tpu /home/mendel/mediapipe/bazel-bin/.
 * Face detection demo
-    bazel build -c opt --crosstool_top=@crosstool//:toolchains --compiler=gcc --cpu=aarch64 --define MEDIAPIPE_DISABLE_GPU=1 --copt -DMEDIAPIPE_EDGE_TPU --copt=-flax-vector-conversions mediapipe/examples/coral:face_detection_cpu
+    bazel build -c opt --crosstool_top=@crosstool//:toolchains --compiler=gcc --cpu=aarch64 --define MEDIAPIPE_DISABLE_GPU=1 --copt -DMEDIAPIPE_EDGE_TPU --copt=-flax-vector-conversions mediapipe/examples/coral:face_detection_tpu
-  Copy face_detection_cpu binary to the MediaPipe checkout on the coral device
+  Copy face_detection_tpu binary to the MediaPipe checkout on the coral device
     # outside docker env, open new terminal on host machine #
     docker ps
-    docker cp <container-id>:/mediapipe/bazel-bin/mediapipe/examples/coral/face_detection_cpu /tmp/.
+    docker cp <container-id>:/mediapipe/bazel-bin/mediapipe/examples/coral/face_detection_tpu /tmp/.
-    mdt push /tmp/face_detection_cpu /home/mendel/mediapipe/bazel-bin/.
+    mdt push /tmp/face_detection_tpu /home/mendel/mediapipe/bazel-bin/.
 ## On the coral device (with display)
     # Object detection
     cd ~/mediapipe
-    chmod +x bazel-bin/object_detection_cpu
+    chmod +x bazel-bin/object_detection_tpu
     export GLOG_logtostderr=1
-    bazel-bin/object_detection_cpu --calculator_graph_config_file=mediapipe/examples/coral/graphs/object_detection_desktop_live.pbtxt
+    bazel-bin/object_detection_tpu --calculator_graph_config_file=mediapipe/examples/coral/graphs/object_detection_desktop_live.pbtxt
     # Face detection
     cd ~/mediapipe
-    chmod +x bazel-bin/face_detection_cpu
+    chmod +x bazel-bin/face_detection_tpu
     export GLOG_logtostderr=1
-    bazel-bin/face_detection_cpu --calculator_graph_config_file=mediapipe/examples/coral/graphs/face_detection_desktop_live.pbtxt
+    bazel-bin/face_detection_tpu --calculator_graph_config_file=mediapipe/examples/coral/graphs/face_detection_desktop_live.pbtxt


@@ -1,6 +1,6 @@
-# MediaPipe graph that performs face detection with TensorFlow Lite on CPU.
+# MediaPipe graph that performs face detection with TensorFlow Lite on TPU.
 # Used in the examples in
-# mediapipe/examples/coral:face_detection_cpu.
+# mediapipe/examples/coral:face_detection_tpu.
 # Images on GPU coming into and out of the graph.
 input_stream: "input_video"
@@ -36,7 +36,7 @@ node {
 node: {
   calculator: "ImageTransformationCalculator"
   input_stream: "IMAGE:throttled_input_video"
-  output_stream: "IMAGE:transformed_input_video_cpu"
+  output_stream: "IMAGE:transformed_input_video"
   output_stream: "LETTERBOX_PADDING:letterbox_padding"
   options: {
     [mediapipe.ImageTransformationCalculatorOptions.ext] {
@@ -51,7 +51,7 @@ node: {
 # TfLiteTensor.
 node {
   calculator: "TfLiteConverterCalculator"
-  input_stream: "IMAGE:transformed_input_video_cpu"
+  input_stream: "IMAGE:transformed_input_video"
   output_stream: "TENSORS:image_tensor"
   options: {
     [mediapipe.TfLiteConverterCalculatorOptions.ext] {
@@ -60,7 +60,7 @@ node {
   }
 }
-# Runs a TensorFlow Lite model on CPU that takes an image tensor and outputs a
+# Runs a TensorFlow Lite model on TPU that takes an image tensor and outputs a
 # vector of tensors representing, for instance, detection boxes/keypoints and
 # scores.
 node {


@@ -1,8 +1,8 @@
-# MediaPipe graph that performs object detection with TensorFlow Lite on CPU.
+# MediaPipe graph that performs object detection with TensorFlow Lite on TPU.
 # Used in the examples in
-# mediapipie/examples/coral:object_detection_cpu.
+# mediapipie/examples/coral:object_detection_tpu.
-# Images on CPU coming into and out of the graph.
+# Images on TPU coming into and out of the graph.
 input_stream: "input_video"
 output_stream: "output_video"
@@ -30,7 +30,7 @@ node {
   output_stream: "throttled_input_video"
 }
-# Transforms the input image on CPU to a 320x320 image. To scale the image, by
+# Transforms the input image on CPU to a 300x300 image. To scale the image, by
 # default it uses the STRETCH scale mode that maps the entire input image to the
 # entire transformed image. As a result, image aspect ratio may be changed and
 # objects in the image may be deformed (stretched or squeezed), but the object
@@ -60,7 +60,7 @@ node {
   }
 }
-# Runs a TensorFlow Lite model on CPU that takes an image tensor and outputs a
+# Runs a TensorFlow Lite model on TPU that takes an image tensor and outputs a
 # vector of tensors representing, for instance, detection boxes/keypoints and
 # scores.
 node {
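Both graph files above are driven by the shared `demo_run_graph_main` target. For orientation, here is a trimmed, illustrative sketch of how such a `.pbtxt` graph is typically run from C++; the stream names match the graphs above, but error handling and the OpenCV capture loop of the real demo are omitted, and the helper below is a sketch rather than the exact demo code.

```c++
// Illustrative driver for a CalculatorGraph described by a .pbtxt file.
#include <fstream>
#include <sstream>
#include <string>

#include "mediapipe/framework/calculator_framework.h"
#include "mediapipe/framework/formats/image_frame.h"
#include "mediapipe/framework/port/parse_text_proto.h"
#include "mediapipe/framework/port/status.h"

::mediapipe::Status RunGraph(const std::string& config_path) {
  // Read the text proto and parse it into a CalculatorGraphConfig.
  std::ifstream file(config_path);
  std::stringstream contents;
  contents << file.rdbuf();
  auto config =
      mediapipe::ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig>(
          contents.str());

  mediapipe::CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));

  // Poll the annotated frames instead of rendering them.
  ASSIGN_OR_RETURN(mediapipe::OutputStreamPoller poller,
                   graph.AddOutputStreamPoller("output_video"));
  MP_RETURN_IF_ERROR(graph.StartRun({}));

  // In the real demo, frames come from an OpenCV capture loop; a single
  // empty ImageFrame stands in here.
  MP_RETURN_IF_ERROR(graph.AddPacketToInputStream(
      "input_video", mediapipe::MakePacket<mediapipe::ImageFrame>().At(
                         mediapipe::Timestamp(0))));
  MP_RETURN_IF_ERROR(graph.CloseInputStream("input_video"));

  mediapipe::Packet packet;
  while (poller.Next(&packet)) {
    // packet.Get<mediapipe::ImageFrame>() is the graph's output frame.
  }
  return graph.WaitUntilDone();
}
```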


@@ -25,7 +25,7 @@ import sys
 from absl import app
 import tensorflow.compat.v1 as tf
-from tensorflow.python.tools import freeze_graph
+from tensorflow.compat.v1.python.tools import freeze_graph
 BASE_DIR = '/tmp/mediapipe/'


@ -536,6 +536,7 @@ cc_library(
"//mediapipe/framework/profiler:graph_profiler", "//mediapipe/framework/profiler:graph_profiler",
"//mediapipe/framework/stream_handler:default_input_stream_handler", "//mediapipe/framework/stream_handler:default_input_stream_handler",
"//mediapipe/framework/stream_handler:in_order_output_stream_handler", "//mediapipe/framework/stream_handler:in_order_output_stream_handler",
"//mediapipe/framework/tool:name_util",
"//mediapipe/framework/tool:status_util", "//mediapipe/framework/tool:status_util",
"//mediapipe/framework/tool:tag_map", "//mediapipe/framework/tool:tag_map",
"//mediapipe/framework/tool:validate_name", "//mediapipe/framework/tool:validate_name",
@ -1268,6 +1269,7 @@ cc_library(
"//mediapipe/framework/port:source_location", "//mediapipe/framework/port:source_location",
"//mediapipe/framework/port:status", "//mediapipe/framework/port:status",
"//mediapipe/framework/port:topologicalsorter", "//mediapipe/framework/port:topologicalsorter",
"//mediapipe/framework/tool:name_util",
"//mediapipe/framework/tool:status_util", "//mediapipe/framework/tool:status_util",
"//mediapipe/framework/tool:subgraph_expansion", "//mediapipe/framework/tool:subgraph_expansion",
"//mediapipe/framework/tool:validate", "//mediapipe/framework/tool:validate",


@@ -50,7 +50,7 @@ class CalculatorContextManager {
       setup_shards_callback);
   // Invoked by CalculatorNode::CleanupAfterRun().
-  void CleanupAfterRun() LOCKS_EXCLUDED(contexts_mutex_);
+  void CleanupAfterRun() ABSL_LOCKS_EXCLUDED(contexts_mutex_);
   // Returns true if the default calculator context has been initialized.
   bool HasDefaultCalculatorContext() const {
@@ -66,7 +66,7 @@ class CalculatorContextManager {
   // The input timestamp of the calculator context is returned in
   // *context_input_timestamp.
   CalculatorContext* GetFrontCalculatorContext(
-      Timestamp* context_input_timestamp) LOCKS_EXCLUDED(contexts_mutex_);
+      Timestamp* context_input_timestamp) ABSL_LOCKS_EXCLUDED(contexts_mutex_);
   // For sequential execution, returns a pointer to the default calculator
   // context. For parallel execution, creates or reuses a calculator context,
@@ -75,16 +75,16 @@ class CalculatorContextManager {
   // The ownership of the calculator context object isn't tranferred to the
   // caller.
   CalculatorContext* PrepareCalculatorContext(Timestamp input_timestamp)
-      LOCKS_EXCLUDED(contexts_mutex_);
+      ABSL_LOCKS_EXCLUDED(contexts_mutex_);
   // Removes the context with the smallest input timestamp from active_contexts_
   // and moves the calculator context to idle_contexts_. The caller must
   // guarantee that the output shards in the calculator context have been
   // propagated before calling this function.
-  void RecycleCalculatorContext() LOCKS_EXCLUDED(contexts_mutex_);
+  void RecycleCalculatorContext() ABSL_LOCKS_EXCLUDED(contexts_mutex_);
   // Returns true if active_contexts_ is non-empty.
-  bool HasActiveContexts() LOCKS_EXCLUDED(contexts_mutex_);
+  bool HasActiveContexts() ABSL_LOCKS_EXCLUDED(contexts_mutex_);
   int NumberOfContextTimestamps(
       const CalculatorContext& calculator_context) const {
@@ -135,10 +135,10 @@ class CalculatorContextManager {
   absl::Mutex contexts_mutex_;
   // A map from input timestamps to calculator contexts.
   std::map<Timestamp, std::unique_ptr<CalculatorContext>> active_contexts_
-      GUARDED_BY(contexts_mutex_);
+      ABSL_GUARDED_BY(contexts_mutex_);
   // Idle calculator contexts that are ready for reuse.
   std::deque<std::unique_ptr<CalculatorContext>> idle_contexts_
-      GUARDED_BY(contexts_mutex_);
+      ABSL_GUARDED_BY(contexts_mutex_);
 };
 } // namespace mediapipe
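This header, and most of the framework headers below, change in one mechanical way: the unprefixed thread-safety macros (`GUARDED_BY`, `LOCKS_EXCLUDED`, `EXCLUSIVE_LOCKS_REQUIRED`) become their `ABSL_`-prefixed equivalents from `absl/base/thread_annotations.h`, which avoids clashing with other libraries that define the bare names. A small self-contained example of the pattern these annotations express (hypothetical class, not MediaPipe code):

```c++
// Illustrative use of the ABSL_-prefixed thread-safety annotations this
// commit migrates to; Clang's -Wthread-safety checks them at compile time.
#include <deque>

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class PacketQueue {
 public:
  void Push(int value) ABSL_LOCKS_EXCLUDED(mu_) {
    absl::MutexLock lock(&mu_);
    queue_.push_back(value);
  }

  bool Pop(int* value) ABSL_LOCKS_EXCLUDED(mu_) {
    absl::MutexLock lock(&mu_);
    if (queue_.empty()) return false;
    *value = queue_.front();
    queue_.pop_front();
    return true;
  }

 private:
  absl::Mutex mu_;
  // Any access to queue_ without holding mu_ is flagged by the analysis.
  std::deque<int> queue_ ABSL_GUARDED_BY(mu_);
};
```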


@@ -127,7 +127,13 @@ CalculatorGraph::CalculatorGraph(const CalculatorGraphConfig& config)
 // Defining the destructor here lets us use incomplete types in the header;
 // they only need to be fully visible here, where their destructor is
 // instantiated.
-CalculatorGraph::~CalculatorGraph() {}
+CalculatorGraph::~CalculatorGraph() {
+  // Stop periodic profiler output to ublock Executor destructors.
+  ::mediapipe::Status status = profiler()->Stop();
+  if (!status.ok()) {
+    LOG(ERROR) << "During graph destruction: " << status;
+  }
+}
 ::mediapipe::Status CalculatorGraph::InitializePacketGeneratorGraph(
     const std::map<std::string, Packet>& side_packets) {


@ -298,7 +298,7 @@ class CalculatorGraph {
// Callback when an error is encountered. // Callback when an error is encountered.
// Adds the error to the vector of errors. // Adds the error to the vector of errors.
void RecordError(const ::mediapipe::Status& error) void RecordError(const ::mediapipe::Status& error)
LOCKS_EXCLUDED(error_mutex_); ABSL_LOCKS_EXCLUDED(error_mutex_);
// Returns the maximum input stream queue size. // Returns the maximum input stream queue size.
int GetMaxInputStreamQueueSize(); int GetMaxInputStreamQueueSize();
@ -339,13 +339,14 @@ class CalculatorGraph {
// Returns true if this node or graph input stream is connected to // Returns true if this node or graph input stream is connected to
// any input stream whose queue has hit maximum capacity. // any input stream whose queue has hit maximum capacity.
bool IsNodeThrottled(int node_id) LOCKS_EXCLUDED(full_input_streams_mutex_); bool IsNodeThrottled(int node_id)
ABSL_LOCKS_EXCLUDED(full_input_streams_mutex_);
// If any active source node or graph input stream is throttled and not yet // If any active source node or graph input stream is throttled and not yet
// closed, increases the max_queue_size for each full input stream in the // closed, increases the max_queue_size for each full input stream in the
// graph. // graph.
// Returns true if at least one max_queue_size has been grown. // Returns true if at least one max_queue_size has been grown.
bool UnthrottleSources() LOCKS_EXCLUDED(full_input_streams_mutex_); bool UnthrottleSources() ABSL_LOCKS_EXCLUDED(full_input_streams_mutex_);
// Returns the scheduler's runtime measures for overhead measurement. // Returns the scheduler's runtime measures for overhead measurement.
// Only meant for test purposes. // Only meant for test purposes.
@ -498,7 +499,7 @@ class CalculatorGraph {
// handler fails, it appends its error to errors_, and CleanupAfterRun sets // handler fails, it appends its error to errors_, and CleanupAfterRun sets
// |*status| to the new combined errors on return. // |*status| to the new combined errors on return.
void CleanupAfterRun(::mediapipe::Status* status) void CleanupAfterRun(::mediapipe::Status* status)
LOCKS_EXCLUDED(error_mutex_); ABSL_LOCKS_EXCLUDED(error_mutex_);
// Combines errors into a status. Returns true if the vector of errors is // Combines errors into a status. Returns true if the vector of errors is
// non-empty. // non-empty.
@ -571,7 +572,7 @@ class CalculatorGraph {
// Mode for adding packets to a graph input stream. Set to block until all // Mode for adding packets to a graph input stream. Set to block until all
// affected input streams are not full by default. // affected input streams are not full by default.
GraphInputStreamAddMode graph_input_stream_add_mode_ GraphInputStreamAddMode graph_input_stream_add_mode_
GUARDED_BY(full_input_streams_mutex_); ABSL_GUARDED_BY(full_input_streams_mutex_);
// For a source node or graph input stream (specified using id), // For a source node or graph input stream (specified using id),
// this stores the set of dependent input streams that have hit their // this stores the set of dependent input streams that have hit their
@ -580,7 +581,7 @@ class CalculatorGraph {
// is added to a graph input stream only if this set is empty. // is added to a graph input stream only if this set is empty.
// Note that this vector contains an unused entry for each non-source node. // Note that this vector contains an unused entry for each non-source node.
std::vector<absl::flat_hash_set<InputStreamManager*>> full_input_streams_ std::vector<absl::flat_hash_set<InputStreamManager*>> full_input_streams_
GUARDED_BY(full_input_streams_mutex_); ABSL_GUARDED_BY(full_input_streams_mutex_);
// Maps stream names to graph input stream objects. // Maps stream names to graph input stream objects.
absl::flat_hash_map<std::string, std::unique_ptr<GraphInputStream>> absl::flat_hash_map<std::string, std::unique_ptr<GraphInputStream>>
@ -606,7 +607,7 @@ class CalculatorGraph {
// Vector of errors encountered while running graph. Always use RecordError() // Vector of errors encountered while running graph. Always use RecordError()
// to add an error to this vector. // to add an error to this vector.
std::vector<::mediapipe::Status> errors_ GUARDED_BY(error_mutex_); std::vector<::mediapipe::Status> errors_ ABSL_GUARDED_BY(error_mutex_);
// True if the default executor uses the application thread. // True if the default executor uses the application thread.
bool use_application_thread_ = false; bool use_application_thread_ = false;
@ -614,7 +615,7 @@ class CalculatorGraph {
// Condition variable that waits until all input streams that depend on a // Condition variable that waits until all input streams that depend on a
// graph input stream are below the maximum queue size. // graph input stream are below the maximum queue size.
absl::CondVar wait_to_add_packet_cond_var_ absl::CondVar wait_to_add_packet_cond_var_
GUARDED_BY(full_input_streams_mutex_); ABSL_GUARDED_BY(full_input_streams_mutex_);
// Mutex for the vector of errors. // Mutex for the vector of errors.
absl::Mutex error_mutex_; absl::Mutex error_mutex_;


@@ -44,7 +44,7 @@ class CalculatorGraphEventLoopTest : public testing::Test {
   }
  protected:
-  std::vector<Packet> output_packets_ GUARDED_BY(output_packets_mutex_);
+  std::vector<Packet> output_packets_ ABSL_GUARDED_BY(output_packets_mutex_);
   absl::Mutex output_packets_mutex_;
 };


@ -42,6 +42,7 @@
#include "mediapipe/framework/port/source_location.h" #include "mediapipe/framework/port/source_location.h"
#include "mediapipe/framework/port/status_builder.h" #include "mediapipe/framework/port/status_builder.h"
#include "mediapipe/framework/timestamp.h" #include "mediapipe/framework/timestamp.h"
#include "mediapipe/framework/tool/name_util.h"
#include "mediapipe/framework/tool/status_util.h" #include "mediapipe/framework/tool/status_util.h"
#include "mediapipe/framework/tool/tag_map.h" #include "mediapipe/framework/tool/tag_map.h"
#include "mediapipe/framework/tool/validate_name.h" #include "mediapipe/framework/tool/validate_name.h"
@ -85,7 +86,7 @@ Timestamp CalculatorNode::SourceProcessOrder(
const CalculatorGraphConfig::Node& node_config = const CalculatorGraphConfig::Node& node_config =
validated_graph_->Config().node(node_id_); validated_graph_->Config().node(node_id_);
name_ = CanonicalNodeName(validated_graph_->Config(), node_id_); name_ = tool::CanonicalNodeName(validated_graph_->Config(), node_id_);
max_in_flight_ = node_config.max_in_flight(); max_in_flight_ = node_config.max_in_flight();
max_in_flight_ = max_in_flight_ ? max_in_flight_ : 1; max_in_flight_ = max_in_flight_ ? max_in_flight_ : 1;


@ -128,25 +128,25 @@ class CalculatorNode {
std::function<void()> source_node_opened_callback, std::function<void()> source_node_opened_callback,
std::function<void(CalculatorContext*)> schedule_callback, std::function<void(CalculatorContext*)> schedule_callback,
std::function<void(::mediapipe::Status)> error_callback, std::function<void(::mediapipe::Status)> error_callback,
CounterFactory* counter_factory) LOCKS_EXCLUDED(status_mutex_); CounterFactory* counter_factory) ABSL_LOCKS_EXCLUDED(status_mutex_);
// Opens the node. // Opens the node.
::mediapipe::Status OpenNode() LOCKS_EXCLUDED(status_mutex_); ::mediapipe::Status OpenNode() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Called when a source node's layer becomes active. // Called when a source node's layer becomes active.
void ActivateNode() LOCKS_EXCLUDED(status_mutex_); void ActivateNode() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Cleans up the node after the CalculatorGraph has been run. Deletes // Cleans up the node after the CalculatorGraph has been run. Deletes
// the Calculator managed by this node. graph_status is the status of // the Calculator managed by this node. graph_status is the status of
// the graph run. // the graph run.
void CleanupAfterRun(const ::mediapipe::Status& graph_status) void CleanupAfterRun(const ::mediapipe::Status& graph_status)
LOCKS_EXCLUDED(status_mutex_); ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true iff PrepareForRun() has been called (and types verified). // Returns true iff PrepareForRun() has been called (and types verified).
bool Prepared() const LOCKS_EXCLUDED(status_mutex_); bool Prepared() const ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true iff Open() has been called on the calculator. // Returns true iff Open() has been called on the calculator.
bool Opened() const LOCKS_EXCLUDED(status_mutex_); bool Opened() const ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true iff a source calculator's layer is active. // Returns true iff a source calculator's layer is active.
bool Active() const LOCKS_EXCLUDED(status_mutex_); bool Active() const ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true iff Close() has been called on the calculator. // Returns true iff Close() has been called on the calculator.
bool Closed() const LOCKS_EXCLUDED(status_mutex_); bool Closed() const ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true iff this is a source node. // Returns true iff this is a source node.
// //
@ -166,32 +166,32 @@ class CalculatorNode {
// then call EndScheduling when finished running it. // then call EndScheduling when finished running it.
// If false is returned, the scheduler must not execute the node. // If false is returned, the scheduler must not execute the node.
// This method is thread-safe. // This method is thread-safe.
bool TryToBeginScheduling() LOCKS_EXCLUDED(status_mutex_); bool TryToBeginScheduling() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Subtracts one from current_in_flight_ to allow a new invocation to be // Subtracts one from current_in_flight_ to allow a new invocation to be
// scheduled. Then, it checks scheduling_state_ and invokes SchedulingLoop() // scheduled. Then, it checks scheduling_state_ and invokes SchedulingLoop()
// if necessary. This method is thread-safe. // if necessary. This method is thread-safe.
// TODO: this could be done implicitly by the call to ProcessNode // TODO: this could be done implicitly by the call to ProcessNode
// or CloseNode. // or CloseNode.
void EndScheduling() LOCKS_EXCLUDED(status_mutex_); void EndScheduling() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true if OpenNode() can be scheduled. // Returns true if OpenNode() can be scheduled.
bool ReadyForOpen() const LOCKS_EXCLUDED(status_mutex_); bool ReadyForOpen() const ABSL_LOCKS_EXCLUDED(status_mutex_);
// Called by the InputStreamHandler when all the input stream headers // Called by the InputStreamHandler when all the input stream headers
// become available. // become available.
void InputStreamHeadersReady() LOCKS_EXCLUDED(status_mutex_); void InputStreamHeadersReady() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Called by the InputSidePacketHandler when all the input side packets // Called by the InputSidePacketHandler when all the input side packets
// become available. // become available.
void InputSidePacketsReady() LOCKS_EXCLUDED(status_mutex_); void InputSidePacketsReady() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Checks scheduling_state_, and then invokes SchedulingLoop() if necessary. // Checks scheduling_state_, and then invokes SchedulingLoop() if necessary.
// This method is thread-safe. // This method is thread-safe.
void CheckIfBecameReady() LOCKS_EXCLUDED(status_mutex_); void CheckIfBecameReady() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Called by SchedulerQueue when a node is opened. // Called by SchedulerQueue when a node is opened.
void NodeOpened() LOCKS_EXCLUDED(status_mutex_); void NodeOpened() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns whether this is a GPU calculator node. // Returns whether this is a GPU calculator node.
bool UsesGpu() const { return uses_gpu_; } bool UsesGpu() const { return uses_gpu_; }
@ -220,7 +220,7 @@ class CalculatorNode {
// indicates whether the graph run has ended. // indicates whether the graph run has ended.
::mediapipe::Status CloseNode(const ::mediapipe::Status& graph_status, ::mediapipe::Status CloseNode(const ::mediapipe::Status& graph_status,
bool graph_run_ended) bool graph_run_ended)
LOCKS_EXCLUDED(status_mutex_); ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns a pointer to the default calculator context that is used for // Returns a pointer to the default calculator context that is used for
// sequential execution. A source node should always reuse its default // sequential execution. A source node should always reuse its default
@ -274,9 +274,9 @@ class CalculatorNode {
void SchedulingLoop(); void SchedulingLoop();
// Closes the input and output streams. // Closes the input and output streams.
void CloseInputStreams() LOCKS_EXCLUDED(status_mutex_); void CloseInputStreams() ABSL_LOCKS_EXCLUDED(status_mutex_);
void CloseOutputStreams(OutputStreamShardSet* outputs) void CloseOutputStreams(OutputStreamShardSet* outputs)
LOCKS_EXCLUDED(status_mutex_); ABSL_LOCKS_EXCLUDED(status_mutex_);
// Get a std::string describing the input streams. // Get a std::string describing the input streams.
std::string DebugInputStreamNames() const; std::string DebugInputStreamNames() const;
@ -304,7 +304,7 @@ class CalculatorNode {
kStateActive = 3, kStateActive = 3,
kStateClosed = 4 kStateClosed = 4
}; };
NodeStatus status_ GUARDED_BY(status_mutex_){kStateUninitialized}; NodeStatus status_ ABSL_GUARDED_BY(status_mutex_){kStateUninitialized};
// The max number of invocations that can be scheduled in parallel. // The max number of invocations that can be scheduled in parallel.
int max_in_flight_ = 1; int max_in_flight_ = 1;
@ -312,7 +312,7 @@ class CalculatorNode {
// scheduling. // scheduling.
// //
// The number of invocations that are scheduled but not finished. // The number of invocations that are scheduled but not finished.
int current_in_flight_ GUARDED_BY(status_mutex_) = 0; int current_in_flight_ ABSL_GUARDED_BY(status_mutex_) = 0;
// SchedulingState incidates the current state of the node scheduling process. // SchedulingState incidates the current state of the node scheduling process.
// There are four possible transitions: // There are four possible transitions:
// (a) From kIdle to kScheduling. // (a) From kIdle to kScheduling.
@ -333,14 +333,15 @@ class CalculatorNode {
kScheduling = 1, // kScheduling = 1, //
kSchedulingPending = 2 kSchedulingPending = 2
}; };
SchedulingState scheduling_state_ GUARDED_BY(status_mutex_) = kIdle; SchedulingState scheduling_state_ ABSL_GUARDED_BY(status_mutex_) = kIdle;
std::function<void()> ready_for_open_callback_; std::function<void()> ready_for_open_callback_;
std::function<void()> source_node_opened_callback_; std::function<void()> source_node_opened_callback_;
bool input_stream_headers_ready_called_ GUARDED_BY(status_mutex_) = false; bool input_stream_headers_ready_called_ ABSL_GUARDED_BY(status_mutex_) =
bool input_side_packets_ready_called_ GUARDED_BY(status_mutex_) = false; false;
bool input_stream_headers_ready_ GUARDED_BY(status_mutex_) = false; bool input_side_packets_ready_called_ ABSL_GUARDED_BY(status_mutex_) = false;
bool input_side_packets_ready_ GUARDED_BY(status_mutex_) = false; bool input_stream_headers_ready_ ABSL_GUARDED_BY(status_mutex_) = false;
bool input_side_packets_ready_ ABSL_GUARDED_BY(status_mutex_) = false;
// Owns and manages all CalculatorContext objects. // Owns and manages all CalculatorContext objects.
CalculatorContextManager calculator_context_manager_; CalculatorContextManager calculator_context_manager_;


@@ -85,7 +85,7 @@ class ParallelExecutionTest : public testing::Test {
   }
  protected:
-  std::vector<Packet> output_packets_ GUARDED_BY(output_packets_mutex_);
+  std::vector<Packet> output_packets_ ABSL_GUARDED_BY(output_packets_mutex_);
   absl::Mutex output_packets_mutex_;
 };


@ -29,35 +29,35 @@ class BasicCounter : public Counter {
public: public:
explicit BasicCounter(const std::string& name) : value_(0) {} explicit BasicCounter(const std::string& name) : value_(0) {}
void Increment() LOCKS_EXCLUDED(mu_) override { void Increment() ABSL_LOCKS_EXCLUDED(mu_) override {
absl::WriterMutexLock lock(&mu_); absl::WriterMutexLock lock(&mu_);
++value_; ++value_;
} }
void IncrementBy(int amount) LOCKS_EXCLUDED(mu_) override { void IncrementBy(int amount) ABSL_LOCKS_EXCLUDED(mu_) override {
absl::WriterMutexLock lock(&mu_); absl::WriterMutexLock lock(&mu_);
value_ += amount; value_ += amount;
} }
int64 Get() LOCKS_EXCLUDED(mu_) override { int64 Get() ABSL_LOCKS_EXCLUDED(mu_) override {
absl::ReaderMutexLock lock(&mu_); absl::ReaderMutexLock lock(&mu_);
return value_; return value_;
} }
private: private:
absl::Mutex mu_; absl::Mutex mu_;
int64 value_ GUARDED_BY(mu_); int64 value_ ABSL_GUARDED_BY(mu_);
}; };
} // namespace } // namespace
CounterSet::CounterSet() {} CounterSet::CounterSet() {}
CounterSet::~CounterSet() LOCKS_EXCLUDED(mu_) { PublishCounters(); } CounterSet::~CounterSet() ABSL_LOCKS_EXCLUDED(mu_) { PublishCounters(); }
void CounterSet::PublishCounters() LOCKS_EXCLUDED(mu_) {} void CounterSet::PublishCounters() ABSL_LOCKS_EXCLUDED(mu_) {}
void CounterSet::PrintCounters() LOCKS_EXCLUDED(mu_) { void CounterSet::PrintCounters() ABSL_LOCKS_EXCLUDED(mu_) {
absl::ReaderMutexLock lock(&mu_); absl::ReaderMutexLock lock(&mu_);
LOG_IF(INFO, !counters_.empty()) << "MediaPipe Counters:"; LOG_IF(INFO, !counters_.empty()) << "MediaPipe Counters:";
for (const auto& counter : counters_) { for (const auto& counter : counters_) {
@ -65,7 +65,7 @@ void CounterSet::PrintCounters() LOCKS_EXCLUDED(mu_) {
} }
} }
Counter* CounterSet::Get(const std::string& name) LOCKS_EXCLUDED(mu_) { Counter* CounterSet::Get(const std::string& name) ABSL_LOCKS_EXCLUDED(mu_) {
absl::ReaderMutexLock lock(&mu_); absl::ReaderMutexLock lock(&mu_);
if (!::mediapipe::ContainsKey(counters_, name)) { if (!::mediapipe::ContainsKey(counters_, name)) {
return nullptr; return nullptr;
@ -74,7 +74,7 @@ Counter* CounterSet::Get(const std::string& name) LOCKS_EXCLUDED(mu_) {
} }
std::map<std::string, int64> CounterSet::GetCountersValues() std::map<std::string, int64> CounterSet::GetCountersValues()
LOCKS_EXCLUDED(mu_) { ABSL_LOCKS_EXCLUDED(mu_) {
absl::ReaderMutexLock lock(&mu_); absl::ReaderMutexLock lock(&mu_);
std::map<std::string, int64> result; std::map<std::string, int64> result;
for (const auto& it : counters_) { for (const auto& it : counters_) {


@ -51,7 +51,7 @@ class CounterSet {
// to the existing pointer. // to the existing pointer.
template <typename CounterType, typename... Args> template <typename CounterType, typename... Args>
Counter* Emplace(const std::string& name, Args&&... args) Counter* Emplace(const std::string& name, Args&&... args)
LOCKS_EXCLUDED(mu_) { ABSL_LOCKS_EXCLUDED(mu_) {
absl::WriterMutexLock lock(&mu_); absl::WriterMutexLock lock(&mu_);
std::unique_ptr<Counter>* existing_counter = FindOrNull(counters_, name); std::unique_ptr<Counter>* existing_counter = FindOrNull(counters_, name);
if (existing_counter) { if (existing_counter) {
@ -66,11 +66,12 @@ class CounterSet {
Counter* Get(const std::string& name); Counter* Get(const std::string& name);
// Retrieves all counters names and current values from the internal map. // Retrieves all counters names and current values from the internal map.
std::map<std::string, int64> GetCountersValues() LOCKS_EXCLUDED(mu_); std::map<std::string, int64> GetCountersValues() ABSL_LOCKS_EXCLUDED(mu_);
private: private:
absl::Mutex mu_; absl::Mutex mu_;
std::map<std::string, std::unique_ptr<Counter>> counters_ GUARDED_BY(mu_); std::map<std::string, std::unique_ptr<Counter>> counters_
ABSL_GUARDED_BY(mu_);
}; };
// Generic counter factory // Generic counter factory


@ -175,6 +175,7 @@ cc_library(
name = "random", name = "random",
hdrs = ["random_base.h"], hdrs = ["random_base.h"],
visibility = ["//visibility:public"], visibility = ["//visibility:public"],
deps = ["//mediapipe/framework/port:integral_types"],
) )
cc_library( cc_library(


@ -33,7 +33,7 @@ struct MonotonicClock::State {
Clock* raw_clock; Clock* raw_clock;
absl::Mutex lock; absl::Mutex lock;
// The largest time ever returned by Now(). // The largest time ever returned by Now().
absl::Time max_time GUARDED_BY(lock); absl::Time max_time ABSL_GUARDED_BY(lock);
explicit State(Clock* clock) explicit State(Clock* clock)
: raw_clock(clock), max_time(absl::UnixEpoch()) {} : raw_clock(clock), max_time(absl::UnixEpoch()) {}
}; };
@ -171,13 +171,13 @@ class MonotonicClockImpl : public MonotonicClock {
// last_raw_time_ remembers the last value obtained from raw_clock_. // last_raw_time_ remembers the last value obtained from raw_clock_.
// It prevents spurious calls to ReportCorrection when time moves // It prevents spurious calls to ReportCorrection when time moves
// forward by a smaller amount than a prior backward jump. // forward by a smaller amount than a prior backward jump.
absl::Time last_raw_time_ GUARDED_BY(state_->lock); absl::Time last_raw_time_ ABSL_GUARDED_BY(state_->lock);
// Variables that keep track of time corrections made by this instance of // Variables that keep track of time corrections made by this instance of
// MonotonicClock. (All such metrics are instance-local for reasons // MonotonicClock. (All such metrics are instance-local for reasons
// described earlier.) // described earlier.)
int correction_count_ GUARDED_BY(state_->lock); int correction_count_ ABSL_GUARDED_BY(state_->lock);
absl::Duration max_correction_ GUARDED_BY(state_->lock); absl::Duration max_correction_ ABSL_GUARDED_BY(state_->lock);
}; };
// Factory methods. // Factory methods.


@@ -456,7 +456,7 @@ class ClockFrenzy {
   // Provide a lock to avoid race conditions in non-threadsafe ACMRandom.
   mutable absl::Mutex lock_;
-  std::unique_ptr<RandomEngine> random_ GUARDED_BY(lock_);
+  std::unique_ptr<RandomEngine> random_ ABSL_GUARDED_BY(lock_);
   // The stopping notification.
   bool running_;


@@ -15,6 +15,8 @@
 #ifndef MEDIAPIPE_DEPS_RANDOM_BASE_H_
 #define MEDIAPIPE_DEPS_RANDOM_BASE_H_
+#include "mediapipe/framework/port/integral_types.h"
 class RandomBase {
  public:
   // constructors. Don't do too much.
@@ -22,7 +24,8 @@ class RandomBase {
   virtual ~RandomBase();
   virtual float RandFloat() { return 0; }
-  virtual int UnbiasedUniform(int n) { return n - 1; }
+  virtual int UnbiasedUniform(int n) { return 0; }
+  virtual uint64 UnbiasedUniform64(uint64 n) { return 0; }
 };
 #endif // MEDIAPIPE_DEPS_RANDOM_BASE_H_
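With the `integral_types.h` include and the new `UnbiasedUniform64()` hook, `RandomBase` stays an interface with do-nothing defaults; concrete randomness comes from a subclass. A hypothetical subclass backed by `<random>` (not part of this commit) might look like:

```c++
// Hypothetical RandomBase implementation using std::mt19937_64; the real
// MediaPipe implementations live elsewhere and may differ.
#include <cstdint>
#include <random>

#include "mediapipe/framework/deps/random_base.h"

class StdRandom : public RandomBase {
 public:
  explicit StdRandom(uint64_t seed) : gen_(seed) {}

  float RandFloat() override {
    return std::uniform_real_distribution<float>(0.0f, 1.0f)(gen_);
  }

  int UnbiasedUniform(int n) override {
    // uniform_int_distribution draws uniformly (without bias) over [0, n).
    return n > 0 ? std::uniform_int_distribution<int>(0, n - 1)(gen_) : 0;
  }

  uint64 UnbiasedUniform64(uint64 n) override {
    return n > 0 ? std::uniform_int_distribution<uint64>(0, n - 1)(gen_) : 0;
  }

 private:
  std::mt19937_64 gen_;
};
```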


@ -159,7 +159,7 @@ class FunctionRegistry {
FunctionRegistry& operator=(const FunctionRegistry&) = delete; FunctionRegistry& operator=(const FunctionRegistry&) = delete;
RegistrationToken Register(const std::string& name, Function func) RegistrationToken Register(const std::string& name, Function func)
LOCKS_EXCLUDED(lock_) { ABSL_LOCKS_EXCLUDED(lock_) {
std::string normalized_name = GetNormalizedName(name); std::string normalized_name = GetNormalizedName(name);
absl::WriterMutexLock lock(&lock_); absl::WriterMutexLock lock(&lock_);
std::string adjusted_name = GetAdjustedName(normalized_name); std::string adjusted_name = GetAdjustedName(normalized_name);
@ -189,7 +189,7 @@ class FunctionRegistry {
std::tuple<Args...>>::value, std::tuple<Args...>>::value,
int> = 0> int> = 0>
ReturnType Invoke(const std::string& name, Args2&&... args) ReturnType Invoke(const std::string& name, Args2&&... args)
LOCKS_EXCLUDED(lock_) { ABSL_LOCKS_EXCLUDED(lock_) {
Function function; Function function;
{ {
absl::ReaderMutexLock lock(&lock_); absl::ReaderMutexLock lock(&lock_);
@ -207,14 +207,14 @@ class FunctionRegistry {
// Namespaces in |name| and |ns| are separated by kNameSep. // Namespaces in |name| and |ns| are separated by kNameSep.
template <typename... Args2> template <typename... Args2>
ReturnType Invoke(const std::string& ns, const std::string& name, ReturnType Invoke(const std::string& ns, const std::string& name,
Args2&&... args) LOCKS_EXCLUDED(lock_) { Args2&&... args) ABSL_LOCKS_EXCLUDED(lock_) {
return Invoke(GetQualifiedName(ns, name), args...); return Invoke(GetQualifiedName(ns, name), args...);
} }
// Note that it's possible for registered implementations to be subsequently // Note that it's possible for registered implementations to be subsequently
// unregistered, though this will never happen with registrations made via // unregistered, though this will never happen with registrations made via
// MEDIAPIPE_REGISTER_FACTORY_FUNCTION. // MEDIAPIPE_REGISTER_FACTORY_FUNCTION.
bool IsRegistered(const std::string& name) const LOCKS_EXCLUDED(lock_) { bool IsRegistered(const std::string& name) const ABSL_LOCKS_EXCLUDED(lock_) {
absl::ReaderMutexLock lock(&lock_); absl::ReaderMutexLock lock(&lock_);
return functions_.count(name) != 0; return functions_.count(name) != 0;
} }
@ -222,7 +222,7 @@ class FunctionRegistry {
// Returns true if the specified factory function is available. // Returns true if the specified factory function is available.
// Namespaces in |name| and |ns| are separated by kNameSep. // Namespaces in |name| and |ns| are separated by kNameSep.
bool IsRegistered(const std::string& ns, const std::string& name) const bool IsRegistered(const std::string& ns, const std::string& name) const
LOCKS_EXCLUDED(lock_) { ABSL_LOCKS_EXCLUDED(lock_) {
return IsRegistered(GetQualifiedName(ns, name)); return IsRegistered(GetQualifiedName(ns, name));
} }
@ -231,7 +231,7 @@ class FunctionRegistry {
// unregistered, though this will never happen with registrations made via // unregistered, though this will never happen with registrations made via
// MEDIAPIPE_REGISTER_FACTORY_FUNCTION. // MEDIAPIPE_REGISTER_FACTORY_FUNCTION.
std::unordered_set<std::string> GetRegisteredNames() const std::unordered_set<std::string> GetRegisteredNames() const
LOCKS_EXCLUDED(lock_) { ABSL_LOCKS_EXCLUDED(lock_) {
absl::ReaderMutexLock lock(&lock_); absl::ReaderMutexLock lock(&lock_);
std::unordered_set<std::string> names; std::unordered_set<std::string> names;
std::for_each(functions_.cbegin(), functions_.cend(), std::for_each(functions_.cbegin(), functions_.cend(),
@ -287,7 +287,7 @@ class FunctionRegistry {
private: private:
mutable absl::Mutex lock_; mutable absl::Mutex lock_;
std::unordered_map<std::string, Function> functions_ GUARDED_BY(lock_); std::unordered_map<std::string, Function> functions_ ABSL_GUARDED_BY(lock_);
// For names included in NamespaceWhitelist, strips the namespace. // For names included in NamespaceWhitelist, strips the namespace.
std::string GetAdjustedName(const std::string& name) { std::string GetAdjustedName(const std::string& name) {


@@ -93,8 +93,8 @@ class ThreadPool {
   absl::Mutex mutex_;
   absl::CondVar condition_;
-  bool stopped_ GUARDED_BY(mutex_) = false;
-  std::deque<std::function<void()>> tasks_ GUARDED_BY(mutex_);
+  bool stopped_ ABSL_GUARDED_BY(mutex_) = false;
+  std::deque<std::function<void()>> tasks_ ABSL_GUARDED_BY(mutex_);
   ThreadOptions thread_options_;
 };
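The two renamed members above (`stopped_` and `tasks_`, both guarded by `mutex_`) are essentially the whole synchronization surface of the pool. Below is a compact, self-contained sketch of the structure those members imply; it is an illustration under that assumption, not MediaPipe's actual `ThreadPool`, and the names `TinyThreadPool`, `Schedule`, and `WorkLoop` are made up.

```c++
// Sketch of a mutex/condvar thread pool with the same guarded members as the
// header above: a stop flag and a task deque, both protected by one mutex.
#include <deque>
#include <functional>
#include <thread>
#include <vector>

#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"

class TinyThreadPool {
 public:
  explicit TinyThreadPool(int num_threads) {
    for (int i = 0; i < num_threads; ++i) {
      workers_.emplace_back([this] { WorkLoop(); });
    }
  }

  ~TinyThreadPool() {
    {
      absl::MutexLock lock(&mutex_);
      stopped_ = true;
      condition_.SignalAll();
    }
    for (std::thread& t : workers_) t.join();
  }

  void Schedule(std::function<void()> task) ABSL_LOCKS_EXCLUDED(mutex_) {
    absl::MutexLock lock(&mutex_);
    tasks_.push_back(std::move(task));
    condition_.Signal();
  }

 private:
  void WorkLoop() ABSL_LOCKS_EXCLUDED(mutex_) {
    while (true) {
      std::function<void()> task;
      {
        absl::MutexLock lock(&mutex_);
        while (tasks_.empty() && !stopped_) condition_.Wait(&mutex_);
        if (tasks_.empty() && stopped_) return;  // drained and shut down
        task = std::move(tasks_.front());
        tasks_.pop_front();
      }
      task();  // run outside the lock
    }
  }

  std::vector<std::thread> workers_;
  absl::Mutex mutex_;
  absl::CondVar condition_;
  bool stopped_ ABSL_GUARDED_BY(mutex_) = false;
  std::deque<std::function<void()>> tasks_ ABSL_GUARDED_BY(mutex_);
};
```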


@ -87,6 +87,7 @@ def _encode_binary_proto_impl(ctx):
ctx.executable._proto_compiler.path, ctx.executable._proto_compiler.path,
"--encode=" + ctx.attr.message_type, "--encode=" + ctx.attr.message_type,
"--proto_path=" + ctx.genfiles_dir.path, "--proto_path=" + ctx.genfiles_dir.path,
"--proto_path=" + ctx.bin_dir.path,
"--proto_path=.", "--proto_path=.",
] + path_list + file_list + ] + path_list + file_list +
["<", textpb.path, ">", binarypb.path], ["<", textpb.path, ">", binarypb.path],
@ -136,6 +137,7 @@ def _generate_proto_descriptor_set_impl(ctx):
arguments = [ arguments = [
"--descriptor_set_out=%s" % descriptor.path, "--descriptor_set_out=%s" % descriptor.path,
"--proto_path=" + ctx.genfiles_dir.path, "--proto_path=" + ctx.genfiles_dir.path,
"--proto_path=" + ctx.bin_dir.path,
"--proto_path=.", "--proto_path=.",
] + ] +
[s.path for s in all_protos.to_list()], [s.path for s in all_protos.to_list()],


@@ -163,8 +163,8 @@ class OutputStreamPollerImpl : public GraphOutputStream {
  private:
   absl::Mutex mutex_;
-  absl::CondVar handler_condvar_ GUARDED_BY(mutex_);
-  bool graph_has_error_ GUARDED_BY(mutex_);
+  absl::CondVar handler_condvar_ ABSL_GUARDED_BY(mutex_);
+  bool graph_has_error_ ABSL_GUARDED_BY(mutex_);
 };
 } // namespace internal


@ -94,6 +94,7 @@ TEST(GraphValidationTest, InitializeGraphFromProtos) {
} }
node { node {
calculator: "PassThroughCalculator" calculator: "PassThroughCalculator"
name: "passthroughgraph__PassThroughCalculator"
input_stream: "stream_2" input_stream: "stream_2"
output_stream: "stream_3" output_stream: "stream_3"
} }
@ -201,7 +202,7 @@ TEST(GraphValidationTest, InitializeTemplateFromProtos) {
output_stream: "stream_2" output_stream: "stream_2"
} }
node { node {
name: "__sg0_stream_8" name: "passthroughgraph__stream_8"
calculator: "PassThroughCalculator" calculator: "PassThroughCalculator"
input_stream: "stream_2" input_stream: "stream_2"
output_stream: "stream_3" output_stream: "stream_3"
@ -263,6 +264,7 @@ TEST(GraphValidationTest, OptionalSubgraphStreams) {
} }
node { node {
calculator: "PassThroughCalculator" calculator: "PassThroughCalculator"
name: "passthroughgraph__PassThroughCalculator"
input_stream: "foo_bar" input_stream: "foo_bar"
output_stream: "foo_out" output_stream: "foo_out"
} }
@ -379,6 +381,7 @@ TEST(GraphValidationTest, OptionalInputNotProvidedForSubgraphCalculator) {
output_stream: "OUTPUT:foo_out" output_stream: "OUTPUT:foo_out"
node { node {
calculator: "OptionalSideInputTestCalculator" calculator: "OptionalSideInputTestCalculator"
name: "passthroughgraph__OptionalSideInputTestCalculator"
output_stream: "OUTPUT:foo_out" output_stream: "OUTPUT:foo_out"
} }
executor {} executor {}


@@ -233,7 +233,7 @@ Timestamp InputStreamManager::MinTimestampOrBound(bool* is_empty) const {
 }
 Timestamp InputStreamManager::MinTimestampOrBoundHelper() const
-    EXCLUSIVE_LOCKS_REQUIRED(stream_mutex_) {
+    ABSL_EXCLUSIVE_LOCKS_REQUIRED(stream_mutex_) {
   return queue_.empty() ? next_timestamp_bound_ : queue_.front().Timestamp();
 }


@ -73,7 +73,7 @@ class InputStreamManager {
// Reset the input stream for another run of the graph (i.e. another // Reset the input stream for another run of the graph (i.e. another
// image/video/audio). // image/video/audio).
void PrepareForRun() LOCKS_EXCLUDED(stream_mutex_); void PrepareForRun() ABSL_LOCKS_EXCLUDED(stream_mutex_);
// Adds a list of timestamped packets. Sets "notify" to true if the queue // Adds a list of timestamped packets. Sets "notify" to true if the queue
// becomes non-empty. Does nothing if the input stream is closed. // becomes non-empty. Does nothing if the input stream is closed.
@ -96,7 +96,7 @@ class InputStreamManager {
::mediapipe::Status MovePackets(std::list<Packet>* container, bool* notify); ::mediapipe::Status MovePackets(std::list<Packet>* container, bool* notify);
// Closes the input stream. This function can be called multiple times. // Closes the input stream. This function can be called multiple times.
void Close() LOCKS_EXCLUDED(stream_mutex_); void Close() ABSL_LOCKS_EXCLUDED(stream_mutex_);
// Sets the bound on the next timestamp to be added to the input stream. // Sets the bound on the next timestamp to be added to the input stream.
// Sets "notify" to true if the bound is advanced while the packet queue is // Sets "notify" to true if the bound is advanced while the packet queue is
@ -104,24 +104,24 @@ class InputStreamManager {
// DisableTimestamps() is called. Does nothing if the input stream is // DisableTimestamps() is called. Does nothing if the input stream is
// closed. // closed.
::mediapipe::Status SetNextTimestampBound(Timestamp bound, bool* notify) ::mediapipe::Status SetNextTimestampBound(Timestamp bound, bool* notify)
LOCKS_EXCLUDED(stream_mutex_); ABSL_LOCKS_EXCLUDED(stream_mutex_);
// Returns the smallest timestamp at which we might see an input in // Returns the smallest timestamp at which we might see an input in
// this input stream. This is the timestamp of the first item in the queue if // this input stream. This is the timestamp of the first item in the queue if
// the queue is non-empty, or the next timestamp bound if it is empty. // the queue is non-empty, or the next timestamp bound if it is empty.
// Sets is_empty to queue_.empty() if it is not nullptr. // Sets is_empty to queue_.empty() if it is not nullptr.
Timestamp MinTimestampOrBound(bool* is_empty) const Timestamp MinTimestampOrBound(bool* is_empty) const
LOCKS_EXCLUDED(stream_mutex_); ABSL_LOCKS_EXCLUDED(stream_mutex_);
// Turns off the use of packet timestamps. // Turns off the use of packet timestamps.
void DisableTimestamps(); void DisableTimestamps();
// Returns true iff the queue is empty. // Returns true iff the queue is empty.
bool IsEmpty() const LOCKS_EXCLUDED(stream_mutex_); bool IsEmpty() const ABSL_LOCKS_EXCLUDED(stream_mutex_);
// If the queue is not empty, returns the packet at the front of the queue. // If the queue is not empty, returns the packet at the front of the queue.
// Otherwise, returns an empty packet. // Otherwise, returns an empty packet.
Packet QueueHead() const LOCKS_EXCLUDED(stream_mutex_); Packet QueueHead() const ABSL_LOCKS_EXCLUDED(stream_mutex_);
// Advances time to timestamp. Pops and returns the packet in the queue // Advances time to timestamp. Pops and returns the packet in the queue
// with a matching timestamp, if it exists. Time can be advanced to any // with a matching timestamp, if it exists. Time can be advanced to any
@ -134,26 +134,26 @@ class InputStreamManager {
// Timestamp::Done() after the pop. // Timestamp::Done() after the pop.
   Packet PopPacketAtTimestamp(Timestamp timestamp, int* num_packets_dropped,
                               bool* stream_is_done)
-      LOCKS_EXCLUDED(stream_mutex_);
+      ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Pops and returns the packet at the head of the queue if the queue is
   // non-empty. Sets "stream_is_done" if the next timestamp bound reaches
   // Timestamp::Done() after the pop.
-  Packet PopQueueHead(bool* stream_is_done) LOCKS_EXCLUDED(stream_mutex_);
+  Packet PopQueueHead(bool* stream_is_done) ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Returns the number of packets in the queue.
-  int QueueSize() const LOCKS_EXCLUDED(stream_mutex_);
+  int QueueSize() const ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Returns true iff the queue is full.
-  bool IsFull() const LOCKS_EXCLUDED(stream_mutex_);
+  bool IsFull() const ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Returns the max queue size. -1 indicates that there is no maximum.
-  int MaxQueueSize() const LOCKS_EXCLUDED(stream_mutex_);
+  int MaxQueueSize() const ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Sets the maximum queue size for the stream. Used to determine when the
   // callbacks for becomes_full and becomes_not_full should be invoked. A value
   // of -1 means that there is no maximum queue size.
-  void SetMaxQueueSize(int max_queue_size) LOCKS_EXCLUDED(stream_mutex_);
+  void SetMaxQueueSize(int max_queue_size) ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // If there are equal to or more than n packets in the queue, this function
   // returns the min timestamp of among the latest n packets of the queue. If
@@ -161,12 +161,12 @@ class InputStreamManager {
   // Timestamp::Unset().
   // NOTE: This is a public API intended for FixedSizeInputStreamHandler only.
   Timestamp GetMinTimestampAmongNLatest(int n) const
-      LOCKS_EXCLUDED(stream_mutex_);
+      ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // pop_front()s packets that are earlier than the given timestamp.
   // NOTE: This is a public API intended for FixedSizeInputStreamHandler only.
   void ErasePacketsEarlierThan(Timestamp timestamp)
-      LOCKS_EXCLUDED(stream_mutex_);
+      ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // If a maximum queue size is specified (!= -1), these callbacks that are
   // invoked when the input queue becomes full (>= max_queue_size_) or when it
@@ -184,24 +184,24 @@ class InputStreamManager {
   template <typename Container>
   ::mediapipe::Status AddOrMovePacketsInternal(Container container,
                                                bool* notify)
-      LOCKS_EXCLUDED(stream_mutex_);
+      ABSL_LOCKS_EXCLUDED(stream_mutex_);
   // Returns true if the next timestamp bound reaches Timestamp::Done().
-  bool IsDone() const EXCLUSIVE_LOCKS_REQUIRED(stream_mutex_);
+  bool IsDone() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(stream_mutex_);
   // Returns the smallest timestamp at which this stream might see an input.
   Timestamp MinTimestampOrBoundHelper() const;
   mutable absl::Mutex stream_mutex_;
-  std::deque<Packet> queue_ GUARDED_BY(stream_mutex_);
+  std::deque<Packet> queue_ ABSL_GUARDED_BY(stream_mutex_);
   // The number of packets added to queue_. Used to verify a packet at
   // Timestamp::PostStream() is the only Packet in the stream.
-  int64 num_packets_added_ GUARDED_BY(stream_mutex_);
-  Timestamp next_timestamp_bound_ GUARDED_BY(stream_mutex_);
+  int64 num_packets_added_ ABSL_GUARDED_BY(stream_mutex_);
+  Timestamp next_timestamp_bound_ ABSL_GUARDED_BY(stream_mutex_);
   // The |timestamp| argument passed to the last SelectAtTimestamp() call.
   // Ignored if enable_timestamps_ is false.
-  Timestamp last_select_timestamp_ GUARDED_BY(stream_mutex_);
-  bool closed_ GUARDED_BY(stream_mutex_);
+  Timestamp last_select_timestamp_ ABSL_GUARDED_BY(stream_mutex_);
+  bool closed_ ABSL_GUARDED_BY(stream_mutex_);
   // True if packet timestamps are used.
   bool enable_timestamps_ = true;
   std::string name_;
@@ -211,7 +211,7 @@ class InputStreamManager {
   Packet header_;
   // The maximum queue size for this stream if set.
-  int max_queue_size_ GUARDED_BY(stream_mutex_) = -1;
+  int max_queue_size_ ABSL_GUARDED_BY(stream_mutex_) = -1;
   // Callback to notify the framework that we have hit the maximum queue size.
   QueueSizeCallback becomes_full_callback_;
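Most of the hunks in this commit apply the same mechanical substitution: the bare thread-safety macros (GUARDED_BY, LOCKS_EXCLUDED, EXCLUSIVE_LOCKS_REQUIRED, NO_THREAD_SAFETY_ANALYSIS) become their ABSL_-prefixed counterparts from absl/base/thread_annotations.h. As a reminder of what the annotations express, here is a minimal, self-contained sketch; the class and members are illustrative and are not MediaPipe code:

    #include <deque>

    #include "absl/base/thread_annotations.h"
    #include "absl/synchronization/mutex.h"

    // Illustrative queue whose internal state is protected by a mutex.
    class AnnotatedQueue {
     public:
      // Callers must not already hold queue_mutex_.
      void Push(int value) ABSL_LOCKS_EXCLUDED(queue_mutex_) {
        absl::MutexLock lock(&queue_mutex_);
        queue_.push_back(value);
      }

      // Callers must hold queue_mutex_ for the duration of the call.
      int SizeLocked() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(queue_mutex_) {
        return static_cast<int>(queue_.size());
      }

     private:
      mutable absl::Mutex queue_mutex_;
      // Accessing queue_ without holding queue_mutex_ is flagged at compile time.
      std::deque<int> queue_ ABSL_GUARDED_BY(queue_mutex_);
    };

With Clang's -Wthread-safety enabled, touching queue_ without the mutex, or calling Push() while already holding it, produces a compile-time warning; the macro rename does not change this behavior, only the spelling.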

View File

@@ -92,7 +92,7 @@ class OutputStreamHandler {
   // resets data memebers.
   void PrepareForRun(
       const std::function<void(::mediapipe::Status)>& error_callback)
-      LOCKS_EXCLUDED(timestamp_mutex_);
+      ABSL_LOCKS_EXCLUDED(timestamp_mutex_);
   // Marks the output streams as started and propagates any changes made in
   // Calculator::Open().
@@ -106,10 +106,11 @@ class OutputStreamHandler {
   // Propagates timestamp directly if there is no ongoing parallel invocation.
   // Otherwise, updates task_timestamp_bound_.
   void UpdateTaskTimestampBound(Timestamp timestamp)
-      LOCKS_EXCLUDED(timestamp_mutex_);
+      ABSL_LOCKS_EXCLUDED(timestamp_mutex_);
   // Invoked after a call to Calculator::Process() function.
-  void PostProcess(Timestamp input_timestamp) LOCKS_EXCLUDED(timestamp_mutex_);
+  void PostProcess(Timestamp input_timestamp)
+      ABSL_LOCKS_EXCLUDED(timestamp_mutex_);
   // Propagates the output shards and closes all managed output streams.
   void Close(OutputStreamShardSet* output_shards);
@@ -133,7 +134,8 @@ class OutputStreamHandler {
                          OutputStreamShardSet* output_shards);
   // The packets and timestamp propagation logic for parallel execution.
-  virtual void PropagationLoop() EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_) = 0;
+  virtual void PropagationLoop()
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_) = 0;
   // Collection of all OutputStreamManager objects.
   OutputStreamManagerSet output_stream_managers_;
@@ -144,10 +146,11 @@ class OutputStreamHandler {
   absl::Mutex timestamp_mutex_;
   // A set of the completed input timestamps in ascending order.
-  std::set<Timestamp> completed_input_timestamps_ GUARDED_BY(timestamp_mutex_);
+  std::set<Timestamp> completed_input_timestamps_
+      ABSL_GUARDED_BY(timestamp_mutex_);
   // The current minimum timestamp for which a new packet could possibly arrive.
   // TODO: Rename the variable to be more descriptive.
-  Timestamp task_timestamp_bound_ GUARDED_BY(timestamp_mutex_);
+  Timestamp task_timestamp_bound_ ABSL_GUARDED_BY(timestamp_mutex_);
   // PropagateionState indicates the current state of the propagation process.
   // There are eight possible transitions:
@@ -187,7 +190,7 @@ class OutputStreamHandler {
     kPropagatingBound = 2,  //
     kPropagationPending = 3
   };
-  PropagationState propagation_state_ GUARDED_BY(timestamp_mutex_) = kIdle;
+  PropagationState propagation_state_ ABSL_GUARDED_BY(timestamp_mutex_) = kIdle;
 };
 using OutputStreamHandlerRegistry = GlobalFactoryRegistry<

View File

@@ -118,8 +118,8 @@ class OutputStreamManager {
   std::vector<Mirror> mirrors_;
   mutable absl::Mutex stream_mutex_;
-  Timestamp next_timestamp_bound_ GUARDED_BY(stream_mutex_);
-  bool closed_ GUARDED_BY(stream_mutex_);
+  Timestamp next_timestamp_bound_ ABSL_GUARDED_BY(stream_mutex_);
+  bool closed_ ABSL_GUARDED_BY(stream_mutex_);
 };
 }  // namespace mediapipe

View File

@@ -135,15 +135,15 @@ class GeneratorScheduler {
   void GenerateAndScheduleNext(int generator_index,
                                std::map<std::string, Packet>* side_packets,
                                std::unique_ptr<PacketSet> input_side_packet_set)
-      LOCKS_EXCLUDED(mutex_);
+      ABSL_LOCKS_EXCLUDED(mutex_);
   // Iterate through all generators in the config, scheduling any that
   // are runnable (and haven't been scheduled yet).
   void ScheduleAllRunnableGenerators(
-      std::map<std::string, Packet>* side_packets) LOCKS_EXCLUDED(mutex_);
+      std::map<std::string, Packet>* side_packets) ABSL_LOCKS_EXCLUDED(mutex_);
   // Waits until there are no pending tasks.
-  void WaitUntilIdle() LOCKS_EXCLUDED(mutex_);
+  void WaitUntilIdle() ABSL_LOCKS_EXCLUDED(mutex_);
   // Stores the indexes of the packet generators that were not scheduled (or
   // rather, not executed) in non_scheduled_generators. Returns the combined
@@ -158,26 +158,26 @@ class GeneratorScheduler {
   // Run all the application thread tasks (which are kept track of in
   // app_thread_tasks_).
-  void RunApplicationThreadTasks() LOCKS_EXCLUDED(app_thread_mutex_);
+  void RunApplicationThreadTasks() ABSL_LOCKS_EXCLUDED(app_thread_mutex_);
   const ValidatedGraphConfig* const validated_graph_;
   ::mediapipe::Executor* executor_;
   mutable absl::Mutex mutex_;
   // The number of pending tasks.
-  int num_tasks_ GUARDED_BY(mutex_) = 0;
+  int num_tasks_ ABSL_GUARDED_BY(mutex_) = 0;
   // This condition variable is signaled when num_tasks_ becomes 0.
   absl::CondVar idle_condvar_;
   // Accumulates the error statuses while running the packet generators.
-  std::vector<::mediapipe::Status> statuses_ GUARDED_BY(mutex_);
+  std::vector<::mediapipe::Status> statuses_ ABSL_GUARDED_BY(mutex_);
   // scheduled_generators_[i] is true if the packet generator with index i was
   // scheduled (or rather, executed).
-  std::vector<bool> scheduled_generators_ GUARDED_BY(mutex_);
+  std::vector<bool> scheduled_generators_ ABSL_GUARDED_BY(mutex_);
   absl::Mutex app_thread_mutex_;
   // Tasks to be executed on the application thread.
   std::deque<std::function<void()>> app_thread_tasks_
-      GUARDED_BY(app_thread_mutex_);
+      ABSL_GUARDED_BY(app_thread_mutex_);
   std::unique_ptr<internal::DelegatingExecutor> delegating_executor_;
 };

View File

@@ -116,6 +116,7 @@ cc_library(
         "//mediapipe/framework/port:logging",
         "//mediapipe/framework/port:ret_check",
         "//mediapipe/framework/port:status",
+        "//mediapipe/framework/tool:name_util",
         "//mediapipe/framework/tool:tag_map",
         "//mediapipe/framework/tool:validate_name",
         "@com_google_absl//absl/memory",

View File

@@ -27,6 +27,7 @@
 #include "mediapipe/framework/port/ret_check.h"
 #include "mediapipe/framework/port/status.h"
 #include "mediapipe/framework/profiler/profiler_resource_util.h"
+#include "mediapipe/framework/tool/name_util.h"
 #include "mediapipe/framework/tool/tag_map.h"
 #include "mediapipe/framework/tool/validate_name.h"
@@ -133,7 +134,7 @@ void GraphProfiler::Initialize(
   for (int node_id = 0;
        node_id < validated_graph_config.CalculatorInfos().size(); ++node_id) {
     std::string node_name =
-        CanonicalNodeName(validated_graph_config.Config(), node_id);
+        tool::CanonicalNodeName(validated_graph_config.Config(), node_id);
     CalculatorProfile profile;
     profile.set_name(node_name);
     InitializeTimeHistogram(interval_size_usec, num_intervals,

View File

@@ -122,15 +122,15 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   // the profiler disables itself and returns an empty stub if Initialize() is
   // called more than once.
   void Initialize(const ValidatedGraphConfig& validated_graph_config)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Sets the profiler clock.
   void SetClock(const std::shared_ptr<mediapipe::Clock>& clock)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Gets the profiler clock.
   const std::shared_ptr<mediapipe::Clock> GetClock() const
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Pauses profiling. No-op if already paused.
   void Pause();
@@ -138,7 +138,7 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   void Resume();
   // Resets cumulative profiling data. This only resets the information about
   // Process() and does NOT affect information for Open() and Close() methods.
-  void Reset() LOCKS_EXCLUDED(profiler_mutex_);
+  void Reset() ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Begins profiling for a single graph run.
   ::mediapipe::Status Start(::mediapipe::Executor* executor);
   // Ends profiling for a single graph run.
@@ -150,8 +150,8 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   // Collects the runtime profile for Open(), Process(), and Close() of each
   // calculator in the graph. May be called at any time after the graph has been
   // initialized.
-  ::mediapipe::Status GetCalculatorProfiles(
-      std::vector<CalculatorProfile>*) const LOCKS_EXCLUDED(profiler_mutex_);
+  ::mediapipe::Status GetCalculatorProfiles(std::vector<CalculatorProfile>*)
+      const ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Writes recent profiling and tracing data to a file specified in the
   // ProfilerConfig. Includes events since the previous call to WriteProfile.
@@ -234,7 +234,7 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   // It is the responsibility of the caller to make sure the |timestamp_usec|
   // is valid for profiling.
   void AddPacketInfo(const TraceEvent& packet_info)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   static void InitializeTimeHistogram(int64 interval_size_usec,
                                       int64 num_intervals,
                                       TimeHistogram* histogram);
@@ -273,10 +273,10 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   void SetOpenRuntime(const CalculatorContext& calculator_context,
                       int64 start_time_usec, int64 end_time_usec)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   void SetCloseRuntime(const CalculatorContext& calculator_context,
                        int64 start_time_usec, int64 end_time_usec)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Updates the input streams profiles for the calculator and returns the
   // minimum |source_process_start_usec| of all input packets, excluding empty
@@ -289,7 +289,7 @@ class GraphProfiler : public std::enable_shared_from_this<ProfilingContext> {
   // Requires ReaderLock for is_profiling_.
   void AddProcessSample(const CalculatorContext& calculator_context,
                         int64 start_time_usec, int64 end_time_usec)
-      LOCKS_EXCLUDED(profiler_mutex_);
+      ABSL_LOCKS_EXCLUDED(profiler_mutex_);
   // Helper method to get trace_log_path. If the trace_log_path is empty and
   // tracing is enabled, this function returns a default platform dependent

View File

@@ -1284,5 +1284,33 @@ TEST_F(GraphTracerE2ETest, GpuTracing) {
   EXPECT_NE(nullptr, graph_.profiler()->CreateGlProfilingHelper());
 }
+
+// This test shows that ~CalculatorGraph() can complete successfully, even when
+// the periodic profiler output is enabled. If periodic profiler output is not
+// stopped in ~CalculatorGraph(), it will deadlock at ~Executor().
+TEST_F(GraphTracerE2ETest, DestructGraph) {
+  std::string log_path = absl::StrCat(getenv("TEST_TMPDIR"), "/log_file_");
+  SetUpPassThroughGraph();
+  graph_config_.mutable_profiler_config()->set_trace_enabled(true);
+  graph_config_.mutable_profiler_config()->set_trace_log_path(log_path);
+  graph_config_.set_num_threads(4);
+  // Callbacks to control the LambdaCalculator.
+  ProcessFunction wait_0 = [&](const InputStreamShardSet& inputs,
+                               OutputStreamShardSet* outputs) {
+    return PassThrough(inputs, outputs);
+  };
+  {
+    CalculatorGraph graph;
+    // Start the graph with the callback.
+    MP_ASSERT_OK(graph.Initialize(graph_config_,
+                                  {
+                                      {"callback_0", Adopt(new auto(wait_0))},
+                                  }));
+    MP_ASSERT_OK(graph.StartRun({}));
+    // Destroy the graph immediately.
+  }
+}
+
 }  // namespace
 }  // namespace mediapipe

View File

@@ -47,7 +47,7 @@ class ShardedMap {
       : ShardedMap(capacity, capacity / 10 + 1) {}
   // Returns the iterator to the entry for a key.
-  inline iterator find(const Key& key) NO_THREAD_SAFETY_ANALYSIS {
+  inline iterator find(const Key& key) ABSL_NO_THREAD_SAFETY_ANALYSIS {
     size_t shard = Index(key);
     mutexes_[shard].Lock();
     typename Map::iterator iter = maps_[shard].find(key);
@@ -67,7 +67,7 @@ class ShardedMap {
   // Adds an entry to the map and returns the iterator to it.
   inline std::pair<iterator, bool> insert(const value_type& val)
-      NO_THREAD_SAFETY_ANALYSIS {
+      ABSL_NO_THREAD_SAFETY_ANALYSIS {
     size_t shard = Index(val.first);
     mutexes_[shard].Lock();
     std::pair<typename Map::iterator, bool> p = maps_[shard].insert(val);
@@ -91,7 +91,7 @@ class ShardedMap {
   inline size_t size() const { return size_; }
   // Returns the iterator to the first element.
-  inline iterator begin() NO_THREAD_SAFETY_ANALYSIS {
+  inline iterator begin() ABSL_NO_THREAD_SAFETY_ANALYSIS {
     mutexes_[0].Lock();
     iterator result{0, maps_[0].begin(), this};
     result.NextEntryShard();
@@ -153,14 +153,14 @@ class ShardedMap {
     Iterator(size_t shard, map_iterator iter, ShardedMapPtr map)
         : shard_(shard), iter_(iter), map_(map) {}
     // Releases all resources.
-    inline void Clear() NO_THREAD_SAFETY_ANALYSIS {
+    inline void Clear() ABSL_NO_THREAD_SAFETY_ANALYSIS {
      if (map_ && iter_ != map_->maps_.back().end()) {
        map_->mutexes_[shard_].Unlock();
      }
      map_ = nullptr;
    }
    // Moves to the shard of the next entry.
-    void NextEntryShard() NO_THREAD_SAFETY_ANALYSIS {
+    void NextEntryShard() ABSL_NO_THREAD_SAFETY_ANALYSIS {
      size_t last = map_->maps_.size() - 1;
      while (iter_ == map_->maps_[shard_].end() && shard_ < last) {
        map_->mutexes_[shard_].Unlock();
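ShardedMap's find(), insert(), begin(), and the iterator helpers above carry ABSL_NO_THREAD_SAFETY_ANALYSIS because each call locks exactly one per-shard mutex and hands that lock off to the returned iterator, a hand-off Clang's static analysis cannot model. The underlying sharding idea is easier to see in a self-contained sketch (illustrative names, not the ShardedMap API):

    #include <array>
    #include <string>
    #include <unordered_map>

    #include "absl/synchronization/mutex.h"

    // The key's hash picks a shard, and only that shard's mutex is locked, so
    // threads touching different shards do not contend with each other.
    class ShardedCounter {
     public:
      void Increment(const std::string& key) {
        size_t shard = std::hash<std::string>()(key) % kNumShards;
        absl::MutexLock lock(&mutexes_[shard]);
        ++counts_[shard][key];
      }

      int Get(const std::string& key) {
        size_t shard = std::hash<std::string>()(key) % kNumShards;
        absl::MutexLock lock(&mutexes_[shard]);
        auto it = counts_[shard].find(key);
        return it == counts_[shard].end() ? 0 : it->second;
      }

     private:
      static constexpr size_t kNumShards = 16;
      std::array<absl::Mutex, kNumShards> mutexes_;
      std::array<std::unordered_map<std::string, int>, kNumShards> counts_;
    };

Unlike ShardedMap, this sketch releases the shard lock before returning; the real class keeps it held for the lifetime of the iterator, which is exactly why the analysis must be suppressed there.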

View File

@@ -238,7 +238,7 @@ void Scheduler::WaitUntilGraphInputStreamUnthrottled(
   }
   secondary_mutex->Unlock();
   ApplicationThreadAwait(
-      [this, seq_num]() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
+      [this, seq_num]() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
        return (unthrottle_seq_num_ != seq_num) || state_ == STATE_TERMINATED;
      });
   secondary_mutex->Lock();
@@ -255,7 +255,7 @@ void Scheduler::EmittedObservedOutput() {
 ::mediapipe::Status Scheduler::WaitForObservedOutput() {
   bool observed = false;
   ApplicationThreadAwait(
-      [this, &observed]() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
+      [this, &observed]() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
        observed = observed_output_signal_;
        observed_output_signal_ = false;
        waiting_for_observed_output_ = !observed && state_ != STATE_TERMINATED;
@@ -281,7 +281,7 @@ void Scheduler::EmittedObservedOutput() {
 ::mediapipe::Status Scheduler::WaitUntilDone() {
   RET_CHECK_NE(state_, STATE_NOT_STARTED);
-  ApplicationThreadAwait([this]() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
+  ApplicationThreadAwait([this]() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_) {
     return state_ == STATE_TERMINATED;
   });
   return ::mediapipe::OkStatus();

View File

@@ -70,13 +70,13 @@ class Scheduler {
   // have been closed, and no more calculators can be run).
   // This function can be called only after Start().
   // Runs application thread tasks while waiting.
-  ::mediapipe::Status WaitUntilDone() LOCKS_EXCLUDED(state_mutex_);
+  ::mediapipe::Status WaitUntilDone() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Wait until the running graph is in the idle mode, which is when nothing can
   // be scheduled and nothing is running in the worker threads. This function
   // can be called only after Start().
   // Runs application thread tasks while waiting.
-  ::mediapipe::Status WaitUntilIdle() LOCKS_EXCLUDED(state_mutex_);
+  ::mediapipe::Status WaitUntilIdle() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Wait until any graph input stream has been unthrottled.
   // This is meant to be used by CalculatorGraph::AddPacketToInputStream, which
@@ -86,14 +86,15 @@ class Scheduler {
   // This function can be called by multiple threads concurrently.
   // Runs application thread tasks while waiting.
   void WaitUntilGraphInputStreamUnthrottled(absl::Mutex* secondary_mutex)
-      LOCKS_EXCLUDED(state_mutex_) EXCLUSIVE_LOCKS_REQUIRED(secondary_mutex);
+      ABSL_LOCKS_EXCLUDED(state_mutex_)
+          ABSL_EXCLUSIVE_LOCKS_REQUIRED(secondary_mutex);
   // Wait until any observed output emits a packet. Like a semaphore,
   // this function returns immediately if an observed packet has already been
   // emitted since the previous call. This relies on the fact that the calls are
   // in sequence. Runs application thread tasks while waiting.
   // Returns ::mediapipe::OutOfRangeError if the graph terminated.
-  ::mediapipe::Status WaitForObservedOutput() LOCKS_EXCLUDED(state_mutex_);
+  ::mediapipe::Status WaitForObservedOutput() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Callback that is invoked by a node when it wants to be scheduled.
   // If the node is throttled, the call is ignored.
@@ -118,27 +119,28 @@ class Scheduler {
   void AddUnopenedSourceNode(CalculatorNode* node);
   // Adds |node| to |sources_queue_|.
-  void AddNodeToSourcesQueue(CalculatorNode* node) LOCKS_EXCLUDED(state_mutex_);
+  void AddNodeToSourcesQueue(CalculatorNode* node)
+      ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Assigns node to a scheduler queue.
   void AssignNodeToSchedulerQueue(CalculatorNode* node);
   // Pauses the scheduler. Does nothing if Cancel has been called.
-  void Pause() LOCKS_EXCLUDED(state_mutex_);
+  void Pause() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Resumes the scheduler.
-  void Resume() LOCKS_EXCLUDED(state_mutex_);
+  void Resume() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Aborts the scheduler if the graph is started but is not terminated; no-op
   // otherwise. For the graph to properly be cancelled, graph_->HasError()
   // must also return true.
-  void Cancel() LOCKS_EXCLUDED(state_mutex_);
+  void Cancel() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Returns true if scheduler is paused.
-  bool IsPaused() LOCKS_EXCLUDED(state_mutex_);
+  bool IsPaused() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Returns true if scheduler is terminated.
-  bool IsTerminated() LOCKS_EXCLUDED(state_mutex_);
+  bool IsTerminated() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Cleanup any remaining state after the run.
   void CleanupAfterRun();
@@ -148,11 +150,11 @@ class Scheduler {
   // Notifies the scheduler that a packet was added to a graph input stream.
   // The scheduler needs to check whether it is still deadlocked, and
   // unthrottle again if so.
-  void AddedPacketToGraphInputStream() LOCKS_EXCLUDED(state_mutex_);
-  void ThrottledGraphInputStream() LOCKS_EXCLUDED(state_mutex_);
-  void UnthrottledGraphInputStream() LOCKS_EXCLUDED(state_mutex_);
-  void EmittedObservedOutput() LOCKS_EXCLUDED(state_mutex_);
+  void AddedPacketToGraphInputStream() ABSL_LOCKS_EXCLUDED(state_mutex_);
+  void ThrottledGraphInputStream() ABSL_LOCKS_EXCLUDED(state_mutex_);
+  void UnthrottledGraphInputStream() ABSL_LOCKS_EXCLUDED(state_mutex_);
+  void EmittedObservedOutput() ABSL_LOCKS_EXCLUDED(state_mutex_);
   // Closes all source nodes at the next scheduling opportunity.
   void CloseAllSourceNodes();
@@ -212,7 +214,7 @@ class Scheduler {
   // Returns true if nothing can be scheduled and no tasks are running or
   // scheduled to run on the Executor.
-  bool IsIdle() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
+  bool IsIdle() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
   // Clean up active_sources_ by removing closed sources. If all the active
   // sources are closed, this will leave active_sources_ empty. If not, some
@@ -222,7 +224,8 @@ class Scheduler {
   // Adds the next layer of sources to the scheduler queue if the previous layer
   // has finished running.
   // Returns true if it scheduled any sources.
-  bool TryToScheduleNextSourceLayer() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
+  bool TryToScheduleNextSourceLayer()
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
   // Takes care of three different operations, as needed:
   // - activating sources;
@@ -230,10 +233,10 @@ class Scheduler {
   // - terminating the scheduler.
   // Thread-safe and reentrant.
   // TODO: analyze call sites, split it up further.
-  void HandleIdle() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
+  void HandleIdle() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
   // Terminates the scheduler. Should only be called by HandleIdle.
-  void Quit() EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
+  void Quit() ABSL_EXCLUSIVE_LOCKS_REQUIRED(state_mutex_);
   // Helper for the various Wait methods. Waits for the given condition,
   // running application thread tasks in the meantime.
@@ -257,7 +260,7 @@ class Scheduler {
   // Priority queue of source nodes ordered by layer and then source process
   // order. This stores the set of sources that are yet to be run.
   std::priority_queue<SchedulerQueue::Item> sources_queue_
-      GUARDED_BY(state_mutex_);
+      ABSL_GUARDED_BY(state_mutex_);
   // Source nodes with the smallest source layer are at the beginning of
   // unopened_sources_. Before the scheduler is started, all source nodes are
@@ -276,7 +279,7 @@ class Scheduler {
   // These correspond to the Wait* methods in this class.
   // Not all state changes need to signal this, only those that enter one of
   // the waitable states.
-  absl::CondVar state_cond_var_ GUARDED_BY(state_mutex_);
+  absl::CondVar state_cond_var_ ABSL_GUARDED_BY(state_mutex_);
   // Number of queues which are not idle.
   // Note: this indicates two slightly different things:
@@ -288,17 +291,18 @@ class Scheduler {
   // This is ok, because it happens within a single critical section, which is
   // guarded by state_mutex_. If we wanted to split this critical section, we
   // would have to separate a and b into two variables.
-  int non_idle_queue_count_ GUARDED_BY(state_mutex_) = 0;
+  int non_idle_queue_count_ ABSL_GUARDED_BY(state_mutex_) = 0;
   // Tasks to be executed on the application thread.
-  std::deque<std::function<void()>> app_thread_tasks_ GUARDED_BY(state_mutex_);
+  std::deque<std::function<void()>> app_thread_tasks_
+      ABSL_GUARDED_BY(state_mutex_);
   // Used by HandleIdle to avoid multiple concurrent executions.
   // We cannot simply hold a mutex throughout it, for two reasons:
   // - We need it to be reentrant, which Mutex does not support.
   // - We want simultaneous calls to return immediately instead of waiting,
   //   and Mutex's TryLock is not guaranteed to work.
-  bool handling_idle_ GUARDED_BY(state_mutex_) = false;
+  bool handling_idle_ ABSL_GUARDED_BY(state_mutex_) = false;
   // Mutex for the scheduler state and related things.
   // Note: state_ is declared as atomic so that its getter methods don't need
@@ -309,19 +313,19 @@ class Scheduler {
   std::atomic<State> state_ = ATOMIC_VAR_INIT(STATE_NOT_STARTED);
   // True if all graph input streams are closed.
-  bool graph_input_streams_closed_ GUARDED_BY(state_mutex_) = false;
+  bool graph_input_streams_closed_ ABSL_GUARDED_BY(state_mutex_) = false;
   // Number of throttled graph input streams.
-  int throttled_graph_input_stream_count_ GUARDED_BY(state_mutex_) = 0;
+  int throttled_graph_input_stream_count_ ABSL_GUARDED_BY(state_mutex_) = 0;
   // Used to stop WaitUntilGraphInputStreamUnthrottled.
-  int unthrottle_seq_num_ GUARDED_BY(state_mutex_) = 0;
+  int unthrottle_seq_num_ ABSL_GUARDED_BY(state_mutex_) = 0;
   // Used to stop WaitForObservedOutput.
-  bool observed_output_signal_ GUARDED_BY(state_mutex_) = false;
+  bool observed_output_signal_ ABSL_GUARDED_BY(state_mutex_) = false;
   // True if an application thread is waiting in WaitForObservedOutput.
-  bool waiting_for_observed_output_ GUARDED_BY(state_mutex_) = false;
+  bool waiting_for_observed_output_ ABSL_GUARDED_BY(state_mutex_) = false;
 };
 }  // namespace internal
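The comments above handling_idle_ explain why HandleIdle() cannot simply hold state_mutex_ for its entire duration: it has to be reentrant, and concurrent callers should return immediately rather than block. One common way to get that behavior is a guard flag checked under the mutex; a hedged, self-contained sketch of that pattern follows (the real Scheduler logic is more involved than this):

    #include "absl/base/thread_annotations.h"
    #include "absl/synchronization/mutex.h"

    // Sketch of the guard-flag idea: one caller does the idle work at a time;
    // reentrant or concurrent calls notice the flag and return immediately.
    class IdleHandler {
     public:
      void HandleIdle() ABSL_LOCKS_EXCLUDED(mutex_) {
        mutex_.Lock();
        if (handling_idle_) {
          // Someone (possibly this same thread, reentrantly) is already doing
          // the idle work; return instead of blocking for its duration.
          mutex_.Unlock();
          return;
        }
        handling_idle_ = true;
        mutex_.Unlock();

        DoIdleWork();  // Runs without the lock; may call HandleIdle() again.

        mutex_.Lock();
        handling_idle_ = false;
        mutex_.Unlock();
      }

     private:
      void DoIdleWork() {}  // Placeholder for the real work.

      absl::Mutex mutex_;
      bool handling_idle_ ABSL_GUARDED_BY(mutex_) = false;
    };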

View File

@@ -105,45 +105,45 @@ class SchedulerQueue : public TaskQueue {
   // NOTE: After calling SetRunning(true), the caller must call
   // SubmitWaitingTasksToExecutor since tasks may have been added while the
   // queue was not running.
-  void SetRunning(bool running) LOCKS_EXCLUDED(mutex_);
+  void SetRunning(bool running) ABSL_LOCKS_EXCLUDED(mutex_);
   // Gets the number of tasks that need to be submitted to the executor, and
   // updates num_pending_tasks_. If this method is called and returns a
   // non-zero value, the executor's AddTask method *must* be called for each
   // task returned, but it can be called without holding the lock.
-  int GetTasksToSubmitToExecutor() EXCLUSIVE_LOCKS_REQUIRED(mutex_);
+  int GetTasksToSubmitToExecutor() ABSL_EXCLUSIVE_LOCKS_REQUIRED(mutex_);
   // Submits tasks that are waiting (e.g. that were added while the queue was
   // not running) if the queue is running. The caller must not hold any mutex.
-  void SubmitWaitingTasksToExecutor() LOCKS_EXCLUDED(mutex_);
+  void SubmitWaitingTasksToExecutor() ABSL_LOCKS_EXCLUDED(mutex_);
   // Adds a node and a calculator context to the scheduler queue if the node is
   // not already running. Note that if the node was running, then it will be
   // rescheduled upon completion (after checking dependencies), so this call is
   // not lost.
   void AddNode(CalculatorNode* node, CalculatorContext* cc)
-      LOCKS_EXCLUDED(mutex_);
+      ABSL_LOCKS_EXCLUDED(mutex_);
   // Adds a node to the scheduler queue for an OpenNode() call.
-  void AddNodeForOpen(CalculatorNode* node) LOCKS_EXCLUDED(mutex_);
+  void AddNodeForOpen(CalculatorNode* node) ABSL_LOCKS_EXCLUDED(mutex_);
   // Adds an Item to queue_.
   void AddItemToQueue(Item&& item);
-  void CleanupAfterRun() LOCKS_EXCLUDED(mutex_);
+  void CleanupAfterRun() ABSL_LOCKS_EXCLUDED(mutex_);
  private:
   // Used internally by RunNextTask. Invokes ProcessNode or CloseNode, followed
   // by EndScheduling.
   void RunCalculatorNode(CalculatorNode* node, CalculatorContext* cc)
-      LOCKS_EXCLUDED(mutex_);
+      ABSL_LOCKS_EXCLUDED(mutex_);
   // Used internally by RunNextTask. Invokes OpenNode, followed by
   // CheckIfBecameReady.
-  void OpenCalculatorNode(CalculatorNode* node) LOCKS_EXCLUDED(mutex_);
+  void OpenCalculatorNode(CalculatorNode* node) ABSL_LOCKS_EXCLUDED(mutex_);
   // Checks whether the queue has no queued nodes or pending tasks.
-  bool IsIdle() EXCLUSIVE_LOCKS_REQUIRED(mutex_);
+  bool IsIdle() ABSL_EXCLUSIVE_LOCKS_REQUIRED(mutex_);
   Executor* executor_ = nullptr;
@@ -154,16 +154,16 @@ class SchedulerQueue : public TaskQueue {
   // decrements it. The queue is running if running_count_ > 0. A running
   // queue will submit tasks to the executor.
   // Invariant: running_count_ <= 1.
-  int running_count_ GUARDED_BY(mutex_) = 0;
+  int running_count_ ABSL_GUARDED_BY(mutex_) = 0;
   // Number of tasks added to the Executor and not yet complete.
-  int num_pending_tasks_ GUARDED_BY(mutex_);
+  int num_pending_tasks_ ABSL_GUARDED_BY(mutex_);
   // Number of tasks that need to be added to the Executor.
-  int num_tasks_to_add_ GUARDED_BY(mutex_);
+  int num_tasks_to_add_ ABSL_GUARDED_BY(mutex_);
   // Queue of nodes that need to be run.
-  std::priority_queue<Item> queue_ GUARDED_BY(mutex_);
+  std::priority_queue<Item> queue_ ABSL_GUARDED_BY(mutex_);
   SchedulerShared* const shared_;

View File

@@ -67,7 +67,7 @@ class FixedSizeInputStreamHandler : public DefaultInputStreamHandler {
  private:
   // Drops packets if all input streams exceed trigger_queue_size.
-  void EraseAllSurplus() EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
+  void EraseAllSurplus() ABSL_EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
     Timestamp min_timestamp_all_streams = Timestamp::Max();
     for (const auto& stream : input_stream_managers_) {
       // Check whether every InputStreamImpl grew beyond trigger_queue_size.
@@ -127,7 +127,8 @@ class FixedSizeInputStreamHandler : public DefaultInputStreamHandler {
   // Keeps only the most recent target_queue_size packets in each stream
   // exceeding trigger_queue_size. Also, discards all packets older than the
   // first kept timestamp on any stream.
-  void EraseAnySurplus(bool keep_one) EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
+  void EraseAnySurplus(bool keep_one)
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
     // Record the most recent first kept timestamp on any stream.
     for (const auto& stream : input_stream_managers_) {
       int32 queue_size = (stream->QueueSize() >= trigger_queue_size_)
@@ -151,7 +152,7 @@ class FixedSizeInputStreamHandler : public DefaultInputStreamHandler {
   }
   void EraseSurplusPackets(bool keep_one)
-      EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(erase_mutex_) {
     return (fixed_min_size_) ? EraseAllSurplus() : EraseAnySurplus(keep_one);
   }
@@ -218,9 +219,9 @@ class FixedSizeInputStreamHandler : public DefaultInputStreamHandler {
   bool fixed_min_size_;
   // Indicates that GetNodeReadiness has returned kReadyForProcess once, and
   // the corresponding call to FillInputSet has not yet completed.
-  bool pending_ GUARDED_BY(erase_mutex_);
+  bool pending_ ABSL_GUARDED_BY(erase_mutex_);
   // The timestamp used to truncate all input streams.
-  Timestamp kept_timestamp_ GUARDED_BY(erase_mutex_);
+  Timestamp kept_timestamp_ ABSL_GUARDED_BY(erase_mutex_);
   absl::Mutex erase_mutex_;
 };
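To make the dropping policy above concrete, here is a self-contained sketch of the "erase all surplus" case operating on plain timestamp queues rather than InputStreamManager objects; it mirrors the behavior described in the comments (assuming 1 <= target_queue_size <= trigger_queue_size), not the actual implementation:

    #include <algorithm>
    #include <cstdint>
    #include <deque>
    #include <limits>
    #include <vector>

    // Each deque holds the (sorted) timestamps queued on one input stream.
    // Once every stream has at least trigger_queue_size packets, keep only the
    // packets at or after the oldest "min timestamp among the latest
    // target_queue_size packets" across all streams.
    void EraseAllSurplus(std::vector<std::deque<int64_t>>* queues,
                         int trigger_queue_size, int target_queue_size) {
      int64_t min_ts_all_streams = std::numeric_limits<int64_t>::max();
      for (const auto& q : *queues) {
        if (static_cast<int>(q.size()) < trigger_queue_size) return;  // Not all full.
        // Queues are sorted, so the min among the latest target_queue_size
        // entries is the entry that many positions from the back.
        min_ts_all_streams =
            std::min(min_ts_all_streams, q[q.size() - target_queue_size]);
      }
      for (auto& q : *queues) {
        while (!q.empty() && q.front() < min_ts_all_streams) q.pop_front();
      }
    }

Keeping the truncation timestamp common to all streams is what keeps the streams roughly aligned after packets are dropped.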

View File

@@ -35,13 +35,13 @@ const int64 kSlowCalculatorRate = 10;
 // Rate limiter for TestSlowCalculator.
 ABSL_CONST_INIT absl::Mutex g_source_mutex(absl::kConstInit);
-int64 g_source_counter GUARDED_BY(g_source_mutex);
+int64 g_source_counter ABSL_GUARDED_BY(g_source_mutex);
 // Rate limiter for TestSourceCalculator.
-int64 g_slow_counter GUARDED_BY(g_source_mutex);
+int64 g_slow_counter ABSL_GUARDED_BY(g_source_mutex);
 // Flag that indicates that the source is done.
-bool g_source_done GUARDED_BY(g_source_mutex);
+bool g_source_done ABSL_GUARDED_BY(g_source_mutex);
 class TestSourceCalculator : public CalculatorBase {
  public:
@@ -74,7 +74,7 @@ class TestSourceCalculator : public CalculatorBase {
   }
  private:
-  bool CanProceed() const EXCLUSIVE_LOCKS_REQUIRED(g_source_mutex) {
+  bool CanProceed() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(g_source_mutex) {
     return g_source_counter <= kSlowCalculatorRate * g_slow_counter ||
            g_source_counter <= 1;
   }
@@ -109,7 +109,7 @@ class TestSlowCalculator : public CalculatorBase {
   }
  private:
-  bool CanProceed() const EXCLUSIVE_LOCKS_REQUIRED(g_source_mutex) {
+  bool CanProceed() const ABSL_EXCLUSIVE_LOCKS_REQUIRED(g_source_mutex) {
     return g_source_counter > kSlowCalculatorRate * g_slow_counter ||
            g_source_done;
   }

View File

@@ -40,15 +40,15 @@ class InOrderOutputStreamHandler : public OutputStreamHandler {
                             options, calculator_run_in_parallel) {}
  private:
-  void PropagationLoop() EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_) final;
+  void PropagationLoop() ABSL_EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_) final;
   void PropagatePackets(CalculatorContext** calculator_context,
                         Timestamp* context_timestamp)
-      EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_);
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_);
   void PropagationBound(CalculatorContext** calculator_context,
                         Timestamp* context_timestamp)
-      EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_);
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(timestamp_mutex_);
 };
 }  // namespace mediapipe

View File

@@ -64,18 +64,18 @@ class SyncSetInputStreamHandler : public InputStreamHandler {
   // Populates timestamp bounds for streams outside the ready sync-set.
   void FillInputBounds(Timestamp input_timestamp,
                        InputStreamShardSet* input_set)
-      EXCLUSIVE_LOCKS_REQUIRED(mutex_);
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(mutex_);
  private:
   absl::Mutex mutex_;
   // The ids of each set of inputs.
-  std::vector<std::vector<CollectionItemId>> sync_sets_ GUARDED_BY(mutex_);
+  std::vector<std::vector<CollectionItemId>> sync_sets_ ABSL_GUARDED_BY(mutex_);
   // The index of the ready sync set. A value of -1 indicates that no
   // sync sets are ready.
-  int ready_sync_set_index_ GUARDED_BY(mutex_) = -1;
+  int ready_sync_set_index_ ABSL_GUARDED_BY(mutex_) = -1;
   // The timestamp at which the sync set is ready. If no sync set is
   // ready then this variable should be Timestamp::Done() .
-  Timestamp ready_timestamp_ GUARDED_BY(mutex_);
+  Timestamp ready_timestamp_ ABSL_GUARDED_BY(mutex_);
 };
 REGISTER_INPUT_STREAM_HANDLER(SyncSetInputStreamHandler);

View File

@@ -79,7 +79,7 @@ class TimestampAlignInputStreamHandler : public InputStreamHandler {
   CollectionItemId timestamp_base_stream_id_;
   absl::Mutex mutex_;
-  bool offsets_initialized_ GUARDED_BY(mutex_) = false;
+  bool offsets_initialized_ ABSL_GUARDED_BY(mutex_) = false;
   std::vector<TimestampDiff> timestamp_offsets_;
 };
 REGISTER_INPUT_STREAM_HANDLER(TimestampAlignInputStreamHandler);

View File

@@ -244,6 +244,7 @@ cc_library(
     hdrs = ["subgraph_expansion.h"],
     visibility = ["//visibility:public"],
     deps = [
+        ":name_util",
         ":tag_map",
         "//mediapipe/framework:calculator_cc_proto",
         "//mediapipe/framework:packet_generator",

View File

@@ -68,5 +68,30 @@ std::string GetUnusedSidePacketName(
   return candidate;
 }
+
+std::string CanonicalNodeName(const CalculatorGraphConfig& graph_config,
+                              int node_id) {
+  const auto& node_config = graph_config.node(node_id);
+  std::string node_name = node_config.name().empty() ? node_config.calculator()
+                                                     : node_config.name();
+  int count = 0;
+  int sequence = 0;
+  for (int i = 0; i < graph_config.node_size(); i++) {
+    const auto& current_node_config = graph_config.node(i);
+    std::string current_node_name = current_node_config.name().empty()
+                                        ? current_node_config.calculator()
+                                        : current_node_config.name();
+    if (node_name == current_node_name) {
+      ++count;
+      if (i < node_id) {
+        ++sequence;
+      }
+    }
+  }
+  if (count <= 1) {
+    return node_name;
+  }
+  return absl::StrCat(node_name, "_", sequence + 1);
+}
+
 }  // namespace tool
 }  // namespace mediapipe
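For illustration, a small hypothetical use of the new helper; ParseTextProtoOrDie comes from mediapipe/framework/port/parse_text_proto.h, and the expected names follow the numbering rule documented in name_util.h below:

    #include <iostream>
    #include <string>

    #include "mediapipe/framework/calculator.pb.h"
    #include "mediapipe/framework/port/parse_text_proto.h"
    #include "mediapipe/framework/tool/name_util.h"

    int main() {
      auto config =
          mediapipe::ParseTextProtoOrDie<mediapipe::CalculatorGraphConfig>(R"(
            node { calculator: "PassThroughCalculator" }
            node { calculator: "PassThroughCalculator" }
            node { calculator: "GateCalculator" name: "my_gate" }
          )");
      // Duplicate names get a 1-based suffix; unique names are returned as-is:
      // "PassThroughCalculator_1", "PassThroughCalculator_2", "my_gate".
      for (int node_id = 0; node_id < config.node_size(); ++node_id) {
        std::cout << mediapipe::tool::CanonicalNodeName(config, node_id) << "\n";
      }
      return 0;
    }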

View File

@@ -31,7 +31,53 @@ std::string GetUnusedSidePacketName(const CalculatorGraphConfig& /*config*/,
 std::string GetUnusedNodeName(const CalculatorGraphConfig& config,
                               const std::string& node_name_base);
+
+// Returns a short unique name for a Node in a CalculatorGraphConfig.
+// This is the Node.name (if specified) or the Node.calculator.
+// If there are multiple calculators with similar name in the graph, the name
+// will be postfixed by "_<COUNT>". For example, in the following graph the node
+// names will be as mentiond.
+//
+// node {  // Name will be "CalcA"
+//   calculator: "CalcA"
+// }
+// node {  // Name will be "NameB"
+//   calculator: "CalcB"
+//   name: "NameB"
+// }
+// node {  // Name will be "CalcC_1" due to duplicate "calculator" field.
+//   calculator: "CalcC"
+// }
+// node {  // Name will be "CalcC_2" due to duplicate "calculator" field.
+//   calculator: "CalcC"
+// }
+// node {  // Name will be "NameX".
+//   calculator: "CalcD"
+//   name: "NameX"
+// }
+// node {  // Name will be "NameY".
+//   calculator: "CalcD"
+//   name: "NameY"
+// }
+// node {  // Name will be "NameZ_1". due to "name" field duplicate.
+//   calculator: "CalcE"
+//   name: "NameZ"
+// }
+// node {  // Name will be "NameZ_2". due to "name" field duplicate.
+//   calculator: "CalcF"
+//   name: "NameZ"
+// }
+//
+// TODO: Update GraphNode.UniqueName in MediaPipe Visualizer to match
+// this logic.
+// TODO: Fix the edge case mentioned in the bug.
+std::string CanonicalNodeName(const CalculatorGraphConfig& graph_config,
+                              int node_id);
+
 }  // namespace tool
 }  // namespace mediapipe
+
+namespace mediapipe {
+using ::mediapipe::tool::CanonicalNodeName;
+}  // namespace mediapipe
+
 #endif  // MEDIAPIPE_FRAMEWORK_TOOL_NAME_UTIL_H_

View File

@@ -15,10 +15,16 @@
 #include "mediapipe/framework/tool/simulation_clock.h"
 #include "absl/synchronization/mutex.h"
+#include "absl/time/time.h"
 #include "mediapipe/framework/port/logging.h"
 namespace mediapipe {
+
+SimulationClock::~SimulationClock() {
+  ThreadStart();
+  ThreadFinish();
+}
+
 absl::Time SimulationClock::TimeNow() {
   absl::MutexLock l(&time_mutex_);
   return time_;

View File

@@ -39,7 +39,7 @@ namespace mediapipe {
 class SimulationClock : public mediapipe::Clock {
  public:
   SimulationClock() {}
-  ~SimulationClock() override {}
+  ~SimulationClock() override;
   // Returns the simulated time.
   absl::Time TimeNow() override;
@@ -59,9 +59,9 @@ class SimulationClock : public mediapipe::Clock {
  protected:
   // Queue up wake up waiter.
   void SleepInternal(absl::Time wakeup_time)
-      EXCLUSIVE_LOCKS_REQUIRED(time_mutex_);
+      ABSL_EXCLUSIVE_LOCKS_REQUIRED(time_mutex_);
   // Advances to the next wake up time if no related threads are running.
-  void TryAdvanceTime() EXCLUSIVE_LOCKS_REQUIRED(time_mutex_);
+  void TryAdvanceTime() ABSL_EXCLUSIVE_LOCKS_REQUIRED(time_mutex_);
   // Represents a thread blocked in SleepUntil.
   struct Waiter {
@@ -71,9 +71,9 @@ class SimulationClock : public mediapipe::Clock {
  protected:
   absl::Mutex time_mutex_;
-  absl::Time time_ GUARDED_BY(time_mutex_);
-  std::multimap<absl::Time, Waiter*> waiters_ GUARDED_BY(time_mutex_);
-  int num_running_ GUARDED_BY(time_mutex_) = 0;
+  absl::Time time_ ABSL_GUARDED_BY(time_mutex_);
+  std::multimap<absl::Time, Waiter*> waiters_ ABSL_GUARDED_BY(time_mutex_);
+  int num_running_ ABSL_GUARDED_BY(time_mutex_) = 0;
 };
 }  // namespace mediapipe

View File

@@ -242,5 +242,59 @@ TEST_F(SimulationClockTest, InFlight) {
               ElementsAre(10000, 20000, 40000, 60000, 70000, 100000));
 }
+
+// Shows successful destruction of CalculatorGraph, SimulationClockExecutor,
+// and SimulationClock. With tsan, this test reveals a race condition unless
+// the SimulationClock destructor calls ThreadFinish to waits for all threads.
+TEST_F(SimulationClockTest, DestroyClock) {
+  auto graph_config = ParseTextProtoOrDie<CalculatorGraphConfig>(R"(
+    node {
+      calculator: "LambdaCalculator"
+      input_side_packet: 'callback_0'
+      output_stream: "input_1"
+    }
+    node {
+      calculator: "LambdaCalculator"
+      input_side_packet: 'callback_1'
+      input_stream: "input_1"
+      output_stream: "output_1"
+    }
+  )");
+  int input_count = 0;
+  ProcessFunction wait_0 = [&](const InputStreamShardSet& inputs,
+                               OutputStreamShardSet* outputs) {
+    clock_->Sleep(absl::Microseconds(20000));
+    if (++input_count < 4) {
+      outputs->Index(0).AddPacket(
+          MakePacket<uint64>(input_count).At(Timestamp(input_count)));
+      return ::mediapipe::OkStatus();
+    } else {
+      return tool::StatusStop();
+    }
+  };
+  ProcessFunction wait_1 = [&](const InputStreamShardSet& inputs,
+                               OutputStreamShardSet* outputs) {
+    clock_->Sleep(absl::Microseconds(30000));
+    return PassThrough(inputs, outputs);
+  };
+  std::vector<Packet> out_packets;
+  ::mediapipe::Status status;
+  {
+    CalculatorGraph graph;
+    auto executor = std::make_shared<SimulationClockExecutor>(4);
+    clock_ = executor->GetClock().get();
+    MP_ASSERT_OK(graph.SetExecutor("", executor));
+    tool::AddVectorSink("output_1", &graph_config, &out_packets);
+    MP_ASSERT_OK(graph.Initialize(graph_config,
+                                  {
+                                      {"callback_0", Adopt(new auto(wait_0))},
+                                      {"callback_1", Adopt(new auto(wait_1))},
+                                  }));
+    MP_EXPECT_OK(graph.Run());
+  }
+  EXPECT_EQ(out_packets.size(), 3);
+}
+
 }  // namespace
 }  // namespace mediapipe

View File

@ -35,6 +35,7 @@
#include "mediapipe/framework/port/status_macros.h" #include "mediapipe/framework/port/status_macros.h"
#include "mediapipe/framework/status_handler.pb.h" #include "mediapipe/framework/status_handler.pb.h"
#include "mediapipe/framework/subgraph.h" #include "mediapipe/framework/subgraph.h"
#include "mediapipe/framework/tool/name_util.h"
#include "mediapipe/framework/tool/tag_map.h" #include "mediapipe/framework/tool/tag_map.h"
namespace mediapipe { namespace mediapipe {
@ -95,6 +96,13 @@ namespace tool {
config->mutable_output_side_packet()}) { config->mutable_output_side_packet()}) {
MP_RETURN_IF_ERROR(TransformStreamNames(streams, transform)); MP_RETURN_IF_ERROR(TransformStreamNames(streams, transform));
} }
std::vector<std::string> node_names(config->node_size());
for (int node_id = 0; node_id < config->node_size(); ++node_id) {
node_names[node_id] = CanonicalNodeName(*config, node_id);
}
for (int node_id = 0; node_id < config->node_size(); ++node_id) {
config->mutable_node(node_id)->set_name(transform(node_names[node_id]));
}
for (auto& node : *config->mutable_node()) { for (auto& node : *config->mutable_node()) {
for (auto* streams : for (auto* streams :
{node.mutable_input_stream(), node.mutable_output_stream(), {node.mutable_input_stream(), node.mutable_output_stream(),
@ -102,9 +110,6 @@ namespace tool {
node.mutable_output_side_packet()}) { node.mutable_output_side_packet()}) {
MP_RETURN_IF_ERROR(TransformStreamNames(streams, transform)); MP_RETURN_IF_ERROR(TransformStreamNames(streams, transform));
} }
if (!node.name().empty()) {
node.set_name(transform(node.name()));
}
} }
for (auto& generator : *config->mutable_packet_generator()) { for (auto& generator : *config->mutable_packet_generator()) {
for (auto* streams : {generator.mutable_input_side_packet(), for (auto* streams : {generator.mutable_input_side_packet(),
@ -120,21 +125,18 @@ namespace tool {
} }
// Adds a prefix to the name of each stream, side packet and node in the // Adds a prefix to the name of each stream, side packet and node in the
// config. Each call to this method should use a different subgraph_index // config. Each call to this method should use a different prefix. For example:
// to produce a different numerical prefix. For example: // 1, { foo, bar } --PrefixNames-> { qsg__foo, qsg__bar }
// 1, { foo, bar } --PrefixNames-> { __sg_1_foo, __sg_1_bar } // 2, { foo, bar } --PrefixNames-> { rsg__foo, rsg__bar }
// 2, { foo, bar } --PrefixNames-> { __sg_2_foo, __sg_2_bar }
// This means that two copies of the same subgraph will not interfere with // This means that two copies of the same subgraph will not interfere with
// each other. // each other.
static ::mediapipe::Status PrefixNames(int subgraph_index, static ::mediapipe::Status PrefixNames(std::string prefix,
CalculatorGraphConfig* config) { CalculatorGraphConfig* config) {
// TODO: prefix with subgraph name instead (see cl/157677233 std::transform(prefix.begin(), prefix.end(), prefix.begin(), ::tolower);
// discussion). std::replace(prefix.begin(), prefix.end(), '.', '_');
// TODO: since we expand nested subgraphs outside-in, we should std::replace(prefix.begin(), prefix.end(), ' ', '_');
// append the prefix to the existing prefix, if any. This is unimportant std::replace(prefix.begin(), prefix.end(), ':', '_');
// with the meaningless prefix we use now, but it should be considered absl::StrAppend(&prefix, "__");
// when prefixing with names.
std::string prefix = absl::StrCat("__sg", subgraph_index, "_");
auto add_prefix = [&prefix](absl::string_view s) { auto add_prefix = [&prefix](absl::string_view s) {
return absl::StrCat(prefix, s); return absl::StrCat(prefix, s);
}; };
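For readers following the new naming scheme, here is a minimal standalone sketch of the sanitization PrefixNames now performs (illustration only; MakeSubgraphPrefix is a hypothetical helper, not part of MediaPipe):

#include <algorithm>
#include <cctype>
#include <string>

// Lower-case the node name, replace '.', ' ' and ':' with '_', and append
// "__", mirroring the string handling in the new PrefixNames above.
std::string MakeSubgraphPrefix(std::string name) {
  std::transform(name.begin(), name.end(), name.begin(),
                 [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
  std::replace(name.begin(), name.end(), '.', '_');
  std::replace(name.begin(), name.end(), ' ', '_');
  std::replace(name.begin(), name.end(), ':', '_');
  return name + "__";
}

// MakeSubgraphPrefix("TestSubgraph") yields "testsubgraph__", so a stream named
// "stream_a" inside that subgraph becomes "testsubgraph__stream_a", which is
// what the updated test expectations below check.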
@ -271,7 +273,6 @@ static ::mediapipe::Status PrefixNames(int subgraph_index,
graph_registry ? graph_registry : &GraphRegistry::global_graph_registry; graph_registry ? graph_registry : &GraphRegistry::global_graph_registry;
RET_CHECK(config); RET_CHECK(config);
auto* nodes = config->mutable_node(); auto* nodes = config->mutable_node();
int subgraph_counter = 0;
while (1) { while (1) {
auto subgraph_nodes_start = std::stable_partition( auto subgraph_nodes_start = std::stable_partition(
nodes->begin(), nodes->end(), nodes->begin(), nodes->end(),
@ -283,11 +284,13 @@ static ::mediapipe::Status PrefixNames(int subgraph_index,
std::vector<CalculatorGraphConfig> subgraphs; std::vector<CalculatorGraphConfig> subgraphs;
for (auto it = subgraph_nodes_start; it != nodes->end(); ++it) { for (auto it = subgraph_nodes_start; it != nodes->end(); ++it) {
const auto& node = *it; const auto& node = *it;
int node_id = it - nodes->begin();
std::string node_name = CanonicalNodeName(*config, node_id);
MP_RETURN_IF_ERROR(ValidateSubgraphFields(node)); MP_RETURN_IF_ERROR(ValidateSubgraphFields(node));
ASSIGN_OR_RETURN(auto subgraph, ASSIGN_OR_RETURN(auto subgraph,
graph_registry->CreateByName(config->package(), graph_registry->CreateByName(config->package(),
node.calculator(), &node)); node.calculator(), &node));
MP_RETURN_IF_ERROR(PrefixNames(subgraph_counter++, &subgraph)); MP_RETURN_IF_ERROR(PrefixNames(node_name, &subgraph));
MP_RETURN_IF_ERROR(ConnectSubgraphStreams(node, &subgraph)); MP_RETURN_IF_ERROR(ConnectSubgraphStreams(node, &subgraph));
subgraphs.push_back(subgraph); subgraphs.push_back(subgraph);
} }


@ -250,6 +250,7 @@ TEST(SubgraphExpansionTest, TransformNames) {
output_stream: "__sg0_output_1" output_stream: "__sg0_output_1"
} }
node { node {
name: "__sg0_SomeRegularCalculator"
calculator: "SomeRegularCalculator" calculator: "SomeRegularCalculator"
input_stream: "__sg0_output_1" input_stream: "__sg0_output_1"
output_stream: "__sg0_output_2" output_stream: "__sg0_output_2"
@ -438,20 +439,20 @@ TEST(SubgraphExpansionTest, ExpandSubgraphs) {
output_stream: "foo" output_stream: "foo"
} }
node { node {
name: "__sg0_regular_node" name: "testsubgraph__regular_node"
calculator: "SomeRegularCalculator" calculator: "SomeRegularCalculator"
input_stream: "foo" input_stream: "foo"
output_stream: "__sg0_stream_a" output_stream: "testsubgraph__stream_a"
input_side_packet: "__sg0_side" input_side_packet: "testsubgraph__side"
} }
node { node {
name: "__sg0_simple_sink" name: "testsubgraph__simple_sink"
calculator: "SomeSinkCalculator" calculator: "SomeSinkCalculator"
input_stream: "__sg0_stream_a" input_stream: "testsubgraph__stream_a"
} }
packet_generator { packet_generator {
packet_generator: "SomePacketGenerator" packet_generator: "SomePacketGenerator"
output_side_packet: "__sg0_side" output_side_packet: "testsubgraph__side"
} }
)"); )");
MP_EXPECT_OK(tool::ExpandSubgraphs(&supergraph)); MP_EXPECT_OK(tool::ExpandSubgraphs(&supergraph));
@ -503,23 +504,24 @@ TEST(SubgraphExpansionTest, ExecutorFieldOfNodeInSubgraphPreserved) {
output_stream: "OUT:output" output_stream: "OUT:output"
} }
)"); )");
CalculatorGraphConfig expected_graph = CalculatorGraphConfig expected_graph = ::mediapipe::ParseTextProtoOrDie<
::mediapipe::ParseTextProtoOrDie<CalculatorGraphConfig>(R"( CalculatorGraphConfig>(R"(
input_stream: "input" input_stream: "input"
executor { executor {
name: "custom_thread_pool" name: "custom_thread_pool"
type: "ThreadPoolExecutor" type: "ThreadPoolExecutor"
options { options {
[mediapipe.ThreadPoolExecutorOptions.ext] { num_threads: 4 } [mediapipe.ThreadPoolExecutorOptions.ext] { num_threads: 4 }
} }
} }
node { node {
calculator: "PassThroughCalculator" calculator: "PassThroughCalculator"
input_stream: "input" name: "enclosingsubgraph__nodewithexecutorsubgraph__PassThroughCalculator"
output_stream: "output" input_stream: "input"
executor: "custom_thread_pool" output_stream: "output"
} executor: "custom_thread_pool"
)"); }
)");
MP_EXPECT_OK(tool::ExpandSubgraphs(&supergraph)); MP_EXPECT_OK(tool::ExpandSubgraphs(&supergraph));
EXPECT_THAT(supergraph, mediapipe::EqualsProto(expected_graph)); EXPECT_THAT(supergraph, mediapipe::EqualsProto(expected_graph));
} }
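The doubled prefix in the expected node name above follows from the outside-in expansion order noted in the removed TODO: the outer subgraph is expanded first, so the inner subgraph node is already prefixed by the time it is expanded itself. A sketch using the hypothetical MakeSubgraphPrefix helper from earlier (illustration only, decomposing the expected string):

// Outer expansion: the node for the inner subgraph is renamed to
//   "enclosingsubgraph__NodeWithExecutorSubgraph".
// Inner expansion: MakeSubgraphPrefix("enclosingsubgraph__NodeWithExecutorSubgraph")
//   yields "enclosingsubgraph__nodewithexecutorsubgraph__", which, prepended to
//   "PassThroughCalculator", gives the expected
//   "enclosingsubgraph__nodewithexecutorsubgraph__PassThroughCalculator".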


@ -39,6 +39,7 @@
#include "mediapipe/framework/status_handler.h" #include "mediapipe/framework/status_handler.h"
#include "mediapipe/framework/stream_handler.pb.h" #include "mediapipe/framework/stream_handler.pb.h"
#include "mediapipe/framework/thread_pool_executor.pb.h" #include "mediapipe/framework/thread_pool_executor.pb.h"
#include "mediapipe/framework/tool/name_util.h"
#include "mediapipe/framework/tool/status_util.h" #include "mediapipe/framework/tool/status_util.h"
#include "mediapipe/framework/tool/subgraph_expansion.h" #include "mediapipe/framework/tool/subgraph_expansion.h"
#include "mediapipe/framework/tool/validate.h" #include "mediapipe/framework/tool/validate.h"
@ -169,31 +170,6 @@ std::string DebugName(const CalculatorGraphConfig& config,
} // namespace } // namespace
std::string CanonicalNodeName(const CalculatorGraphConfig& graph_config,
int node_id) {
const auto& node_config = graph_config.node(node_id);
std::string node_name = node_config.name().empty() ? node_config.calculator()
: node_config.name();
int count = 0;
int sequence = 0;
for (int i = 0; i < graph_config.node_size(); i++) {
const auto& current_node_config = graph_config.node(i);
std::string current_node_name = current_node_config.name().empty()
? current_node_config.calculator()
: current_node_config.name();
if (node_name == current_node_name) {
++count;
if (i < node_id) {
++sequence;
}
}
}
if (count <= 1) {
return node_name;
}
return absl::StrCat(node_name, "_", sequence + 1);
}
// static // static
std::string NodeTypeInfo::NodeTypeToString(NodeType node_type) { std::string NodeTypeInfo::NodeTypeToString(NodeType node_type) {
switch (node_type) { switch (node_type) {


@ -33,48 +33,6 @@ namespace mediapipe {
class ValidatedGraphConfig; class ValidatedGraphConfig;
// Returns a short unique name for a Node in a CalculatorGraphConfig.
// This is the Node.name (if specified) or the Node.calculator.
// If there are multiple calculators with similar name in the graph, the name
// will be postfixed by "_<COUNT>". For example, in the following graph the node
// names will be as mentioned.
//
// node { // Name will be "CalcA"
// calculator: "CalcA"
// }
// node { // Name will be "NameB"
// calculator: "CalcB"
// name: "NameB"
// }
// node { // Name will be "CalcC_1" due to duplicate "calculator" field.
// calculator: "CalcC"
// }
// node { // Name will be "CalcC_2" due to duplicate "calculator" field.
// calculator: "CalcC"
// }
// node { // Name will be "NameX".
// calculator: "CalcD"
// name: "NameX"
// }
// node { // Name will be "NameY".
// calculator: "CalcD"
// name: "NameY"
// }
// node { // Name will be "NameZ_1" due to "name" field duplicate.
// calculator: "CalcE"
// name: "NameZ"
// }
// node { // Name will be "NameZ_2" due to "name" field duplicate.
// calculator: "CalcF"
// name: "NameZ"
// }
//
// TODO: Update GraphNode.UniqueName in MediaPipe Visualizer to match
// this logic.
// TODO: Fix the edge case mentioned in the bug.
std::string CanonicalNodeName(const CalculatorGraphConfig& graph_config,
int node_id);
// Type information for a graph node (Calculator, Generator, etc). // Type information for a graph node (Calculator, Generator, etc).
class NodeTypeInfo { class NodeTypeInfo {
public: public:


@ -920,6 +920,7 @@ objc_library(
ios_unit_test( ios_unit_test(
name = "gl_ios_test", name = "gl_ios_test",
minimum_os_version = MIN_IOS_VERSION, minimum_os_version = MIN_IOS_VERSION,
runner = "//googlemac/iPhone/Shared/Testing/EarlGrey/Runner:IOS_LATEST",
tags = [ tags = [
"ios", "ios",
], ],


@ -29,9 +29,9 @@ struct EglSurfaceHolder {
// GlCalculatorHelper::RunInGlContext while holding this mutex, but instead // GlCalculatorHelper::RunInGlContext while holding this mutex, but instead
// grab this inside the callable passed to them. // grab this inside the callable passed to them.
absl::Mutex mutex; absl::Mutex mutex;
EGLSurface surface GUARDED_BY(mutex) = EGL_NO_SURFACE; EGLSurface surface ABSL_GUARDED_BY(mutex) = EGL_NO_SURFACE;
// True if MediaPipe created the surface and is responsible for destroying it. // True if MediaPipe created the surface and is responsible for destroying it.
bool owned GUARDED_BY(mutex) = false; bool owned ABSL_GUARDED_BY(mutex) = false;
// Vertical flip of the surface, useful for conversion between coordinate // Vertical flip of the surface, useful for conversion between coordinate
// systems with top-left v.s. bottom-left origins. // systems with top-left v.s. bottom-left origins.
bool flip_y = false; bool flip_y = false;


@ -346,7 +346,7 @@ std::weak_ptr<GlContext>& GlContext::CurrentContext() {
::mediapipe::Status GlContext::SwitchContext(ContextBinding* saved_context, ::mediapipe::Status GlContext::SwitchContext(ContextBinding* saved_context,
const ContextBinding& new_context) const ContextBinding& new_context)
NO_THREAD_SAFETY_ANALYSIS { ABSL_NO_THREAD_SAFETY_ANALYSIS {
std::shared_ptr<GlContext> old_context_obj = CurrentContext().lock(); std::shared_ptr<GlContext> old_context_obj = CurrentContext().lock();
std::shared_ptr<GlContext> new_context_obj = std::shared_ptr<GlContext> new_context_obj =
new_context.context_object.lock(); new_context.context_object.lock();


@ -379,7 +379,7 @@ class GlContext : public std::enable_shared_from_this<GlContext> {
// This mutex is used to guard a few different members and condition // This mutex is used to guard a few different members and condition
// variables. It should only be held for a short time. // variables. It should only be held for a short time.
absl::Mutex mutex_; absl::Mutex mutex_;
absl::CondVar wait_for_gl_finish_cv_ GUARDED_BY(mutex_); absl::CondVar wait_for_gl_finish_cv_ ABSL_GUARDED_BY(mutex_);
std::unique_ptr<mediapipe::GlProfilingHelper> profiling_helper_ = nullptr; std::unique_ptr<mediapipe::GlProfilingHelper> profiling_helper_ = nullptr;
}; };


@ -52,11 +52,11 @@ class GlContext::DedicatedThread {
absl::Mutex mutex_; absl::Mutex mutex_;
// Used to wait for a job's completion. // Used to wait for a job's completion.
absl::CondVar gl_job_done_cv_ GUARDED_BY(mutex_); absl::CondVar gl_job_done_cv_ ABSL_GUARDED_BY(mutex_);
pthread_t gl_thread_id_; pthread_t gl_thread_id_;
std::deque<Job> jobs_ GUARDED_BY(mutex_); std::deque<Job> jobs_ ABSL_GUARDED_BY(mutex_);
absl::CondVar has_jobs_cv_ GUARDED_BY(mutex_); absl::CondVar has_jobs_cv_ ABSL_GUARDED_BY(mutex_);
bool self_destruct_ = false; bool self_destruct_ = false;
}; };


@ -60,7 +60,7 @@ class GlTextureBufferPool
// If the total number of buffers is greater than keep_count, destroys any // If the total number of buffers is greater than keep_count, destroys any
// surplus buffers that are no longer in use. // surplus buffers that are no longer in use.
void TrimAvailable() EXCLUSIVE_LOCKS_REQUIRED(mutex_); void TrimAvailable() ABSL_EXCLUSIVE_LOCKS_REQUIRED(mutex_);
const int width_; const int width_;
const int height_; const int height_;
@ -68,8 +68,9 @@ class GlTextureBufferPool
const int keep_count_; const int keep_count_;
absl::Mutex mutex_; absl::Mutex mutex_;
int in_use_count_ GUARDED_BY(mutex_) = 0; int in_use_count_ ABSL_GUARDED_BY(mutex_) = 0;
std::vector<std::unique_ptr<GlTextureBuffer>> available_ GUARDED_BY(mutex_); std::vector<std::unique_ptr<GlTextureBuffer>> available_
ABSL_GUARDED_BY(mutex_);
}; };
} // namespace mediapipe } // namespace mediapipe


@ -61,7 +61,7 @@ class GlThreadCollector {
} }
absl::Mutex mutex_; absl::Mutex mutex_;
int active_threads_ GUARDED_BY(mutex_) = 0; int active_threads_ ABSL_GUARDED_BY(mutex_) = 0;
friend NoDestructor<GlThreadCollector>; friend NoDestructor<GlThreadCollector>;
}; };
#else #else


@ -107,7 +107,7 @@ class GpuBufferMultiPool {
absl::Mutex mutex_; absl::Mutex mutex_;
std::unordered_map<BufferSpec, SimplePool, BufferSpecHash> pools_ std::unordered_map<BufferSpec, SimplePool, BufferSpecHash> pools_
GUARDED_BY(mutex_); ABSL_GUARDED_BY(mutex_);
// A queue of BufferSpecs to keep track of the age of each BufferSpec added to // A queue of BufferSpecs to keep track of the age of each BufferSpec added to
// the pool. // the pool.
std::queue<BufferSpec> buffer_specs_; std::queue<BufferSpec> buffer_specs_;


@ -51,7 +51,7 @@ node {
} }
} }
# Runs a TensorFlow Lite model on GPU that takes an image tensor and outputs a # Runs a TensorFlow Lite model on CPU that takes an image tensor and outputs a
# vector of tensors representing, for instance, detection boxes/keypoints and # vector of tensors representing, for instance, detection boxes/keypoints and
# scores. # scores.
node { node {


@ -15,10 +15,11 @@
package com.google.mediapipe.components; package com.google.mediapipe.components;
import android.media.AudioFormat; import android.media.AudioFormat;
import java.nio.ByteBuffer;
/** Lightweight abstraction for an object that can receive audio data. */ /** Lightweight abstraction for an object that can receive audio data. */
public interface AudioDataConsumer { public interface AudioDataConsumer {
/** Called when a new audio data buffer is available. */ /** Called when a new audio data buffer is available. */
public abstract void onNewAudioData( public abstract void onNewAudioData(
byte[] audioData, long timestampMicros, AudioFormat audioFormat); ByteBuffer audioData, long timestampMicros, AudioFormat audioFormat);
} }


@ -23,6 +23,7 @@ import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager; import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata; import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.params.StreamConfigurationMap; import android.hardware.camera2.params.StreamConfigurationMap;
import android.os.SystemClock;
import android.util.Log; import android.util.Log;
import android.util.Size; import android.util.Size;
import androidx.camera.core.CameraX; import androidx.camera.core.CameraX;
@ -31,6 +32,7 @@ import androidx.camera.core.Preview;
import androidx.camera.core.PreviewConfig; import androidx.camera.core.PreviewConfig;
import java.util.Arrays; import java.util.Arrays;
import java.util.List; import java.util.List;
import javax.annotation.Nullable;
/** /**
* Uses CameraX APIs for camera setup and access. * Uses CameraX APIs for camera setup and access.
@ -43,6 +45,9 @@ public class CameraXPreviewHelper extends CameraHelper {
// Target frame and view resolution size in landscape. // Target frame and view resolution size in landscape.
private static final Size TARGET_SIZE = new Size(1280, 720); private static final Size TARGET_SIZE = new Size(1280, 720);
// Number of attempts for calculating the offset between the camera's clock and MONOTONIC clock.
private static final int CLOCK_OFFSET_CALIBRATION_ATTEMPTS = 3;
private Preview preview; private Preview preview;
// Size of the camera-preview frames from the camera. // Size of the camera-preview frames from the camera.
@ -50,9 +55,16 @@ public class CameraXPreviewHelper extends CameraHelper {
// Rotation of the camera-preview frames in degrees. // Rotation of the camera-preview frames in degrees.
private int frameRotation; private int frameRotation;
// Focal length resolved in pixels on the frame texture. @Nullable private CameraCharacteristics cameraCharacteristics = null;
private float focalLengthPixels;
private CameraCharacteristics cameraCharacteristics = null; // Focal length resolved in pixels on the frame texture. If it cannot be determined, this value
// is Float.MIN_VALUE.
private float focalLengthPixels = Float.MIN_VALUE;
// Timestamp source of the camera. This is retrieved from
// CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE. When CameraCharacteristics is not available,
// the source is CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE_UNKNOWN.
private int cameraTimestampSource = CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE_UNKNOWN;
@Override @Override
@SuppressWarnings("RestrictTo") // See b/132705545. @SuppressWarnings("RestrictTo") // See b/132705545.
@ -78,11 +90,21 @@ public class CameraXPreviewHelper extends CameraHelper {
return; return;
} }
} }
Integer selectedLensFacing = Integer selectedLensFacing =
cameraFacing == CameraHelper.CameraFacing.FRONT cameraFacing == CameraHelper.CameraFacing.FRONT
? CameraMetadata.LENS_FACING_FRONT ? CameraMetadata.LENS_FACING_FRONT
: CameraMetadata.LENS_FACING_BACK; : CameraMetadata.LENS_FACING_BACK;
calculateFocalLength(context, selectedLensFacing); cameraCharacteristics = getCameraCharacteristics(context, selectedLensFacing);
if (cameraCharacteristics != null) {
// Queries camera timestamp source. It should be one of REALTIME or UNKNOWN as
// documented in
// https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics.html#SENSOR_INFO_TIMESTAMP_SOURCE.
cameraTimestampSource =
cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE);
focalLengthPixels = calculateFocalLengthInPixels();
}
if (onCameraStartedListener != null) { if (onCameraStartedListener != null) {
onCameraStartedListener.onCameraStarted(previewOutput.getSurfaceTexture()); onCameraStartedListener.onCameraStarted(previewOutput.getSurfaceTexture());
} }
@ -108,6 +130,7 @@ public class CameraXPreviewHelper extends CameraHelper {
return optimalSize != null ? optimalSize : frameSize; return optimalSize != null ? optimalSize : frameSize;
} }
@Nullable
private Size getOptimalViewSize(Size targetSize) { private Size getOptimalViewSize(Size targetSize) {
if (cameraCharacteristics != null) { if (cameraCharacteristics != null) {
StreamConfigurationMap map = StreamConfigurationMap map =
@ -142,11 +165,70 @@ public class CameraXPreviewHelper extends CameraHelper {
return null; return null;
} }
// Computes the difference between the camera's clock and MONOTONIC clock using camera's
// timestamp source information. This function assumes by default that the camera timestamp
// source is aligned to CLOCK_MONOTONIC. This is useful when the camera is being used
// synchronously with other sensors that yield timestamps in the MONOTONIC timebase, such as
// AudioRecord for audio data. The offset is returned in nanoseconds.
public long getTimeOffsetToMonoClockNanos() {
if (cameraTimestampSource == CameraMetadata.SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
// This clock shares the same timebase as SystemClock.elapsedRealtimeNanos(), see
// https://developer.android.com/reference/android/hardware/camera2/CameraMetadata.html#SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME.
return getOffsetFromRealtimeTimestampSource();
} else {
return getOffsetFromUnknownTimestampSource();
}
}
private static long getOffsetFromUnknownTimestampSource() {
// Implementation-wise, this timestamp source has the same timebase as CLOCK_MONOTONIC, see
// https://stackoverflow.com/questions/38585761/what-is-the-timebase-of-the-timestamp-of-cameradevice.
return 0L;
}
private static long getOffsetFromRealtimeTimestampSource() {
// Measure the offset of the REALTIME clock w.r.t. the MONOTONIC clock. Do
// CLOCK_OFFSET_CALIBRATION_ATTEMPTS measurements and choose the offset computed with the
// smallest delay between measurements. When the camera returns a timestamp ts, the
// timestamp in MONOTONIC timebase will now be (ts + cameraTimeOffsetToMonoClock).
long offset = Long.MAX_VALUE;
long lowestGap = Long.MAX_VALUE;
for (int i = 0; i < CLOCK_OFFSET_CALIBRATION_ATTEMPTS; ++i) {
long startMonoTs = System.nanoTime();
long realTs = SystemClock.elapsedRealtimeNanos();
long endMonoTs = System.nanoTime();
long gapMonoTs = endMonoTs - startMonoTs;
if (gapMonoTs < lowestGap) {
lowestGap = gapMonoTs;
offset = (startMonoTs + endMonoTs) / 2 - realTs;
}
}
return offset;
}
public float getFocalLengthPixels() { public float getFocalLengthPixels() {
return focalLengthPixels; return focalLengthPixels;
} }
private void calculateFocalLength(Activity context, Integer lensFacing) { // Computes the focal length of the camera in pixels based on lens and sensor properties.
private float calculateFocalLengthInPixels() {
// Focal length of the camera in millimeters.
// Note that CameraCharacteristics returns a list of focal lengths and there could be more
// than one focal length available if optical zoom is enabled or there are multiple physical
// cameras in the logical camera referenced here. A theoretically correct way of doing this would
// be to use the focal length set explicitly via Camera2 API, as documented in
// https://developer.android.com/reference/android/hardware/camera2/CaptureRequest#LENS_FOCAL_LENGTH.
float focalLengthMm =
cameraCharacteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0];
// Sensor Width of the camera in millimeters.
float sensorWidthMm =
cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
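// Worked example with hypothetical values: a 4.25 mm focal length and a 5.64 mm
// wide sensor producing 1280 px wide frames give 1280 * 4.25 / 5.64 ~= 964.5
// focal-length pixels. (Illustration only; actual values come from the camera.)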
return frameSize.getWidth() * focalLengthMm / sensorWidthMm;
}
@Nullable
private static CameraCharacteristics getCameraCharacteristics(
Activity context, Integer lensFacing) {
CameraManager cameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE); CameraManager cameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try { try {
List<String> cameraList = Arrays.asList(cameraManager.getCameraIdList()); List<String> cameraList = Arrays.asList(cameraManager.getCameraIdList());
@ -159,24 +241,12 @@ public class CameraXPreviewHelper extends CameraHelper {
continue; continue;
} }
if (availableLensFacing.equals(lensFacing)) { if (availableLensFacing.equals(lensFacing)) {
cameraCharacteristics = availableCameraCharacteristics; return availableCameraCharacteristics;
break;
} }
} }
// Focal length of the camera in millimeters.
// Note that CameraCharacteristics returns a list of focal lengths and there could be more
// than one focal length available if optical zoom is enabled or there are multiple physical
// cameras in the logical camera referenced here. A theoretically correct way of doing this would
// be to use the focal length set explicitly via Camera2 API, as documented in
// https://developer.android.com/reference/android/hardware/camera2/CaptureRequest#LENS_FOCAL_LENGTH.
float focalLengthMm =
cameraCharacteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0];
// Sensor Width of the camera in millimeters.
float sensorWidthMm =
cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
focalLengthPixels = frameSize.getWidth() * focalLengthMm / sensorWidthMm;
} catch (CameraAccessException e) { } catch (CameraAccessException e) {
Log.e(TAG, "Accessing camera ID info got error: " + e); Log.e(TAG, "Accessing camera ID info got error: " + e);
} }
return null;
} }
} }
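The offset-calibration loop added above is a generic clock-bracketing trick: read clock A, read clock B, read clock A again, and trust the attempt where the two A readings are closest together. A minimal standalone C++ sketch for Linux/Android, using CLOCK_MONOTONIC and CLOCK_BOOTTIME as stand-ins for System.nanoTime() and SystemClock.elapsedRealtimeNanos() (illustration only, not MediaPipe code):

#include <stdint.h>
#include <time.h>

// Reads the given POSIX clock in nanoseconds.
static int64_t NowNanos(clockid_t clock_id) {
  struct timespec ts;
  clock_gettime(clock_id, &ts);
  return static_cast<int64_t>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}

// Estimates (CLOCK_MONOTONIC - CLOCK_BOOTTIME) by bracketing one clock read
// between two reads of the other and keeping the attempt with the smallest gap.
int64_t EstimateClockOffsetNanos(int attempts) {
  int64_t best_offset = 0;
  int64_t lowest_gap = INT64_MAX;
  for (int i = 0; i < attempts; ++i) {
    const int64_t start_mono = NowNanos(CLOCK_MONOTONIC);
    const int64_t boot = NowNanos(CLOCK_BOOTTIME);
    const int64_t end_mono = NowNanos(CLOCK_MONOTONIC);
    const int64_t gap = end_mono - start_mono;
    if (gap < lowest_gap) {
      lowest_gap = gap;
      best_offset = (start_mono + end_mono) / 2 - boot;
    }
  }
  return best_offset;
}

Presumably the value returned by getTimeOffsetToMonoClockNanos() is meant to be fed into the new ExternalTextureConverter#setTimestampOffsetNanos(...) added later in this commit, so that camera frame timestamps line up with MONOTONIC-based audio timestamps.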


@ -74,6 +74,15 @@ public class ExternalTextureConverter implements TextureFrameProducer {
thread.setFlipY(flip); thread.setFlipY(flip);
} }
/**
* Sets an offset that can be used to adjust the timestamps on the camera frames, for example to
* conform to a preferred time-base or to account for a known device latency. The offset is added
* to each frame timestamp read by the ExternalTextureConverter.
*/
public void setTimestampOffsetNanos(long offsetInNanos) {
thread.setTimestampOffsetNanos(offsetInNanos);
}
public ExternalTextureConverter(EGLContext parentContext) { public ExternalTextureConverter(EGLContext parentContext) {
this(parentContext, DEFAULT_NUM_BUFFERS); this(parentContext, DEFAULT_NUM_BUFFERS);
} }
@ -148,7 +157,8 @@ public class ExternalTextureConverter implements TextureFrameProducer {
private List<AppTextureFrame> outputFrames = null; private List<AppTextureFrame> outputFrames = null;
private int outputFrameIndex = -1; private int outputFrameIndex = -1;
private ExternalTextureRenderer renderer = null; private ExternalTextureRenderer renderer = null;
private long timestampOffset = 0; private long nextFrameTimestampOffset = 0;
private long timestampOffsetNanos = 0;
private long previousTimestamp = 0; private long previousTimestamp = 0;
private boolean previousTimestampValid = false; private boolean previousTimestampValid = false;
@ -229,6 +239,10 @@ public class ExternalTextureConverter implements TextureFrameProducer {
super.releaseGl(); // This releases the EGL context, so must do it after any GL calls. super.releaseGl(); // This releases the EGL context, so must do it after any GL calls.
} }
public void setTimestampOffsetNanos(long offsetInNanos) {
timestampOffsetNanos = offsetInNanos;
}
protected void renderNext(SurfaceTexture fromTexture) { protected void renderNext(SurfaceTexture fromTexture) {
if (fromTexture != surfaceTexture) { if (fromTexture != surfaceTexture) {
// Although the setSurfaceTexture and renderNext methods are correctly sequentialized on // Although the setSurfaceTexture and renderNext methods are correctly sequentialized on
@ -333,13 +347,15 @@ public class ExternalTextureConverter implements TextureFrameProducer {
renderer.render(surfaceTexture); renderer.render(surfaceTexture);
// Populate frame timestamp with surface texture timestamp after render() as renderer // Populate frame timestamp with surface texture timestamp after render() as renderer
// ensures that surface texture has the up-to-date timestamp. (Also adjust |timestampOffset| // ensures that surface texture has the up-to-date timestamp. (Also adjust
// to ensure that timestamps increase monotonically.) // |nextFrameTimestampOffset| to ensure that timestamps increase monotonically.)
long textureTimestamp = surfaceTexture.getTimestamp() / NANOS_PER_MICRO; long textureTimestamp =
if (previousTimestampValid && textureTimestamp + timestampOffset <= previousTimestamp) { (surfaceTexture.getTimestamp() + timestampOffsetNanos) / NANOS_PER_MICRO;
timestampOffset = previousTimestamp + 1 - textureTimestamp; if (previousTimestampValid
&& textureTimestamp + nextFrameTimestampOffset <= previousTimestamp) {
nextFrameTimestampOffset = previousTimestamp + 1 - textureTimestamp;
} }
outputFrame.setTimestamp(textureTimestamp + timestampOffset); outputFrame.setTimestamp(textureTimestamp + nextFrameTimestampOffset);
previousTimestamp = outputFrame.getTimestamp(); previousTimestamp = outputFrame.getTimestamp();
previousTimestampValid = true; previousTimestampValid = true;
} }
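To make the monotonicity adjustment above concrete, here is a worked trace with hypothetical values (all in microseconds):

previousTimestamp = 1000, nextFrameTimestampOffset = 5
textureTimestamp  =  985  ->  985 + 5 = 990, not greater than 1000
nextFrameTimestampOffset  ->  1000 + 1 - 985 = 16
emitted timestamp =  985 + 16 = 1001, and later frames keep the larger offset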


@ -21,6 +21,8 @@ import android.media.MediaRecorder.AudioSource;
import android.os.Build.VERSION; import android.os.Build.VERSION;
import android.os.Build.VERSION_CODES; import android.os.Build.VERSION_CODES;
import android.util.Log; import android.util.Log;
import com.google.common.base.Preconditions;
import java.nio.ByteBuffer;
/** Provides access to audio data from a microphone. */ /** Provides access to audio data from a microphone. */
public class MicrophoneHelper implements AudioDataProducer { public class MicrophoneHelper implements AudioDataProducer {
@ -42,10 +44,10 @@ public class MicrophoneHelper implements AudioDataProducer {
// constant favor faster blocking calls to AudioRecord.read(...). // constant favor faster blocking calls to AudioRecord.read(...).
private static final int MAX_READ_INTERVAL_SEC = 1; private static final int MAX_READ_INTERVAL_SEC = 1;
// This class uses AudioFormat.ENCODING_PCM_16BIT, i.e. 16 bits per single channel sample. // This class uses AudioFormat.ENCODING_PCM_16BIT, i.e. 16 bits per sample.
private static final int BYTES_PER_MONO_SAMPLE = 2; private static final int BYTES_PER_SAMPLE = 2;
private static final long UNINITIALIZED_TIMESTAMP = -1; private static final long UNINITIALIZED_TIMESTAMP = Long.MIN_VALUE;
private static final long NANOS_PER_MICROS = 1000; private static final long NANOS_PER_MICROS = 1000;
private static final long MICROS_PER_SECOND = 1000000; private static final long MICROS_PER_SECOND = 1000000;
@ -54,22 +56,16 @@ public class MicrophoneHelper implements AudioDataProducer {
// Channel configuration of audio source, one of AudioRecord.CHANNEL_IN_MONO or // Channel configuration of audio source, one of AudioRecord.CHANNEL_IN_MONO or
// AudioRecord.CHANNEL_IN_STEREO. // AudioRecord.CHANNEL_IN_STEREO.
private final int channelConfig; private final int channelConfig;
// Bytes per audio frame. A frame is defined as a multi-channel audio sample. Possible values are
// 2 bytes for 1 channel, or 4 bytes for 2 channel audio.
private final int bytesPerFrame;
// Data storage allocated to record audio samples in a single function call to AudioRecord.read(). // Data storage allocated to record audio samples in a single function call to AudioRecord.read().
private final int bufferSize; private final int bufferSize;
// Bytes used per sample, accounts for number of channels of audio source. Possible values are 2
// bytes for a 1-channel sample and 4 bytes for a 2-channel sample.
private final int bytesPerSample;
private byte[] audioData;
// Timestamp provided by the AudioTimestamp object.
private AudioTimestamp audioTimestamp;
// Initial timestamp base. Can be set by the client so that all timestamps calculated using the // Initial timestamp base. Can be set by the client so that all timestamps calculated using the
// number of samples read per AudioRecord.read() function call start from this timestamp. // number of samples read per AudioRecord.read() function call start from this timestamp. If it
private long initialTimestamp = UNINITIALIZED_TIMESTAMP; // is not set by the client, then every startMicrophone(...) call marks a value for it.
// The total number of samples read from multiple calls to AudioRecord.read(). This is reset to private long initialTimestampMicros = UNINITIALIZED_TIMESTAMP;
// zero for every startMicrophone() call.
private long totalNumSamplesRead;
// AudioRecord is used to setup a way to record data from the audio source. See // AudioRecord is used to setup a way to record data from the audio source. See
// https://developer.android.com/reference/android/media/AudioRecord.htm for details. // https://developer.android.com/reference/android/media/AudioRecord.htm for details.
@ -99,9 +95,9 @@ public class MicrophoneHelper implements AudioDataProducer {
this.channelConfig = channelConfig; this.channelConfig = channelConfig;
// Number of channels of audio source, depending on channelConfig. // Number of channels of audio source, depending on channelConfig.
final int channelCount = channelConfig == AudioFormat.CHANNEL_IN_STEREO ? 2 : 1; final int numChannels = channelConfig == AudioFormat.CHANNEL_IN_STEREO ? 2 : 1;
bytesPerSample = BYTES_PER_MONO_SAMPLE * channelCount; bytesPerFrame = BYTES_PER_SAMPLE * numChannels;
// The minimum buffer size required by AudioRecord. // The minimum buffer size required by AudioRecord.
final int minBufferSize = final int minBufferSize =
@ -115,14 +111,13 @@ public class MicrophoneHelper implements AudioDataProducer {
// from the audio stream in each AudioRecord.read(...) call. // from the audio stream in each AudioRecord.read(...) call.
if (minBufferSize == AudioRecord.ERROR || minBufferSize == AudioRecord.ERROR_BAD_VALUE) { if (minBufferSize == AudioRecord.ERROR || minBufferSize == AudioRecord.ERROR_BAD_VALUE) {
Log.e(TAG, "AudioRecord minBufferSize unavailable."); Log.e(TAG, "AudioRecord minBufferSize unavailable.");
bufferSize = sampleRateInHz * MAX_READ_INTERVAL_SEC * bytesPerSample * BUFFER_SIZE_MULTIPLIER; bufferSize = sampleRateInHz * MAX_READ_INTERVAL_SEC * bytesPerFrame * BUFFER_SIZE_MULTIPLIER;
} else { } else {
bufferSize = minBufferSize * BUFFER_SIZE_MULTIPLIER; bufferSize = minBufferSize * BUFFER_SIZE_MULTIPLIER;
} }
} }
private void setupAudioRecord() { private void setupAudioRecord() {
audioData = new byte[bufferSize];
Log.d(TAG, "AudioRecord(" + sampleRateInHz + ", " + bufferSize + ")"); Log.d(TAG, "AudioRecord(" + sampleRateInHz + ", " + bufferSize + ")");
audioFormat = audioFormat =
@ -148,24 +143,18 @@ public class MicrophoneHelper implements AudioDataProducer {
() -> { () -> {
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO); android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO);
// Initial timestamp in case the AudioRecord.getTimestamp() function is unavailable. // The total number of frames read from multiple calls to AudioRecord.read() in this
long startTimestamp = // recording thread.
initialTimestamp != UNINITIALIZED_TIMESTAMP int totalNumFramesRead = 0;
? initialTimestamp
: System.nanoTime() / NANOS_PER_MICROS;
long sampleBasedTimestamp;
while (recording) { while (recording) {
if (audioRecord == null) { if (audioRecord == null) {
break; break;
} }
final int numBytesRead = // TODO: Fix audio data cloning.
audioRecord.read(audioData, /*offsetInBytes=*/ 0, /*sizeInBytes=*/ bufferSize); ByteBuffer audioData = ByteBuffer.allocateDirect(bufferSize);
// If AudioRecord.getTimestamp() is unavailable, calculate the timestamp using the final int numBytesRead = audioRecord.read(audioData, /*sizeInBytes=*/ bufferSize);
// number of samples read in the call to AudioRecord.read(). // Get the timestamp of the first audio frame in the latest read call.
long sampleBasedFallbackTimestamp = long timestampMicros = getTimestampMicros(totalNumFramesRead);
startTimestamp + totalNumSamplesRead * MICROS_PER_SECOND / sampleRateInHz;
sampleBasedTimestamp =
getTimestamp(/*fallbackTimestamp=*/ sampleBasedFallbackTimestamp);
if (numBytesRead <= 0) { if (numBytesRead <= 0) {
if (numBytesRead == AudioRecord.ERROR_INVALID_OPERATION) { if (numBytesRead == AudioRecord.ERROR_INVALID_OPERATION) {
Log.e(TAG, "ERROR_INVALID_OPERATION"); Log.e(TAG, "ERROR_INVALID_OPERATION");
@ -179,37 +168,65 @@ public class MicrophoneHelper implements AudioDataProducer {
// stopMicrophone() wasn't called. If the consumer called stopMicrophone(), discard // stopMicrophone() wasn't called. If the consumer called stopMicrophone(), discard
// the data read in the latest AudioRecord.read(...) function call. // the data read in the latest AudioRecord.read(...) function call.
if (recording && consumer != null) { if (recording && consumer != null) {
// TODO: Refactor audioData buffer cloning. consumer.onNewAudioData(audioData, timestampMicros, audioFormat);
consumer.onNewAudioData(audioData.clone(), sampleBasedTimestamp, audioFormat);
} }
// TODO: Replace byte[] with short[] audioData.
// It is expected that audioRecord.read() will read full samples and therefore // It is expected that audioRecord.read() will read full samples and therefore
// numBytesRead is expected to be a multiple of bytesPerSample. // numBytesRead is expected to be a multiple of bytesPerFrame.
int numSamplesRead = numBytesRead / bytesPerSample; int numFramesRead = numBytesRead / bytesPerFrame;
totalNumSamplesRead += numSamplesRead; totalNumFramesRead += numFramesRead;
} }
}); },
"microphoneHelperRecordingThread");
} }
// If AudioRecord.getTimestamp() is available and returns without error, this function returns the // If AudioRecord.getTimestamp() is available and returns without error, this function returns the
// timestamp using AudioRecord.getTimestamp(). If the function is unavailable, it returns a // timestamp using AudioRecord.getTimestamp(). If the function is unavailable, it returns a
// fallbackTimestamp provided as an argument to this method. // fallback timestamp calculated using the number of frames read so far.
private long getTimestamp(long fallbackTimestamp) { // Pass numFramesRead as the frame count before the latest AudioRecord.read(...) call to get
// the timestamp of the first audio frame in that call.
private long getTimestampMicros(long numFramesRead) {
AudioTimestamp audioTimestamp = getAudioRecordTimestamp();
if (audioTimestamp == null) {
if (numFramesRead == 0) {
initialTimestampMicros = markInitialTimestamp();
}
// If AudioRecord.getTimestamp() is unavailable, calculate the timestamp using the
// number of frames read in the call to AudioRecord.read().
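// Hypothetical illustration: at 8000 Hz each frame corresponds to 125 microseconds,
// so after 1600 frames the fallback timestamp is initialTimestampMicros + 200000.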
return initialTimestampMicros + numFramesRead * getMicrosPerSample();
}
// If audioTimestamp.framePosition is ahead of numFramesRead so far, then the offset is
// negative.
long frameOffset = numFramesRead - audioTimestamp.framePosition;
long audioTsMicros = audioTimestamp.nanoTime / NANOS_PER_MICROS;
return audioTsMicros + frameOffset * getMicrosPerSample();
}
private long markInitialTimestamp() {
return initialTimestampMicros != UNINITIALIZED_TIMESTAMP
? initialTimestampMicros
: System.nanoTime() / NANOS_PER_MICROS;
}
private long getMicrosPerSample() {
return MICROS_PER_SECOND / sampleRateInHz;
}
private AudioTimestamp getAudioRecordTimestamp() {
Preconditions.checkNotNull(audioRecord);
// AudioRecord.getTimestamp is only available at API Level 24 and above. // AudioRecord.getTimestamp is only available at API Level 24 and above.
// https://developer.android.com/reference/android/media/AudioRecord.html#getTimestamp(android.media.AudioTimestamp,%20int). // https://developer.android.com/reference/android/media/AudioRecord.html#getTimestamp(android.media.AudioTimestamp,%20int).
if (VERSION.SDK_INT >= VERSION_CODES.N) { if (VERSION.SDK_INT >= VERSION_CODES.N) {
if (audioTimestamp == null) { AudioTimestamp audioTimestamp = new AudioTimestamp();
audioTimestamp = new AudioTimestamp();
}
int status = audioRecord.getTimestamp(audioTimestamp, AudioTimestamp.TIMEBASE_MONOTONIC); int status = audioRecord.getTimestamp(audioTimestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
if (status == AudioRecord.SUCCESS) { if (status == AudioRecord.SUCCESS) {
return audioTimestamp.nanoTime / NANOS_PER_MICROS; return audioTimestamp;
} else { } else {
Log.e(TAG, "audioRecord.getTimestamp failed with status: " + status); Log.e(TAG, "audioRecord.getTimestamp failed with status: " + status);
} }
} }
return fallbackTimestamp; return null;
} }
// Returns the buffer size read by this class per AudioRecord.read() call. // Returns the buffer size read by this class per AudioRecord.read() call.
@ -221,8 +238,8 @@ public class MicrophoneHelper implements AudioDataProducer {
* Overrides the use of system time as the source of timestamps for audio packets. Not * Overrides the use of system time as the source of timestamps for audio packets. Not
* recommended. Provided to maintain compatibility with existing usage by CameraRecorder. * recommended. Provided to maintain compatibility with existing usage by CameraRecorder.
*/ */
public void setInitialTimestamp(long initialTimestamp) { public void setInitialTimestampMicros(long initialTimestampMicros) {
this.initialTimestamp = initialTimestamp; this.initialTimestampMicros = initialTimestampMicros;
} }
// This method sets up a new AudioRecord object for reading audio data from the microphone. It // This method sets up a new AudioRecord object for reading audio data from the microphone. It
@ -241,7 +258,6 @@ public class MicrophoneHelper implements AudioDataProducer {
} }
recording = true; recording = true;
totalNumSamplesRead = 0;
recordingThread.start(); recordingThread.start();
Log.d(TAG, "AudioRecord is recording audio."); Log.d(TAG, "AudioRecord is recording audio.");
@ -256,6 +272,7 @@ public class MicrophoneHelper implements AudioDataProducer {
// Stops the AudioRecord object from reading data from the microphone. // Stops the AudioRecord object from reading data from the microphone.
public void stopMicrophoneWithoutCleanup() { public void stopMicrophoneWithoutCleanup() {
Preconditions.checkNotNull(audioRecord);
if (!recording) { if (!recording) {
return; return;
} }
@ -277,6 +294,7 @@ public class MicrophoneHelper implements AudioDataProducer {
// Releases the AudioRecord object when there is no ongoing recording. // Releases the AudioRecord object when there is no ongoing recording.
public void cleanup() { public void cleanup() {
Preconditions.checkNotNull(audioRecord);
if (recording) { if (recording) {
return; return;
} }


@ -22,7 +22,7 @@
namespace { namespace {
ABSL_CONST_INIT absl::Mutex g_jvm_mutex(absl::kConstInit); ABSL_CONST_INIT absl::Mutex g_jvm_mutex(absl::kConstInit);
JavaVM* g_jvm GUARDED_BY(g_jvm_mutex); JavaVM* g_jvm ABSL_GUARDED_BY(g_jvm_mutex);
class JvmThread { class JvmThread {
public: public:


@ -34,6 +34,9 @@
/// Whether to rotate video buffers with device rotation. /// Whether to rotate video buffers with device rotation.
@property(nonatomic) BOOL autoRotateBuffers; @property(nonatomic) BOOL autoRotateBuffers;
/// The camera intrinsic matrix.
@property(nonatomic, readonly) matrix_float3x3 cameraIntrinsicMatrix;
/// The capture session. /// The capture session.
@property(nonatomic, readonly) AVCaptureSession *session; @property(nonatomic, readonly) AVCaptureSession *session;


@ -25,7 +25,7 @@
AVCaptureDeviceInput* _videoDeviceInput; AVCaptureDeviceInput* _videoDeviceInput;
AVCaptureVideoDataOutput* _videoDataOutput; AVCaptureVideoDataOutput* _videoDataOutput;
AVCaptureDepthDataOutput* _depthDataOutput; AVCaptureDepthDataOutput* _depthDataOutput;
AVCaptureDevice *_currentDevice; AVCaptureDevice* _currentDevice;
matrix_float3x3 _cameraIntrinsicMatrix; matrix_float3x3 _cameraIntrinsicMatrix;
OSType _pixelFormatType; OSType _pixelFormatType;
@ -50,8 +50,7 @@
return self; return self;
} }
- (void)setDelegate:(id<MPPInputSourceDelegate>)delegate - (void)setDelegate:(id<MPPInputSourceDelegate>)delegate queue:(dispatch_queue_t)queue {
queue:(dispatch_queue_t)queue {
[super setDelegate:delegate queue:queue]; [super setDelegate:delegate queue:queue];
// Note that _depthDataOutput and _videoDataOutput may not have been created yet. In that case, // Note that _depthDataOutput and _videoDataOutput may not have been created yet. In that case,
// this message to nil is ignored, and the delegate will be set later by setupCamera. // this message to nil is ignored, and the delegate will be set later by setupCamera.
@ -157,9 +156,7 @@
- (void)setPixelFormatType:(OSType)pixelFormatType { - (void)setPixelFormatType:(OSType)pixelFormatType {
_pixelFormatType = pixelFormatType; _pixelFormatType = pixelFormatType;
if ([self isRunning]) { if ([self isRunning]) {
_videoDataOutput.videoSettings = @{ _videoDataOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(_pixelFormatType)};
(id)kCVPixelBufferPixelFormatTypeKey : @(_pixelFormatType)
};
} }
} }
@ -181,10 +178,10 @@
} }
AVCaptureDeviceDiscoverySession* deviceDiscoverySession = [AVCaptureDeviceDiscoverySession AVCaptureDeviceDiscoverySession* deviceDiscoverySession = [AVCaptureDeviceDiscoverySession
discoverySessionWithDeviceTypes:@[ discoverySessionWithDeviceTypes:@[ _cameraPosition == AVCaptureDevicePositionFront &&
_cameraPosition == AVCaptureDevicePositionFront && _useDepth ? _useDepth
AVCaptureDeviceTypeBuiltInTrueDepthCamera : ? AVCaptureDeviceTypeBuiltInTrueDepthCamera
AVCaptureDeviceTypeBuiltInWideAngleCamera] : AVCaptureDeviceTypeBuiltInWideAngleCamera ]
mediaType:AVMediaTypeVideo mediaType:AVMediaTypeVideo
position:_cameraPosition]; position:_cameraPosition];
AVCaptureDevice* videoDevice = AVCaptureDevice* videoDevice =
@ -211,9 +208,7 @@
// kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, // kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
// kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
// kCVPixelFormatType_32BGRA. // kCVPixelFormatType_32BGRA.
_videoDataOutput.videoSettings = @{ _videoDataOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(_pixelFormatType)};
(id)kCVPixelBufferPixelFormatTypeKey : @(_pixelFormatType)
};
} }
// Remove Old Depth Depth // Remove Old Depth Depth
@ -229,14 +224,13 @@
[_session addOutput:_depthDataOutput]; [_session addOutput:_depthDataOutput];
AVCaptureConnection* connection = AVCaptureConnection* connection =
[_depthDataOutput connectionWithMediaType:AVMediaTypeDepthData]; [_depthDataOutput connectionWithMediaType:AVMediaTypeDepthData];
// Set this when we have a handler. // Set this when we have a handler.
if (self.delegateQueue) { if (self.delegateQueue) {
[_depthDataOutput setDelegate:self callbackQueue:self.delegateQueue]; [_depthDataOutput setDelegate:self callbackQueue:self.delegateQueue];
} }
} } else
else
_depthDataOutput = nil; _depthDataOutput = nil;
} }
@ -269,15 +263,8 @@
// Receives frames from the camera. Invoked on self.frameHandlerQueue. // Receives frames from the camera. Invoked on self.frameHandlerQueue.
- (void)captureOutput:(AVCaptureOutput*)captureOutput - (void)captureOutput:(AVCaptureOutput*)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection*)connection { fromConnection:(AVCaptureConnection*)connection {
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if ([self.delegate respondsToSelector:@selector(processVideoFrame:timestamp:fromSource:)]) {
[self.delegate processVideoFrame:imageBuffer timestamp:timestamp fromSource:self];
} else if ([self.delegate respondsToSelector:@selector(processVideoFrame:fromSource:)]) {
[self.delegate processVideoFrame:imageBuffer fromSource:self];
}
if (!_didReadCameraIntrinsicMatrix) { if (!_didReadCameraIntrinsicMatrix) {
// Get camera intrinsic matrix. // Get camera intrinsic matrix.
CFTypeRef cameraIntrinsicData = CFTypeRef cameraIntrinsicData =
@ -291,15 +278,22 @@ didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
} }
_didReadCameraIntrinsicMatrix = YES; _didReadCameraIntrinsicMatrix = YES;
} }
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if ([self.delegate respondsToSelector:@selector(processVideoFrame:timestamp:fromSource:)]) {
[self.delegate processVideoFrame:imageBuffer timestamp:timestamp fromSource:self];
} else if ([self.delegate respondsToSelector:@selector(processVideoFrame:fromSource:)]) {
[self.delegate processVideoFrame:imageBuffer fromSource:self];
}
} }
#pragma mark - AVCaptureDepthDataOutputDelegate methods #pragma mark - AVCaptureDepthDataOutputDelegate methods
// Receives depth frames from the camera. Invoked on self.frameHandlerQueue. // Receives depth frames from the camera. Invoked on self.frameHandlerQueue.
- (void)depthDataOutput:(AVCaptureDepthDataOutput *)output - (void)depthDataOutput:(AVCaptureDepthDataOutput*)output
didOutputDepthData:(AVDepthData *)depthData didOutputDepthData:(AVDepthData*)depthData
timestamp:(CMTime)timestamp timestamp:(CMTime)timestamp
connection:(AVCaptureConnection *)connection { connection:(AVCaptureConnection*)connection {
if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) { if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32]; depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
} }


@ -312,6 +312,10 @@ CVPixelBufferRef CreateCVPixelBufferForImageFramePacket(
pixel_format = kCVPixelFormatType_OneComponent8; pixel_format = kCVPixelFormatType_OneComponent8;
break; break;
case mediapipe::ImageFormat::VEC32F1:
pixel_format = kCVPixelFormatType_OneComponent32Float;
break;
default: default:
return ::mediapipe::UnknownErrorBuilder(MEDIAPIPE_LOC) return ::mediapipe::UnknownErrorBuilder(MEDIAPIPE_LOC)
<< "unsupported ImageFrame format: " << image_format; << "unsupported ImageFrame format: " << image_format;


@ -193,17 +193,17 @@ class BoxTracker {
} }
// Returns true if any tracking is ongoing for the specified id. // Returns true if any tracking is ongoing for the specified id.
bool IsTrackingOngoingForId(int id) LOCKS_EXCLUDED(status_mutex_); bool IsTrackingOngoingForId(int id) ABSL_LOCKS_EXCLUDED(status_mutex_);
// Returns true if any tracking is ongoing. // Returns true if any tracking is ongoing.
bool IsTrackingOngoing() LOCKS_EXCLUDED(status_mutex_); bool IsTrackingOngoing() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Cancels all ongoing tracks. To avoid race conditions all NewBoxTrack's in // Cancels all ongoing tracks. To avoid race conditions all NewBoxTrack's in
// flight will also be canceled. Future NewBoxTrack's will be canceled. // flight will also be canceled. Future NewBoxTrack's will be canceled.
// NOTE: To resume execution, you have to call ResumeTracking() before // NOTE: To resume execution, you have to call ResumeTracking() before
// issuing more NewBoxTrack calls. // issuing more NewBoxTrack calls.
void CancelAllOngoingTracks() LOCKS_EXCLUDED(status_mutex_); void CancelAllOngoingTracks() ABSL_LOCKS_EXCLUDED(status_mutex_);
void ResumeTracking() LOCKS_EXCLUDED(status_mutex_); void ResumeTracking() ABSL_LOCKS_EXCLUDED(status_mutex_);
// Waits for all ongoing tracks to complete. // Waits for all ongoing tracks to complete.
// Optionally accepts a timeout in microseconds (== 0 for infinite wait). // Optionally accepts a timeout in microseconds (== 0 for infinite wait).
@ -212,7 +212,7 @@ class BoxTracker {
// be called before destructing the BoxTracker object or dangling running // threads might try to access invalid data.
// threads might try to access invalid data. // threads might try to access invalid data.
bool WaitForAllOngoingTracks(int timeout_us = 0) bool WaitForAllOngoingTracks(int timeout_us = 0)
LOCKS_EXCLUDED(status_mutex_); ABSL_LOCKS_EXCLUDED(status_mutex_);
// Debug function to obtain raw TrackingData closest to the specified // Debug function to obtain raw TrackingData closest to the specified
// timestamp. This call will read from disk on every invocation so it is // timestamp. This call will read from disk on every invocation so it is
@ -247,7 +247,7 @@ class BoxTracker {
// Waits with timeout for chunkfile to become available. Returns true on // Waits with timeout for chunkfile to become available. Returns true on
// success, false if waited till timeout or when canceled. // success, false if waited till timeout or when canceled.
bool WaitForChunkFile(int id, int checkpoint, const std::string& chunk_file) bool WaitForChunkFile(int id, int checkpoint, const std::string& chunk_file)
LOCKS_EXCLUDED(status_mutex_); ABSL_LOCKS_EXCLUDED(status_mutex_);
// Determines closest index in passed TrackingDataChunk // Determines closest index in passed TrackingDataChunk
int ClosestFrameIndex(int64 msec, const TrackingDataChunk& chunk) const; int ClosestFrameIndex(int64 msec, const TrackingDataChunk& chunk) const;
@ -305,26 +305,27 @@ class BoxTracker {
// Ids are scheduled exclusively, run this method to acquire lock. // Ids are scheduled exclusively, run this method to acquire lock.
// Returns false if id could not be scheduled (e.g. id got canceled during // Returns false if id could not be scheduled (e.g. id got canceled during
// waiting). // waiting).
bool WaitToScheduleId(int id) LOCKS_EXCLUDED(status_mutex_); bool WaitToScheduleId(int id) ABSL_LOCKS_EXCLUDED(status_mutex_);
// Signals end of scheduling phase. Requires status mutex to be held. // Signals end of scheduling phase. Requires status mutex to be held.
void DoneSchedulingId(int id) EXCLUSIVE_LOCKS_REQUIRED(status_mutex_); void DoneSchedulingId(int id) ABSL_EXCLUSIVE_LOCKS_REQUIRED(status_mutex_);
// Removes all checkpoints within vicinity of new checkpoint. // Removes all checkpoints within vicinity of new checkpoint.
void RemoveCloseCheckpoints(int id, int checkpoint) void RemoveCloseCheckpoints(int id, int checkpoint)
EXCLUSIVE_LOCKS_REQUIRED(status_mutex_); ABSL_EXCLUSIVE_LOCKS_REQUIRED(status_mutex_);
// Removes specific checkpoint. // Removes specific checkpoint.
void ClearCheckpoint(int id, int checkpoint) void ClearCheckpoint(int id, int checkpoint)
EXCLUSIVE_LOCKS_REQUIRED(status_mutex_); ABSL_EXCLUSIVE_LOCKS_REQUIRED(status_mutex_);
// Terminates tracking for specific id and checkpoint. // Terminates tracking for specific id and checkpoint.
void CancelTracking(int id, int checkpoint) void CancelTracking(int id, int checkpoint)
EXCLUSIVE_LOCKS_REQUIRED(status_mutex_); ABSL_EXCLUSIVE_LOCKS_REQUIRED(status_mutex_);
// Implementation function for IsTrackingOngoing assuming mutex is already // Implementation function for IsTrackingOngoing assuming mutex is already
// held. // held.
bool IsTrackingOngoingMutexHeld() EXCLUSIVE_LOCKS_REQUIRED(status_mutex_); bool IsTrackingOngoingMutexHeld()
ABSL_EXCLUSIVE_LOCKS_REQUIRED(status_mutex_);
// Captures tracking status for each checkpoint // Captures tracking status for each checkpoint
struct TrackStatus { struct TrackStatus {
@ -337,21 +338,21 @@ class BoxTracker {
private: private:
// Stores computed tracking paths_ for all boxes. // Stores computed tracking paths_ for all boxes.
std::unordered_map<int, Path> paths_ GUARDED_BY(path_mutex_); std::unordered_map<int, Path> paths_ ABSL_GUARDED_BY(path_mutex_);
absl::Mutex path_mutex_; absl::Mutex path_mutex_;
// For each id and each checkpoint stores current tracking status. // For each id and each checkpoint stores current tracking status.
std::unordered_map<int, std::map<int, TrackStatus>> track_status_ std::unordered_map<int, std::map<int, TrackStatus>> track_status_
GUARDED_BY(status_mutex_); ABSL_GUARDED_BY(status_mutex_);
// Keeps track which ids are currently processing in NewBoxTrack. // Keeps track which ids are currently processing in NewBoxTrack.
std::unordered_map<int, bool> new_box_track_ GUARDED_BY(status_mutex_); std::unordered_map<int, bool> new_box_track_ ABSL_GUARDED_BY(status_mutex_);
absl::Mutex status_mutex_; absl::Mutex status_mutex_;
bool canceling_ GUARDED_BY(status_mutex_) = false; bool canceling_ ABSL_GUARDED_BY(status_mutex_) = false;
// Use to signal changes to status_condvar_; // Use to signal changes to status_condvar_;
absl::CondVar status_condvar_ GUARDED_BY(status_mutex_); absl::CondVar status_condvar_ ABSL_GUARDED_BY(status_mutex_);
BoxTrackerOptions options_; BoxTrackerOptions options_;


@ -332,7 +332,7 @@ void ParallelFor(size_t start, size_t end, size_t grain_size,
struct { struct {
absl::Mutex mutex; absl::Mutex mutex;
absl::CondVar completed; absl::CondVar completed;
int iterations_remain GUARDED_BY(mutex); int iterations_remain ABSL_GUARDED_BY(mutex);
} loop; } loop;
{ {
absl::MutexLock lock(&loop.mutex); absl::MutexLock lock(&loop.mutex);

third_party/easyexif.BUILD (new vendored file)

@ -0,0 +1,11 @@
package(default_visibility = ["//visibility:public"])
licenses(["notice"]) # BSD License.
exports_files(["LICENSE"])
cc_library(
name = "easyexif",
srcs = ["exif.cpp"],
hdrs = ["exif.h"],
)