It should be safe. On the one hand, the documentation of tf.GradientTape.watch says:
Ensures that tensor is being traced by this tape.
"Ensures" seems to imply that it will make sure it is traced in case it is not. In fact, the documentation does not give any indication that using it twice over the same object should be a problem (although it wouldn't hurt if they made that explicit).
But in any case, we can dig into the source code to check. Ultimately, calling watch on a variable (the answer ends up the same if it is not a variable, but the path diverges slightly) comes down to the WatchVariable method of a GradientTape class in C++:
void WatchVariable(PyObject* v) {
  tensorflow::Safe_PyObjectPtr handle(PyObject_GetAttrString(v, "handle"));
  if (handle == nullptr) {
    return;
  }
  tensorflow::int64 id = FastTensorId(handle.get());
  if (!PyErr_Occurred()) {
    this->Watch(id);
  }
  tensorflow::mutex_lock l(watched_variables_mu_);
  auto insert_result = watched_variables_.emplace(id, v);
  if (insert_result.second) {
    // Only increment the reference count if we aren't already watching this
    // variable.
    Py_INCREF(v);
  }
}
The second half of the method shows that the watched variable is added to watched_variables_, which is a std::set, so adding the same variable a second time does nothing. The result of the insertion is then checked so that the Python reference count is only incremented when the variable was not already being watched. The first half basically calls Watch:
template <typename Gradient, typename BackwardFunction, typename TapeTensor>
void GradientTape<Gradient, BackwardFunction, TapeTensor>::Watch(
    int64 tensor_id) {
  tensor_tape_.emplace(tensor_id, -1);
}
tensor_tape_ is a map (specifically a tensorflow::gtl::FlatMap, pretty much the same as a standard C++ map for this purpose), so if tensor_id is already there the emplace call will have no effect.
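For intuition, emplace with a key that is already present behaves like Python's dict.setdefault: the existing entry is kept and nothing changes. A small illustrative analogy (not actual TensorFlow code):

tensor_tape = {}

def watch(tensor_id):
    # Mirrors tensor_tape_.emplace(tensor_id, -1): inserts only if the key is absent.
    tensor_tape.setdefault(tensor_id, -1)

watch(42)
watch(42)  # the key is already present, so the second call changes nothing
print(tensor_tape)  # {42: -1}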
So, even though it is not explicitly documented, everything suggests that watching the same tensor more than once should cause no issues.
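As a final empirical check (again assuming TF 2.x eager execution), the gradient for a variable is the same whether watch is called once or twice, since trainable variables are watched automatically anyway:

import tensorflow as tf

v = tf.Variable(3.0)
with tf.GradientTape() as tape:
    tape.watch(v)  # redundant: trainable variables are watched automatically
    tape.watch(v)  # watching it a second time is equally harmless
    y = v * v

print(tape.gradient(y, v))  # tf.Tensor(6.0, shape=(), dtype=float32)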