I'm trying to understand how blind detection (detection without the cover work) works when applying linear correlation. This is my understanding so far:
Embedding (one-bit):
- We generate a reference pattern $w_r$ using a watermarking key.
- We compute the message mark $W_m$ by multiplying $w_r$ with a strength factor $a$, taking the negative values if we want to embed a zero bit, i.e. $W_m = \pm a \, w_r$.
- Then: $C = C_0 + W_m + N$, where $C_0$ is the cover work and $N$ is noise. (I sketch this step in code right after this list.)
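To check that I have the embedding right, here is a minimal numpy sketch of how I picture it. The seeded PRNG standing in for the watermarking key, the Gaussian $w_r$, and the function name `embed` are my own assumptions, not something taken from the literature:

```python
import numpy as np

def embed(cover, key, bit, a=1.0):
    # Reference pattern w_r: regenerated from the key by a seeded PRNG,
    # zero-mean so it is (in expectation) uncorrelated with the cover work.
    rng = np.random.default_rng(key)
    w_r = rng.standard_normal(cover.shape)
    # Message mark W_m = +a*w_r for a one bit, -a*w_r for a zero bit.
    W_m = a * w_r if bit == 1 else -a * w_r
    # Received work C = C_0 + W_m + N (the noise N is added by the channel later).
    return cover + W_m
```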
Blind detection (as found in the literature):
- We need to calculate the linear correlation between $w_r$ and $C$ to detect the presence of $w_r$ in $C$. Linear correlation is, in general, the normalized scalar product
$$LC(C, w_r) = \frac{1}{ij} \, C \cdot w_r,$$
where $i \times j$ is the size of the work. Substituting $C = C_0 + W_m + N$, the product $C \cdot w_r$ consists of $C_0 \cdot w_r + W_m \cdot w_r + N \cdot w_r$. It is said that, because the first and the last term are probably small while $W_m \cdot w_r$ has large magnitude,
$$LC(C, w_r) \approx \pm a \, \frac{|w_r|^2}{ij}.$$
(I sketch the detection statistic in code right after this point.)
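Here is how I would compute that statistic; a minimal numpy sketch assuming, as above, that $w_r$ is regenerated from the key by a seeded PRNG (function name and signature are my own):

```python
import numpy as np

def linear_correlation(C, key):
    # Regenerate the same reference pattern w_r from the key.
    rng = np.random.default_rng(key)
    w_r = rng.standard_normal(C.shape)
    # LC(C, w_r) = (1/(i*j)) * C . w_r -- note that only the received work C
    # and the key are used; the cover work C_0 never appears, which is what
    # makes the detection blind.
    return float(np.sum(C * w_r)) / C.size
```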
This makes no sense to me. Why should we only consider $\pm a \, |w_r|^2 / (ij)$ for detecting watermarks, without using $C$? Isn't this term $\pm a \, |w_r|^2 / (ij)$ independent of $C$?
Or does this only explain why we can say that a low linear correlation corresponds to a zero bit and a high value to a one bit, while in practice we just compute $LC(C, w_r)$ like we usually do, via the scalar product?
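If this second reading is correct, I imagine the actual decision rule would look something like the following (the threshold $\tau = a/2$ and everything else here is my guess, reusing `linear_correlation` from the sketch above):

```python
def detect_bit(C, key, a=1.0):
    lc = linear_correlation(C, key)
    # For a zero-mean, unit-variance w_r, |w_r|^2/(i*j) is about 1, so the
    # expected statistic is roughly +a (one bit) or -a (zero bit); tau = a/2
    # splits the difference (a guessed threshold, not from the literature).
    tau = a / 2
    if lc > tau:
        return 1
    if lc < -tau:
        return 0
    return None  # |LC| below threshold: no watermark detected
```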
Thanks!