This is almost 5 years old and I'm sure you've either solved the problem or moved on. I stumbled on this through a link on an unrelated ZFS issue on github.
But I noticed that no one actually answered both of your implied questions. (The accepted answer does, as an aside, suggest a partial solution to both - duperemove -dr - but that tool and command can't solve it in the way you asked or implied, i.e. at the file level. It's a block/extent-level deduper. Which may be close enough.)
So as I'm sure you've figured out by now, there are necessarily two problems you are trying to solve. In reverse order:
Dedup identical photos in a way that doesn't permanently link them together for life. This is what you explicitly asked about.
Determine which photos are actually identical in the first place. The same photos often have different filenames. So relying on filename - or even time + size - isn't necessarily enough.
Identifying identical files is a pretty well-solved problem by now. The world has pretty much agreed that we only want to compare file contents - and not metadata like filename, directory, extended attributes, security ACLs, or even necessarily time/datestamps.
So (for the benefit of others stumbling on this), what most tools do is: first, make an internal list of files with identical sizes, above a minimum threshold. (There's no space saving to be had deduping files below a certain size.) Then they compute a cryptographic hash of just the file contents, for each same-size file. (For the paranoid, many tools can also do full binary compares of the contents, but optionally cache just the checksums for subsequent runs.)
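For example, a hedged sketch with GNU find and coreutils (the 64k floor and the current directory are arbitrary choices of mine, and it assumes no tabs or newlines in filenames): this prints groups of files with identical contents, hashing only the files whose size occurs more than once.

```bash
# List candidate files with their sizes, hash only those whose size repeats,
# then group the output by identical SHA-256 (the first 64 characters of each line).
find . -type f -size +64k -printf '%s\t%p\n' |
  awk -F'\t' '
    { size[$2] = $1; count[$1]++ }
    END { for (p in size) if (count[size[p]] > 1) print p }
  ' |
  xargs -r -d '\n' sha256sum |
  sort |
  uniq -w64 --all-repeated=separate
```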
I had to solve essentially the same problem, around the same time.
By now there are many tools on GitHub that do both, in "offline" batch mode for CoW filesystems: find duplicates, cache results for subsequent runs, and perform an ioctl() with FICLONE (the same thing cp --reflink does), so that the identical files share the same blocks or extents - but if one is edited, they diverge again.
(Unlike hardlinks, which can be unintentionally and permanently destructive - sometimes years later, in ways you may not even notice until even more years after that, long after backups and snapshots have rotated out. This is why I never, ever use hardlinks for user-level files, and only in exceptionally narrow, highly technical use-cases; for regular user files they almost never make sense.)
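If you haven't seen reflinks in action, here's roughly what that looks like from the shell - the filenames are made up, and it only works within a single CoW filesystem (btrfs, XFS, recent OpenZFS):

```bash
# Instant copy that shares extents with the original; either file can still be
# edited later, at which point only the changed blocks diverge.
cp --reflink=always 2023-06-vacation-001.jpg 2023-06-vacation-001-copy.jpg

# On btrfs/XFS you can eyeball the sharing (look for the "shared" flag per extent):
filefrag -v 2023-06-vacation-001.jpg 2023-06-vacation-001-copy.jpg
```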
Btrfs was the first to support reflink copies - which is exactly what you want - over 15 years ago, while still in development.
cp --reflink became part of coreutils the next year. Some GUI file managers even detect whether they are copy/pasting files within the same CoW filesystem boundary, and will essentially do a cp --reflink=always if so. It's fun to see TBs of data copied instantly that way.
XFS added support about six years later.
Bcachefs, which someone mentioned five years ago, is still in development (going on ten years now), and its rather prickly main developer, Kent Overstreet, has been promising near production-readiness for about that long. I've been predicting for years now that it will be booted from the kernel tree, due to near-Muskian levels of over-promising and under-delivering.
But in spite of recent head-butting with Torvalds himself, the simple passing of more time - and any ongoing progress - makes my prediction less likely. (And I do hope I'm wrong. Regardless of personality issues and big talk, it's a promising FS.)
Meanwhile, through all that drama, OpenZFS managed to pull off what everyone said was impossible: getting cp --reflink support into ZFS, working and stable.
I don't know what the best tool nowadays is to identify and dedup duplicate files at the file level. I wrote my own a while back, basically an elaborate script wrapper around rmlint.
Duperemove and Bees dedupe at the block or extent level, not the file level - which in the end may not make a practical difference in terms of space saving or the ability of files to later diverge. But to me personally, when scripting a custom solution, the distinction is at least semantically important.
As far as solutions I'm aware of that work at the file level: check out rdfind and jdupes.
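Typical invocations look something like this (hedged - check the man pages, since flags vary by version and build options, and ~/photos is just a placeholder):

```bash
# rdfind: report-only pass; it writes a results.txt and changes nothing.
rdfind -dryrun true ~/photos

# jdupes: scan recursively, then hand matching files to the kernel's dedupe
# ioctl on filesystems that support it (btrfs/XFS) - if built with dedupe support.
jdupes -r -B ~/photos
```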
rmlint deserves an honorable mention, but requires a lot of additional work - because it's a general-purpose tool that does much more than just dedup. (There is a feature request to add a flag for a single-run "find-dupes-and-also-dedupe-them", for CoW filesystems, that seems to have some developer interest but we'll see.)
It's also a surprisingly straightforward problem to tackle yourself with basic scripting and coreutils - one that can be easily and fairly safely vibe-coded with a chatbot - using find, sha256sum, sort, and uniq, for example. The only mildly sticky point I ran into myself was the solvable problem of preemptive file locking, for additional safety. For better performance over multiple passes, you can script a local cache via sqlite3 (storing checksums plus the metadata needed to know whether a checksum has to be regenerated); for persistence across moves and renames, you can also store hashes and metadata on the files themselves with setfattr (as rmlint optionally does). Though if all this is done in, say, bash, performance isn't going to be amazing.
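As a rough illustration only - not production code - here's the shape of such a script, assuming bash 4+, GNU coreutils, and a CoW filesystem, with no locking, no caching, no byte-for-byte verification, and the group-by-size optimization from earlier skipped for brevity. The directory argument and the 64k floor are placeholders.

```bash
#!/usr/bin/env bash
# Minimal file-level dedup sketch for a CoW filesystem (btrfs, XFS, recent OpenZFS).
set -euo pipefail

dir="${1:-.}"            # directory to scan (placeholder default)
declare -A keeper        # sha256 -> path of the first file seen with that hash

while IFS= read -r -d '' f; do
    h=$(sha256sum -- "$f" | cut -c1-64)
    if [[ -n "${keeper[$h]:-}" ]]; then
        # Same content: overwrite the duplicate with a reflink copy of the keeper.
        # The two files now share extents, but editing either makes them diverge.
        # Note: this resets the duplicate's mtime; a careful tool would restore it.
        cp --reflink=always -- "${keeper[$h]}" "$f"
        printf 'deduped: %s -> shares extents with %s\n' "$f" "${keeper[$h]}"
    else
        keeper[$h]=$f
    fi
done < <(find "$dir" -type f -size +64k -print0)
```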
But since there are at least a couple of tools that do all that already, in optimized machine-code form - and surely a half dozen or more others on GitHub - there's arguably no real need to roll your own script, unless you just want more control over, and visibility into, the process.