My approach:
```
find . -depth -name "* *" -execdir bash -c 'pwd; for f in "$@"; do mv -nv "$f" "${f// /_}"; done' dummy {} +
```
Multi-line version for readability:
```
find . -depth -name "* *" -execdir \
  bash -c '
    pwd
    for f in "$@"; do
      mv -nv "$f" "${f// /_}"
    done
  ' dummy {} +
```
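As a sanity check, here is a hypothetical run in a throwaway directory (the file names are invented for the demo):

```shell
# Demo setup: a directory and two files whose names contain spaces.
tmp="$(mktemp -d)"
cd "$tmp"
mkdir "my dir"
touch "my dir/a file.txt" "top level.txt"

# The one-liner from above; -depth renames contents before their directory.
find . -depth -name "* *" -execdir bash -c 'pwd; for f in "$@"; do mv -nv "$f" "${f// /_}"; done' dummy {} +

ls -R   # afterwards: my_dir/ (containing a_file.txt) and top_level.txt
```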
Explanation:
- `find . -name "* *"` finds the objects that need renaming. `find` is very flexible with its tests, so if you want, e.g., to rename directories only, start with `find . -depth -type d -name "* *"`.
- `-execdir` runs the given command (`bash`) in the directory containing the matched object, so any path passed via `{}` always looks like `./bar`, never `./foo/bar`. This means we don't need to care about the full path. The downside is that `mv -v` won't show the path, so I added `pwd` just for information (you can omit it if you want).
- `bash` lets us use the `"${baz// /_}"` syntax.
- `-depth` ensures the following won't happen: `find` renames a directory (if applicable) and then tries to process its contents by the old path.
- `{} +` feeds `bash` multiple objects at once (contrary to the `{} \;` syntax). We iterate over them with `for f in "$@"`. The point is not to run a separate `bash` process for every object, since creating a new process is costly. I think we cannot easily avoid running separate `mv`s; still, reducing the number of `bash` invocations seems a good optimization (`pwd` is a builtin in `bash` and doesn't cost us a process). However, `-execdir … {} +` won't pass files from different directories together. By using `-exec … {} +` instead we may further reduce the number of processes, but then we need to care about the paths, not just the filenames (compare this other answer; it seems to do a decent job, but `while read` slows it down). This is a matter of speed versus (relative) simplicity. My solution with `-exec` is down below.
- `dummy` just before `{}` becomes `$0` inside our `bash`. We need this dummy argument because `"$@"` is equivalent to `"$1" "$2" …` (not `"$0" "$1" …`). This way everything passed via `{}` is available later as `"$@"`.
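The role of `dummy` is easy to check in isolation:

```shell
# The first argument after the command string becomes $0, not $1;
# "$@" therefore starts with the second one.
bash -c 'echo "zeroth: $0"; printf "arg: %s\n" "$@"' dummy one two
# → zeroth: dummy
#   arg: one
#   arg: two
```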
More complex, slightly optimized version (various `${...}` tricks taken from another answer):
```
find . -depth -name "* *" -exec \
  bash -c '
    for f in "$@"; do
      n="${f##*/}"
      mv -nv "$f" "${f%/*}/${n// /_}"
    done
  ' dummy {} +
```
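The two expansions split each path into a directory part and a base name, so only the base name gets its spaces replaced. A quick illustration with a made-up path:

```shell
# A sample path to illustrate the expansions used above (bash syntax):
f='./some dir/a file.txt'
echo "${f##*/}"             # strip longest prefix up to last "/": a file.txt
echo "${f%/*}"              # strip shortest suffix from last "/": ./some dir
n="${f##*/}"
echo "${f%/*}/${n// /_}"    # recombined: ./some dir/a_file.txt
```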
Another (experimental!) approach involves `vidir`. The trick is that `vidir` uses `$EDITOR`, which need not be an interactive editor:
```
find . -name "* *" | EDITOR='sed -i s/\d32/_/g' vidir -
```
Caveats:
- This will fail for file/directory names with special characters (e.g. newlines).
- We can't use `s/ /_/g` directly; `\d32` is a workaround.
- Because of how `vidir` works, the approach would get tricky if you wanted to replace a digit or a tab.
- Here `vidir` works with paths, not only filenames (base names), thus renaming files only (i.e. not directories) may be hard.
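On the `\d32` workaround: GNU `sed` accepts `\dNNN` as a decimal character escape (32 is the space character), which keeps the `EDITOR` string free of a literal space that would otherwise be mangled when the command is split into words. A quick check (GNU `sed` only; BSD `sed` lacks this escape):

```shell
# \d32 matches the space character (decimal ASCII 32) in GNU sed.
printf 'a b c\n' | sed 's/\d32/_/g'
# → a_b_c
```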
Nevertheless, if you know what you're doing, this may do the job even faster. I don't recommend such (ab)use of `vidir` in the general case, though; I included it in my answer because I found this experimental approach interesting.