
I need some help with bash processing of a text file. In my effort to clean up decades' worth of Windows use, I have written a simple command to find all the instances of a specific file - specifically, all the desktop.ini files. The results of this are perfect as far as I can tell.

find . -name "desktop.ini" > ~/desktop-ini.txt

gives me 1038 lines like:

./Billy Joel/Greatest Hits, Vols. 1 & 2 (1973-1985), Disc 1/desktop.ini
./Billy Joel/Greatest Hits, Vols. 1 & 2 (1973-1985), Disc 2/desktop.ini
./Billy Joel/desktop.ini

So now I'd like to delete all these files using a shell script, but as you can see there are a ton of unpredictable "special characters" that will derail a simple rm <$filepath> script. I did try (once) a single line enclosing the whole path in single (') quotes - and it worked.

rm './Aerosmith/Classics Live!, Vol. 2/desktop.ini'

But in the shell script (with the rm syntax replaced by ls syntax) it crashes gloriously (of course). Can someone point me to a 'how-to' or resource (besides a general regex, sed, or awk text) that shows how to handle the file path so it is presented correctly?

I thought of a few things, like counting the '/' characters, finding the last one, creating a new string, 'cd'-ing to that directory, deleting the file, moving back to the root, and repeating. This makes logical sense, but it requires the string not to break the code - which it does (so far). Thanks for any ideas on this.

Sam

2 Answers


$ find . -name "desktop.ini" | sed -re "s/^/rm '/" -e "s/$/'/"

... creates the commands to execute, so - after checking that output for correctness - adding

... | bash

at the end of it should then make it happen.
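Put together, the whole thing might look like this (a sketch of the same pipeline; note it assumes none of the paths contain a single quote, which would break the generated quoting):

find . -name "desktop.ini" | sed -re "s/^/rm '/" -e "s/$/'/" | bash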


$ sed -re "s/^/rm '/" -e "s/$/'/" < ~/desktop-ini.txt
rm './Billy Joel/Greatest Hits, Vols. 1 & 2 (1973-1985), Disc 1/desktop.ini'
rm './Billy Joel/Greatest Hits, Vols. 1 & 2 (1973-1985), Disc 2/desktop.ini'
rm './Billy Joel/desktop.ini'
Hannu

From your examples, it seems that you're trying to make 'find' generate a script that'll run something later, as opposed to just running something directly - even though I suspect running it directly is what you really wanted.

If you read the file names into shell variables, you can use ${...@Q} with recent Bash versions (4.4+) to ask it to quote a value in a way that it'd be able to dequote later. It'll handle any special character that might cause issues.

find . -name "desktop.ini" | while IFS="" read -r path; do    # or: cat ~/desktop-ini.txt | while ...
    echo "rm ${path@Q}"
done

For older versions:

...; do
    printf 'rm %q\n' "$path"
done

But all of that is only needed if you specifically want a two-step process (i.e. generate a shell script first, run the shell script second). If all you want is to immediately run something for each line – you don't need to handle special characters at all, as long as the variable expansion itself is kept inside quotes.

find . -name "desktop.ini" | while IFS="" read -r path; do    # or: cat ~/desktop-ini.txt | while ...
    rm -i "$path"
done

(This works because a variable expansion is not a "macro" expansion – the interpreter recognizes 'quotes surrounding the variable' as a distinct thing from 'quotes that are part of the expanded value'. So even if $path contains a " or $ itself, that won't affect the result.)
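As a tiny illustration of that point (the path here is just a made-up example):

path='./Some "Band" & $tuff/desktop.ini'
ls -l "$path"    # ls receives the whole path as one literal argument; the ", & and $ inside it are not re-interpreted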

A more conventional approach would be to have xargs run the command. Its default mode also expects quoted input, but in your case it's enough to specify \n as the separator:

find . -name "desktop.ini" | xargs -d '\n' rm -i
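Or, reading from the list you already saved (keeping in mind that -d is a GNU xargs extension):

xargs -d '\n' rm -i < ~/desktop-ini.txt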

Finally, GNU find can also run commands by itself:

find ... -exec rm -i {} \;
find ... -ok rm {} \;

I thought of a few things, like counting the '/' characters, finding the last one, creating a new string, 'cd'-ing to that directory,

That's useless; the OS already does that for you when you try to rm or ls the path.

grawity