Just noticed that this question of mine from a few years ago is still open. I wasn't able to find a suitable option beyond a generic crawler at the time, but several dedicated tools have since popped up on sites like GitHub. I haven't used any of them personally, but I'd like to document one here for anyone still searching for a way to do this.
An example is hartator/wayback-machine-downloader, which appears to be platform-agnostic (it's a Ruby gem). Its README describes how it works as follows:
> It will download the last version of every file present on Wayback Machine to ./websites/example.com/. It will also re-create a directory structure and auto-create index.html pages to work seamlessly with Apache and Nginx. All files downloaded are the original ones and not Wayback Machine rewritten versions. This way, URLs and links structure are the same as before.
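For anyone curious what that automation roughly amounts to, the approach boils down to querying the Wayback Machine's CDX API for the captures of a domain and then fetching each file through an `id_` snapshot URL, which returns the original bytes instead of the rewritten archive page. Below is a minimal Python sketch of that idea, not the gem's actual code; `DOMAIN`, the output directory, and the lack of throttling/retries are all simplifying assumptions.

```python
# Minimal sketch (not the gem itself) of the same approach, using the
# Wayback Machine CDX API. DOMAIN and OUT_DIR are placeholder assumptions.
import json
import os
import urllib.parse
import urllib.request

DOMAIN = "example.com"                      # assumed target site
OUT_DIR = os.path.join("websites", DOMAIN)  # mirrors the gem's default layout

# Ask the CDX API for every successfully archived URL under the domain,
# returning just the capture timestamp and the original URL.
cdx_query = "https://web.archive.org/cdx/search/cdx?" + urllib.parse.urlencode({
    "url": DOMAIN + "/*",
    "output": "json",
    "fl": "timestamp,original",
    "filter": "statuscode:200",
})

with urllib.request.urlopen(cdx_query) as resp:
    rows = json.load(resp)

# Keep only the most recent capture of each URL (timestamps sort lexically).
latest = {}
for timestamp, original in rows[1:]:  # rows[0] is the header row
    if timestamp > latest.get(original, ""):
        latest[original] = timestamp

for original, timestamp in latest.items():
    # The "id_" flag requests the original bytes, not the Wayback Machine
    # version with the injected toolbar and rewritten links.
    snapshot = f"https://web.archive.org/web/{timestamp}id_/{original}"
    path = urllib.parse.urlparse(original).path.lstrip("/")
    if not path or path.endswith("/"):
        path += "index.html"  # same index.html trick the gem describes
    dest = os.path.join(OUT_DIR, path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    try:
        urllib.request.urlretrieve(snapshot, dest)
        print("saved", dest)
    except OSError as exc:
        print("skipped", original, exc)
```

The actual gem is far more robust (concurrency, URL filtering, odd edge cases), so prefer it for anything serious; the sketch is just to show that the underlying idea is straightforward.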
Hope that helps anyone who runs into the same problem I did years ago. I'm going to mark this as solved, unless someone has a better answer.