If the program you used did not rewrite the URLs for you automatically, you essentially have two options...
The first option is to run a local server that mirrors the content you downloaded. You would set up a web server such as Apache or Nginx (or another of your choosing) on your local machine and create a virtual host for each outdated domain, with the downloaded files as its document root. You would also need some form of DNS resolution to map the old domain to your local web server; for a single machine this is usually as simple as an entry in your hosts file, though a full local DNS setup with BIND or similar software would also work.
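As a rough sketch of that first option (the domain name, file path, and config location here are placeholders, not anything from your setup), the hosts entry and a minimal Nginx server block might look something like:

```
# /etc/hosts — point the dead domain at your own machine
127.0.0.1    old-domain.example

# /etc/nginx/conf.d/old-domain.conf — serve the mirrored files
server {
    listen 80;
    server_name old-domain.example;
    root /var/www/old-domain;   # wherever you saved the download
    index index.html;
}
```

With this in place, requests your browser makes to http://old-domain.example resolve to 127.0.0.1 and are served from your local copy, so absolute links to the old domain keep working without any rewriting.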
If the above solution is undesirable, you are likely left with rewriting the URLs yourself. You can of course do this by hand, but depending on the size of the project you may want to look at Python and a module called Beautiful Soup. Beautiful Soup is made for parsing HTML and can be used to rewrite links, assuming you write a script to do so.
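To give a feel for that second option, here is a minimal sketch of such a script (the domain, the sample HTML, and the rewriting rule are all made-up placeholders; a real script would loop over your downloaded files and probably need a smarter mapping from old URLs to local paths):

```python
from bs4 import BeautifulSoup

# Hypothetical dead domain whose absolute links we want to localize.
OLD_PREFIX = "http://old-domain.example/"

html = '<a href="http://old-domain.example/page.html">Page</a>'
soup = BeautifulSoup(html, "html.parser")

# Rewrite every anchor that points at the old domain into a
# relative path, so it resolves against the local mirror instead.
for tag in soup.find_all("a", href=True):
    if tag["href"].startswith(OLD_PREFIX):
        tag["href"] = tag["href"].replace(OLD_PREFIX, "", 1)

result = str(soup)
print(result)
```

The same loop can be repeated for other URL-bearing attributes (img src, link href, script src, and so on) if your pages use them.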
As a small caveat, whichever method you choose, you will likely still need to inspect the HTML source to determine which links need attention and which don't. It is also worth remembering that links to external sites may not work either way, unless archive.org happened to capture that content as well.