To complete my comments above (sorry for the formal inconveniences/inconsistencies): I would say that it was worth it, even though I don't quite understand why. The second attempt, recovering to an Ext4 partition, had a significantly higher copying rate at the beginning (about 90 MB/s on average, whereas I only got about 50 MB/s at best during the first attempt, recovering to an NTFS partition), and no errors or even slowdowns. But then, after copying about 165 GB (so earlier than before), it became very unstable, slowed down to a crawl, and made clicking and whirring noises again (it was a very hot period, which didn't help – I tried to cool the drive down as much as possible, with a laptop cooling pad below and a freezer pack on top, changed every hour or so); I tried again and again (sometimes it got back up to 120 MB/s for a few seconds, then back to 0), but I had to give up after a while.
Here is a ddrescueview map of the first recovery:

There's an interesting pattern, with stripes of easily recovered data alternating with very slow or unreadable data. [From what I know, this would seem to indicate that a head came into contact with a platter, damaging the surface and releasing magnetic dust, which then spread under centrifugal force. And since the servo track (which contains essential information for the startup process) is located at the outer edge of the hard drive (it's a 3.5" Hitachi 1 TB), some of that dust may have reached it, making it difficult to access, which could explain the frequent clicking noises at startup.] (Correct me if I'm wrong.) => [EDIT 20200501] That was wrong: this pattern typically indicates that one of the drive's heads has completely failed and is no longer reading anything; the data on the platters may still be readable at that point, but it would require replacing the head stack assembly, which only a specialized data recovery lab can safely perform.
Here is a ddrescueview map of the second recovery:

So the hard drive became very unstable and the recovery increasingly difficult after about 165 GB, but before that point the copying rate was consistently high, with no skipped areas. For the last attempts I used the ddru_ntfsbitmap method, so the unallocated space was mostly skipped.
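For reference, a minimal sketch of that method (device and file names are hypothetical; ddru_ntfsbitmap is part of the ddrutility package): as I understand it, it rescues the NTFS $Bitmap from the failing partition and builds a domain logfile from it, which then restricts ddrescue to the allocated areas only.
ddru_ntfsbitmap /dev/sdb1 ntfsbitmap_domain.log
ddrescue -m ntfsbitmap_domain.log /dev/sdb1 image.dd image.log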
Here is a ddrescueview map of the log file created with ddru_ntfsbitmap, showing the areas of the hard drive containing actual data in green, and free space in grey:

Luckily, most of the actual data was located in the first quarter and was successfully recovered.
Now I have yet to combine the good parts of those two images and extract the actual files, probably with R-Studio (the best data recovery software I have tried).
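As a side note, if the filesystem metadata was recovered intact, the image could presumably also be mounted read-only on Linux to pull the files out directly (a sketch, assuming the image contains the whole drive with its partition table; the loop device name may differ):
sudo losetup --find --show --partscan --read-only image.dd   # e.g. /dev/loop0, with partitions as /dev/loop0p1...
sudo mount -o ro /dev/loop0p1 /mnt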
Here is something interesting and peculiar that I found out later, regarding my initial question (I guess I should have put this in a comment, as per the formal rules, but it would have been too long and I couldn't have provided screenshots).
I attempted to copy the rescued areas of image 2 (on the Ext4 partition) which were missing in image 1 over to image 1 (on the NTFS partition){1}. This should have proceeded at a very high rate (input and output both being on a healthy 2 TB HDD), yet I got an average speed of only 660 KB/s, very close to the speed of the initial recovery at the later stage, when I got concerned enough to ask this question in the first place...
Command used (log file for image 2 used as the domain logfile):
ddrescue -m [image2.log] [image2] [image1] [image1.log]
Screenshot:

So I stopped and did the opposite: I copied the rescued areas of image 1 (NTFS) which were missing in image 2 (Ext4) over to image 2, and this time the copying rate averaged about 43,000 KB/s, or 43 MB/s (maybe slightly slower than expected for a copy within the same HDD – a Seagate 2 TB with a maximum write speed close to 200 MB/s should manage about 100 MB/s for a partition-to-partition copy – but still almost 100× faster than the other direction). What could explain such a tremendous discrepancy?
Command used (log file for image 1 used as the domain logfile):
ddrescue -m [image1.log] [image1] [image2] [image2.log]
Screenshot:

I noticed that the image files on both partitions had a “size on disk”{2} corresponding to the amount of data actually written, far below the total size (1 TB, or 931.5 GB), even though I didn't use the -S switch (“use sparse writes for output files”). Image 2 (after being completed with the extra rescued areas from image 1) has a “size on disk” of 308.5 GB, while image 1 has a “size on disk” of 259.8 GB. Could the slow copy rate be related to this, if the Linux NTFS driver somehow has trouble dealing with sparse writes? And how come the whole size was not allocated as soon as sectors at the end were written, considering that I did not use that -S switch?
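For anyone wanting to check this on Linux, a quick way to compare a file's apparent size with the space actually allocated (file name hypothetical):
du -h --apparent-size image1.dd   # logical size of the file
du -h image1.dd                   # blocks actually allocated on disk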
I tried to use the -p switch (“preallocate”) at the very beginning of the process, thinking that it would be “cleaner”, more straightforward, and easier to deal with in case something went wrong (if the recovery itself needed to be recovered...), but I had to stop, as it was far too slow and I wanted to get started ASAP (apparently it actually writes empty data instead of simply allocating the required sectors). Then I figured that by using the -R switch (“reverse”) temporarily, ddrescue would write the very last sectors to the output file first, thus allocating the full size as I intended; it did increase the size of the output file to 931.5 GB, but the “size on disk” remained in fact much lower (I noticed this later, when accessing the HDD used for that recovery on Windows and seeing the abnormally large amount of free space).
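On filesystems with proper allocation support (Ext4 has it; NTFS through ntfs-3g apparently doesn't), the full size could presumably be reserved almost instantly before starting, instead of using -p or the -R trick (a sketch; device and file names hypothetical):
blockdev --getsize64 /dev/sdb        # exact size of the failing drive, in bytes
fallocate -l 1000204886016 image.dd  # reserve that many bytes without writing zeroes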
________________
{1} I still don't understand how the second recovery attempt could produce a much better outcome for the first 100 GB or so, despite the fact that the health of the HDD had declined in the meantime. [EDIT 20200501] => It is possibly because of the -a 500000 option used the first time, which skipped areas where the reading speed was below the 500 KB/s threshold. Without that option, the second time around, it read those slower areas right away. In fact those slower areas were associated with a weak head, so it's still puzzling that this failing head managed to get as much data the second time around, although it had already shown signs of malfunction. I'm still learning...
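(For the record, that first pass presumably looked something like this, with hypothetical names, -a / --min-read-rate being the option that makes ddrescue skip ahead when the read rate drops below the given threshold:
ddrescue -a 500000 /dev/sdb image1.dd image1.log
)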
{2} By the way, the word “disk” should be replaced, on Windows and Linux systems alike, as there are data storage devices which are not “disks”...