The typical (but not the only possible) situation is that your OS doesn't have direct access to the NAND that forms an SSD - it sits behind a controller, and possibly also behind an HCI (host controller interface) that was designed for classic platter hard drives (SATA), or a bus protocol like USB or SDIO/eMMC.
Both SATA and NVMe address storage using LBAs (logical block addresses). Your OS can ask the device to read or write one or more LBAs - an LBA contains 512 or 4096 bytes.
This didn't change with SSDs, except that one additional command was added - TRIM - which lets the OS tell the device it's no longer using certain LBAs. That is really the only new thing exposed to the OS at the I/O level with SSDs. NVMe has additional features, but they are all related to splitting the device up (namespaces) or processing requests faster (queues, etc.).
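To make the LBA model concrete, here's a minimal sketch of reading a single LBA straight from a block device on Linux. The device node /dev/sdX, the LBA number, and the 512-byte sector size are placeholders - check the real logical sector size with `blockdev --getss`:

```c
/* Read one LBA from a raw block device.
 * Sketch only: assumes Linux, a 512-byte logical sector size,
 * and a readable /dev/sdX (hypothetical node, needs root). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const uint64_t lba = 2048;          /* example LBA, purely illustrative */
    const uint64_t sector_size = 512;   /* check with `blockdev --getss` */
    uint8_t buf[512];

    int fd = open("/dev/sdX", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* The kernel turns this pread into a READ command for the LBA
     * at byte offset lba * sector_size. */
    if (pread(fd, buf, sizeof buf, (off_t)(lba * sector_size)) != (ssize_t)sizeof buf) {
        perror("pread");
        close(fd);
        return 1;
    }

    printf("first byte of LBA %llu: 0x%02x\n", (unsigned long long)lba, buf[0]);
    close(fd);
    return 0;
}
```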
Your OS is responsible for taking file-based requests - e.g. servicing APIs such as the open/read/write/close syscalls - and converting them, through the appropriate kernel-level code, into chains of storage commands. All modern OSes will also issue TRIM commands when files are deleted, if the drive reports that it supports them.
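For illustration, this is the kind of ordinary file I/O the kernel translates into block-layer requests. The filename is made up; whether the delete actually leads to a TRIM depends on the filesystem being mounted with the discard option, or a later fstrim run:

```c
/* Plain file I/O that ends up as WRITE (and possibly TRIM) commands.
 * Sketch only; "example.tmp" is a made-up filename. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello, SSD\n";

    int fd = open("example.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, msg, strlen(msg)) < 0)  /* becomes WRITE commands to some LBAs */
        perror("write");
    fsync(fd);                            /* pushes the data (and a flush) to the device */
    close(fd);

    /* Deleting the file frees its blocks; a filesystem mounted with
     * "discard" (or a later fstrim run) will TRIM those LBAs. */
    unlink("example.tmp");
    return 0;
}
```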
I then wondered, what happens on the physical storage medium when I remove and recreate or just open and truncate the same two files repeatedly? Could parts of the operation I have implemented circumvent wear leveling, accelerating the wear of the storage medium, specifically if it is an SSD?
You can't be 100% sure unless you have some knowledge of the SSD firmware and exactly how it responds to commands.
But understand that the following typically applies for NAND-based SSDs:
- I/O speed is fastest when data is written to erased flash.
- NAND is broken up into eraseblocks, and each eraseblock contains pages. Pages can be written individually, but erasing can only be done on a whole eraseblock.
- Erasing is slow.
So one task SSD firmware has to do is try to keep as many "erased" eraseblocks around as possible. Also, pages and eraseblocks are often not 512 or 4096 bytes - 512 being the LBA size that most OSes have expected/assumed for decades. So SSDs keep a mapping table - LBA to "Physical Block Address", or PBA.
It's very likely that writing to any LBA actually causes a write to a new or different PBA - the firmware prefers to write to a freshly erased page unless it's running low on them - after which the SSD updates its LBA-to-PBA table. When too many LBAs have been written and the SSD has trouble keeping erased eraseblocks available, that's when your SSD starts to get slow.
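As a rough mental model only - this is not anything your OS can see, and not any particular vendor's firmware - the write-causes-remap behaviour can be sketched like this:

```c
/* Toy model of a flash translation layer (FTL) remapping an LBA on write.
 * Purely conceptual: real firmware is far more complex, and none of this
 * is visible to the OS. */
#include <stdint.h>
#include <stdio.h>

#define NUM_LBAS 16
#define UNMAPPED UINT32_MAX

static uint32_t lba_to_pba[NUM_LBAS];  /* the LBA-to-PBA mapping table */
static uint32_t next_free_page;        /* next page in a freshly erased eraseblock */

static void ftl_init(void)
{
    for (int i = 0; i < NUM_LBAS; i++)
        lba_to_pba[i] = UNMAPPED;
    next_free_page = 0;
}

/* "Writing" an LBA: take a freshly erased page and record the new mapping.
 * The previously mapped page (if any) becomes stale; garbage collection
 * erases its eraseblock later, once enough of its pages are stale. */
static uint32_t ftl_write(uint32_t lba)
{
    uint32_t new_pba = next_free_page++;
    lba_to_pba[lba] = new_pba;
    return new_pba;
}

int main(void)
{
    ftl_init();
    printf("write LBA 5 -> PBA %u\n", (unsigned)ftl_write(5));
    printf("write LBA 5 -> PBA %u\n", (unsigned)ftl_write(5));  /* same LBA, different PBA */
    return 0;
}
```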
When you delete files and your OS issues TRIM commands, the SSD can unmap PBAs from its LBA-to-PBA table and scour its NAND for eraseblocks it can consolidate and erase, readying them for new data.
Moral of the story: SSDs do a lot of work internally, and none of it is exposed to your OS - the SSD firmware handles it all. You can't do anything about any of it, except make sure your OS is issuing TRIM commands, or issue them manually if your OS supports that (Linux has blkdiscard).
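For example, on Linux this is roughly what `fstrim` does for a mounted filesystem, via the FITRIM ioctl (blkdiscard does the raw-device equivalent with BLKDISCARD). The mountpoint /mnt is a placeholder, and this needs root:

```c
/* Ask the filesystem to TRIM all of its free space.
 * Sketch only: "/mnt" is a placeholder mountpoint. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt", O_RDONLY);  /* the mountpoint directory works */
    if (fd < 0) { perror("open"); return 1; }

    struct fstrim_range range = {
        .start  = 0,
        .len    = (__u64)-1,  /* whole filesystem */
        .minlen = 0,
    };

    /* The filesystem walks its free-space map and issues discard (TRIM)
     * commands for unused LBA ranges. */
    if (ioctl(fd, FITRIM, &range) < 0) {
        perror("FITRIM");
        close(fd);
        return 1;
    }

    printf("trimmed %llu bytes\n", (unsigned long long)range.len);
    close(fd);
    return 0;
}
```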
So ANY write to an SSD will affect its wear leveling, but how, or how much, is not easily predictable or connected to what you are writing, other than the amount of data you write.
Most OS file APIs - such as truncate or open - will affect the directory and other metadata stored on disk as well as the file data. So really, anything you do there is going to cause writes to an SSD.
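A small sketch of that point: both of these common ways to empty a file cause on-disk metadata writes, even though neither writes any file data (the filename is made up):

```c
/* Two ways to empty a file; both update on-disk metadata (inode size,
 * timestamps, and for delete+recreate the directory entry as well).
 * "scratch.dat" is a made-up name. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Variant 1: truncate in place; rewrites the inode (size, mtime). */
    int fd = open("scratch.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* Variant 2: delete and recreate; also rewrites the directory entry. */
    unlink("scratch.dat");
    fd = open("scratch.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    /* Either way the filesystem journals/flushes metadata, so the SSD
     * still sees writes even though no file data was written. */
    return 0;
}
```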