At the time of writing, the timepoint data is available in the v1beta1 release of Google Cloud Text-to-Speech.
I didn't need to sign up for any extra developer program to access the beta; the default access was enough.
Importing in Python (for example) went from:
from google.cloud import texttospeech as tts
to:
from google.cloud import texttospeech_v1beta1 as tts
Nice and simple.
I needed to modify the way I was sending the synthesis request to include the enable_time_pointing flag.
I worked that out with a mix of poking around the machine-readable API description here and reading the Python library code, which I had already downloaded.
Thankfully, the source for the generally available library also includes the v1beta1 version - thank you Google!
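The change boils down to one extra argument on the request. Here's a minimal sketch pulled out of the full sample below (client, voice, audio_config and ssml assumed to be already built in the usual way):
request = tts.SynthesizeSpeechRequest(
    input=tts.SynthesisInput(ssml=ssml),
    voice=voice,
    audio_config=audio_config,
    # Ask the API to report a timepoint for every <mark/> in the SSML input.
    enable_time_pointing=[tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK],
)
response = client.synthesize_speech(request=request)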
I've put a runnable sample below. Running it needs the same auth and setup as any other Text-to-Speech sample, which you can get by following the official documentation.
Here's what it does for me (with slight formatting for readability):
$ python tools/try-marks.py
Marks content written to file: .../demo.json
Audio content written to file: .../demo.mp3
$ cat demo.json
[
  {"sec": 0.4300000071525574, "name": "here"},
  {"sec": 0.9234582781791687, "name": "there"}
]
Here's the sample:
import json
from pathlib import Path

from google.cloud import texttospeech_v1beta1 as tts


def go_ssml(basename: Path, ssml):
    client = tts.TextToSpeechClient()
    voice = tts.VoiceSelectionParams(
        language_code="en-AU",
        name="en-AU-Wavenet-B",
        ssml_gender=tts.SsmlVoiceGender.MALE,
    )
    response = client.synthesize_speech(
        request=tts.SynthesizeSpeechRequest(
            input=tts.SynthesisInput(ssml=ssml),
            voice=voice,
            audio_config=tts.AudioConfig(audio_encoding=tts.AudioEncoding.MP3),
            # Return a timepoint for each <mark/> element in the SSML.
            enable_time_pointing=[
                tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK]
        )
    )
    # cheesy conversion of array of Timepoint proto.Message objects into plain-old data
    marks = [dict(sec=t.time_seconds, name=t.mark_name)
             for t in response.timepoints]
    name = basename.with_suffix('.json')
    with name.open('w') as out:
        json.dump(marks, out)
        print(f'Marks content written to file: {name}')
    name = basename.with_suffix('.mp3')
    with name.open('wb') as out:
        out.write(response.audio_content)
        print(f'Audio content written to file: {name}')


go_ssml(Path.cwd() / 'demo', """
    <speak>
    Go from <mark name="here"/> here, to <mark name="there"/> there!
    </speak>
    """)