If `extract_ds_job_url(get_ds_jobs(BASE_JOB_URL))` returns an empty set, you are forever calling `get_all_downstream_jobs_urls(temp)`. That's because the `for` loop is not going to do anything.
The test at the top should check for `None` instead:
if ds_jobs is None:
and a separate test for `ds_jobs` being empty should end the recursion:
if not ds_jobs:
    # no downstream jobs to process
    return set()
I can't vouch for the rest of the logic, but there are certainly quite a few style issues in the code. I'd refactor it to at least get rid of some of those; a sketch of what that could look like follows the list:
- `JENKINS_JOBS` is never rebound, so `global JENKINS_JOBS` is redundant and confusing and should be removed.
- It's not clear why the function both updates a global and returns the result set. It should do one or the other, not both.
- `_` is, by convention, a throw-away variable name; it signals that the value is not going to be used. Yet here the code does use it, so it should be renamed to `job_url` instead.
- You really should never use `;` in production code. Put the statements on separate lines.
- `ds_jobs = set()` followed by `ds_jobs.update(...)` is a way too verbose spelling of `ds_jobs = set(...)`.
- `temp` is not a good variable name; `updated` might be a better one. It should be made a copy at assignment time, so `updated = set(ds_jobs)`, and the `.copy()` call can then be removed from the `for` loop.
- The `return` when the first job URL doesn't have downstream URLs is probably not what you want either.
- If you really want a tree of downstream URLs, the recursive call should not pass in all the job URLs collected so far! It's just as likely to call the Jenkins API again and again for a job URL that was already checked.
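Since your original function isn't reproduced here, the signature and helper names below are assumptions, but applying the points above, a cleaned-up recursive version could look roughly like this (the non-recursive version further down is still the better option):

def get_all_downstream_jobs_urls(job_url=BASE_JOB_URL, seen=None):
    # 'seen' replaces the JENKINS_JOBS global: it records job URLs whose
    # downstream jobs have already been fetched from the Jenkins API
    if seen is None:
        seen = set()
    ds_jobs = extract_ds_job_url(get_ds_jobs(job_url))
    if not ds_jobs:
        # no downstream jobs to process, end the recursion
        return set()
    collected = set(ds_jobs)
    for ds_url in ds_jobs:
        if ds_url in seen:
            # this job was already queried, don't hit the API again
            continue
        seen.add(ds_url)
        collected.update(get_all_downstream_jobs_urls(ds_url, seen))
    return collected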
The following code removes the recursion by using a stack instead, and is guaranteed to call the Jenkins API for each job URL just once:
def get_all_downstream_jobs_urls():
    ds_jobs = set()
    # seed the stack with the jobs directly downstream of the base job;
    # extract_ds_job_url() returns a set, so turn it into a list of URLs
    stack = list(extract_ds_job_url(get_ds_jobs(BASE_JOB_URL)))
    while stack:
        job_url = stack.pop()
        if job_url in ds_jobs:
            # already seen before, skip
            continue
        ds_jobs.add(job_url)
        # add downstream jobs to the stack for further processing
        stack.extend(extract_ds_job_url(get_ds_jobs(job_url)))
    return ds_jobs
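The helpers `get_ds_jobs()` and `extract_ds_job_url()` come from your code and aren't shown here; for reference, implementations along these lines would fit the way they are used above, assuming the standard Jenkins JSON API where a job's `api/json` document contains a `downstreamProjects` list (names and the base URL below are placeholders):

import requests

BASE_JOB_URL = 'https://jenkins.example.com/job/my-root-job/'  # placeholder, already defined in your code

def get_ds_jobs(job_url):
    # fetch the job's JSON description from the Jenkins REST API;
    # pass auth=(user, api_token) to requests.get() if your server needs it
    response = requests.get(job_url.rstrip('/') + '/api/json')
    response.raise_for_status()
    return response.json()

def extract_ds_job_url(job_data):
    # pull the URLs of the directly-downstream jobs out of the JSON
    return {project['url'] for project in job_data.get('downstreamProjects', [])}

all_jobs = get_all_downstream_jobs_urls()
print(f'{len(all_jobs)} downstream jobs found')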
Last but not least, I strongly suspect that using a third-party library like the jenkinsapi package would make this all even simpler; the Jenkins API may well let you query this information in a single call, and a library will typically make such calls for you and give you readily parsed Python objects to work with.
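As a sketch of that idea (untested, and relying on my recollection that jenkinsapi's `Job` objects expose a `get_downstream_jobs()` method returning the directly-downstream jobs; the function and argument names are made up), the whole traversal could shrink to something like:

from jenkinsapi.jenkins import Jenkins

def all_downstream_job_names(base_url, root_job_name):
    # pass username=/password= to Jenkins() if the server needs authentication
    server = Jenkins(base_url)
    seen = set()
    stack = [server.get_job(root_job_name)]
    while stack:
        job = stack.pop()
        if job.name in seen:
            # already visited, skip
            continue
        seen.add(job.name)
        # push the directly-downstream jobs for further processing
        stack.extend(job.get_downstream_jobs())
    return seen  # note: this includes the root job's own name

print(all_downstream_job_names('https://jenkins.example.com', 'my-root-job'))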