I was reading about best practices for Jenkins pipelines. I have created a declarative pipeline that does not execute parallel jobs, and I want everything to run on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx"
        }
        success {
            xxx         
        }
    }
}
Now I read the best practices here. Point 4 says:
- Do: All Material Work Within a Node
 Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins distributed builds capability and run on an agent node.
I suspect this applies to scripted pipelines.
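For reference, my understanding is that in a scripted pipeline you allocate the executor yourself with a node block, roughly like this (a minimal sketch; the label and the steps are placeholders I made up):

node('xxx') {
    stage('stage1') {
        checkout scm               // material work, e.g. cloning from Git
        echo 'build steps here'    // placeholder for the real build
    }
    stage('stage2') {
        echo 'test steps here'     // placeholder for the real tests
    }
}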
Now my questions are:
Do I ever need to create a node block inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
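To make that concrete, I assume the per-stage variant would look something like this (a sketch; agent none and the stage labels are hypothetical examples, not my actual setup):

pipeline {
    agent none    // no global agent; every stage declares its own
    stages {
        stage('stage1') {
            agent { label 'linux' }      // hypothetical label
            steps {
                echo 'stage1 work'
            }
        }
        stage('stage2') {
            agent { label 'windows' }    // hypothetical label
            steps {
                echo 'stage2 work'
            }
        }
    }
}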
My current pipeline defines a label that matches 4 agents, yet the whole pipeline always executes on one agent (which is what I want). I would have expected stage1 to run on slaveX and maybe stage2 on slaveY. Why is this not happening?