The GitLab CI cache doesn't work quite like that. If you have a job that installs npm dependencies, for example, you can cache the resulting node_modules directory so npm install doesn't have to start from scratch on the next run. But cached paths have to live inside your project directory, so the cache won't help with things like installing system packages.
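For illustration, a minimal cache setup for that npm case might look like the job below (the job name and stage are placeholders; cache:key:files keys the cache off your lockfile so it's rebuilt whenever dependencies change):

install dependencies:
  stage: build
  image: node:latest
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
  script:
    - npm ci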
Regarding the docker:dind service: you won't be able to run commands like docker build ... or docker push ... without that service, even if you switch the image your job uses to docker:latest. It's a bit counterintuitive, but the docker image only provides the Docker client; the docker-in-docker service is what supplies the daemon the client connects to, so it's the only way to run those commands.
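As a bare-bones sketch of that pairing (the job name is arbitrary, and a TLS caveat noted after the full pipeline example below may apply):

docker smoke test:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    # confirms the client can actually reach the dind daemon
    - docker info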
However, you're not out of luck. I would recommend moving the installation in your before_script into your own docker image that extends amazon/aws-cli. As long as you have access to Docker Hub, GitLab's included registry (available on gitlab.com; on self-managed instances an admin has to enable/configure it), Amazon's registry (ECR), or a privately run registry, you can create your own custom images and use them in GitLab CI pipelines.
Here's an example Dockerfile:
FROM amazon/aws-cli
# -y keeps the install non-interactive so the build doesn't stall on a prompt
RUN amazon-linux-extras install -y docker
That's all you need to extend the existing amazon/aws-cli image and bake your before_script installations into the image itself. Once the file is done, run:
docker build -t my_tag:latest /path/to/dockerfile-directory
After that, you'll need to log in to your registry, docker login my.registry.example.com, and push the image with docker push my_tag:latest (for any registry other than Docker Hub, the tag needs to include the registry host, e.g. my.registry.example.com/my_tag:latest). If you're not using GitLab's registry or the public Docker Hub, you'll also need to configure your jobs or your runners (either works) so they can authenticate with your registry. You can read about that here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
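For reference, one approach that page documents is to define a DOCKER_AUTH_CONFIG CI/CD variable holding a Docker config JSON; the hostname and credentials below are placeholders:

{
  "auths": {
    "my.registry.example.com": {
      "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
    }
  }
}

The auth value is just the base64 of username:password, e.g. printf "my_username:my_password" | base64.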
Next you just have to use it in your pipelines:
build and push docker image:
  stage: publish  
  variables:
    DOCKER_REGISTRY: amazon-registry
    AWS_DEFAULT_REGION: ap-south-1
    APP_NAME: sample-app
    DOCKER_HOST: tcp://docker:2375
  image: 
    name: my_tag:latest
    entrypoint: [""]
  services:
    - docker:dind
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:master .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:master
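One caveat, depending on your runner and dind versions: newer docker:dind images enable TLS by default, so the plain tcp://docker:2375 host above can be refused. If the job can't reach the daemon, the commonly documented workaround is to disable TLS explicitly:

  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""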
One other thing you can do to save pipeline time (if it applies) is to run this job only when your Dockerfile has changed. That way, if it hasn't changed, jobs that rely on the image can simply reuse the last one you pushed. You can do that with the rules keyword along with changes:
build and push docker image:
  stage: publish  
  variables:
    DOCKER_REGISTRY: amazon-registry
    AWS_DEFAULT_REGION: ap-south-1
    APP_NAME: sample-app
    DOCKER_HOST: tcp://docker:2375
  image: 
    name: my_tag:latest
    entrypoint: [""]
  services:
    - docker:dind
  when: never
  rules:
    - changes:
      - Dockerfile
      when: always
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:master .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:master
The root-level when: never sets the job's default to never run, but the rules section checks whether the Dockerfile has changed (changes accepts multiple paths if needed). If it has, the job will always run.
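For instance, if the image depends on more than just the Dockerfile, the same rule can watch several paths (the extra path here is made up for illustration):

  rules:
    - changes:
      - Dockerfile
      - docker/**/*
      when: always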
You can see details about the rules keyword here: https://docs.gitlab.com/ee/ci/yaml/#rules
You can see details about custom docker images for Gitlab CI here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html