
In our Yocto-based embedded application, we now ship several Go binaries, which can become very large. For example, docker (from meta-virtualization) and its related binaries weigh in at several hundred megabytes uncompressed. We therefore created recipes that compress those binaries with UPX. As an example, here is our docker-ce_git.bbappend:

do_install_append() {
  /usr/bin/upx --brute ${D}/${bindir}/docker
  /usr/bin/upx --brute ${D}/${bindir}/dockerd
  /usr/bin/upx --brute ${D}/${bindir}/docker-proxy
}

This leads to the following error during the bake process:

ERROR: docker-ce-19.03.2-ce+git6a30dfca03664a0b6bf0646a7d389ee7d0318e6e-r0 do_package: QA Issue: File '/usr/bin/docker' from docker-ce was already stripped, this will prevent future debugging! [already-stripped]
ERROR: docker-ce-19.03.2-ce+git6a30dfca03664a0b6bf0646a7d389ee7d0318e6e-r0 do_package: QA Issue: File '/usr/bin/docker-proxy' from docker-ce was already stripped, this will prevent future debugging! [already-stripped]
ERROR: docker-ce-19.03.2-ce+git6a30dfca03664a0b6bf0646a7d389ee7d0318e6e-r0 do_package: QA Issue: File '/usr/bin/dockerd' from docker-ce was already stripped, this will prevent future debugging! [already-stripped]

OK, this makes sense. If we wanted to create a debug build, we wouldn't want to strip those binaries.

But how do we do conditional stripping correctly? I assume there is some kind of stripping stage that only runs for non-debug builds, which we could hook into with something like do_strip_append(), but so far I have come up empty searching the documentation.

2 Answers


I have found a somewhat hacky solution to this. Instead of attaching to the package recipe, you can attach to image generation. Since we use core-image-minimal as our target, I created a core-image-minimal.bbappend with the following content:

ROOTFS_POSTPROCESS_COMMAND += "compress_docker_binaries;"

# Need to run in bash because pysh has no job control
compress_docker_binaries() {
  # Inside the bash script, $0 = IMAGE_ROOTFS and $1 = bindir
  # (passed as positional arguments after the script below)
  /bin/bash -x -c '
    set -e
    files=(containerd containerd-shim containerd-ctr docker dockerd \
      docker-proxy runc)
    # Start one upx job per binary in the background and remember its PID
    for (( i=0; i<${#files[@]}; i++ )); do
      /usr/bin/upx --brute "$0/$1/${files[${i}]}" &
      pids[${i}]=$!
    done
    # Wait for all compression jobs to finish
    for pid in ${pids[@]}; do
      wait ${pid}
    done
  ' "${IMAGE_ROOTFS}" "${bindir}"
}

By attaching compress_docker_binaries to ROOTFS_POSTPROCESS_COMMAND, we make sure the files are compressed after any debug symbols have been extracted.

There are some downsides to this solution:

  • You don't attach to the actual package recipe, but to the much more general image recipe.
  • You could still keep your .bbappend files in separate directories with separate functions (see the sketch after this list), but Yocto will execute them sequentially, even though in this particular case you could run upx on the binaries in parallel.
  • Alternatively, you can force the jobs to run in parallel, as I did with the little bash hack above (pysh has no job control), but then everything has to live in one file, and it effectively hardcodes the number of jobs running in parallel.
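For illustration, here is a minimal sketch of the split-up variant, with hypothetical function names, assuming two layers each carrying their own core-image-minimal.bbappend; each one appends its own function to ROOTFS_POSTPROCESS_COMMAND and BitBake simply runs them one after the other:

# Layer A: core-image-minimal.bbappend (illustrative)
ROOTFS_POSTPROCESS_COMMAND += "compress_docker_binaries;"
compress_docker_binaries() {
  # upx accepts multiple files; these run sequentially within one function
  /usr/bin/upx --brute ${IMAGE_ROOTFS}${bindir}/docker \
    ${IMAGE_ROOTFS}${bindir}/dockerd ${IMAGE_ROOTFS}${bindir}/docker-proxy
}

# Layer B: core-image-minimal.bbappend (illustrative)
ROOTFS_POSTPROCESS_COMMAND += "compress_containerd_binaries;"
compress_containerd_binaries() {
  /usr/bin/upx --brute ${IMAGE_ROOTFS}${bindir}/containerd \
    ${IMAGE_ROOTFS}${bindir}/containerd-shim ${IMAGE_ROOTFS}${bindir}/containerd-ctr
}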

Late to the party here, but you can just add the following to your recipe or .bbappend:

INSANE_SKIP_${PN} += "already-stripped"

See INSANE_SKIP in the Yocto reference manual for more info.
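For the docker-ce_git.bbappend from the question, this could look like the following sketch, keeping the upx calls from the original append:

# Skip the already-stripped QA check for this recipe's packages;
# the upx-compressed binaries trigger it (see the errors in the question).
INSANE_SKIP_${PN} += "already-stripped"

do_install_append() {
  /usr/bin/upx --brute ${D}/${bindir}/docker
  /usr/bin/upx --brute ${D}/${bindir}/dockerd
  /usr/bin/upx --brute ${D}/${bindir}/docker-proxy
}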

Ben Doerr