I'm trying to replicate the AWS Lambda solution from the following tutorial using the Go AWS SDK. My process is a bit more complicated, though, because I want to be able to do the following:
- Convert the video to h265
- Transpose it to m3u8 (HLS)
- Upload the h265 video directly from a pipe (without a local copy)
- Upload the m3u8 and ts files (apparently impossible to do without a local copy, not sure though)
I came up with the following:
func upload(video_name string) {
    // Re-encode the source .mp4 to h265 and write a fragmented mp4 to stdout.
    var compress_args = []string{
        "-loglevel", "fatal",
        "-i", video_name + ".mp4",
        "-vcodec", "libx265",
        "-x265-params", "log-level=fatal",
        "-crf", "28",
        "-f", "mp4",
        "-movflags", "frag_keyframe+empty_moov",
        "-", // pipes output to stdout
    }
    // Read the compressed stream from stdin and segment it into an HLS playlist.
    var transpose_args = []string{
        "-loglevel", "fatal",
        "-i", "-", // Read input from stdin
        "-vcodec", "libx265",
        "-x265-params", "log-level=fatal",
        "-crf", "28",
        "-start_number", "0",
        "-hls_time", "1",
        "-hls_list_size", "0",
        "-f", "hls", "tmp/" + video_name + ".m3u8",
    }
    cmd := exec.Command("ffmpeg", compress_args...)
    cmd2 := exec.Command("tee", "new_" + video_name + ".mp4") // how to improve that ?
    cmd3 := exec.Command("ffmpeg", transpose_args...)
    cmd.Stderr = os.Stderr
    cmd2.Stderr = os.Stderr
    cmd3.Stderr = os.Stderr
    cmd2.Stdin, _ = cmd.StdoutPipe()
    cmd3.Stdin, _ = cmd2.StdoutPipe()
    cmd3.Stdout = os.Stdout
    cmd.Start()
    cmd2.Start()
    cmd3.Start()
    cmd.Wait()
    cmd2.Wait()
    cmd3.Wait()
}
For now, this pipes the compression, tee, and transpose commands together. The tee writes a local copy of the compressed .mp4 and relays the stream to the transpose command. Now, I would like to pass tee's output to the Body parameter of s3.PutObjectInput.
Should I use a named pipe, let tee write to it, and then somehow pass that to PutObjectInput? Is that even possible? What other possibilities are there?
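For reference, here is the kind of alternative I have in mind (a rough, untested sketch): drop the external tee and split ffmpeg's stdout inside Go with io.TeeReader, so one branch is written to the local .mp4 copy while the uploader reads the other branch as the Body. The region and bucket values are placeholders, imports (io, os, os/exec, aws, session, s3manager) are omitted as in the snippets above, and the second ffmpeg (HLS) step is left out; it could be fed by writing to its stdin through an io.MultiWriter alongside the local file.
func uploadCompressed(videoName string, compressArgs []string) error {
    cmd := exec.Command("ffmpeg", compressArgs...)
    cmd.Stderr = os.Stderr

    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return err
    }

    // Local copy of the compressed mp4 (what tee was producing).
    local, err := os.Create("new_" + videoName + ".mp4")
    if err != nil {
        return err
    }
    defer local.Close()

    // Everything the uploader reads from body is also written to the local file.
    body := io.TeeReader(stdout, local)

    if err := cmd.Start(); err != nil {
        return err
    }

    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("eu-west-1"), // placeholder region
    }))
    uploader := s3manager.NewUploader(sess)

    // Upload reads body until EOF, which drains ffmpeg's stdout as it encodes.
    _, err = uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("my-bucket"), // placeholder bucket
        Key:    aws.String(videoName + ".mp4"),
        Body:   body,
    })
    if err != nil {
        return err
    }
    return cmd.Wait()
}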
Edit
I made the compressed file upload work by using a named pipe and by not waiting for the commands to finish (my original idea). Here is the new code:
func upload(video_name string) {
    // ARGS
    syscall.Mkfifo("pipe", 0644)
    // CMDS and STDOUT/STDIN redirections 
    named_pipe, _ := os.OpenFile("pipe", os.O_RDONLY|syscall.O_NONBLOCK, 0600)
    defer named_pipe.Close()
    syscall.SetNonblock(int(named_pipe.Fd()), false)
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String(region),
    }))
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024 // 5MB
    })
    compress_cmd.Start()
    tee_cmd.Start()
    transpose_cmd.Start()
    _, err := uploader.UploadWithContext(context.Background(), &s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(video_name + ".mp4"),
        Body:   &reader{named_pipe},
    })
    if err != nil {
        log.Println(err)
    }
}
where reader is:
// reader wraps an io.Reader and only exposes Read.
type reader struct {
    r io.Reader
}

func (r *reader) Read(p []byte) (int, error) {
    return r.r.Read(p)
}
However, is there a way to wait for the whole process to finish before uploading the HLS output?
I tried waiting for transpose_cmd to finish after the UploadWithContext call, but this makes the uploader quit prematurely.
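For completeness, here is a rough, untested sketch of the ordering I think I need: run the streaming mp4 upload in its own goroutine, wait for the three commands to exit, wait for that upload to finish draining the named pipe, and only then upload the HLS files that ffmpeg wrote to tmp/. The commands, uploader, named pipe and bucket are assumed to be set up as in the snippet above, and uploadHLSDir is just a hypothetical helper name.
func uploadAll(uploader *s3manager.Uploader, namedPipe *os.File,
    compress, tee, transpose *exec.Cmd, bucket, videoName string) error {

    done := make(chan error, 1)
    go func() {
        // Stream the compressed mp4 out of the named pipe while the pipeline runs.
        _, err := uploader.UploadWithContext(context.Background(), &s3manager.UploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(videoName + ".mp4"),
            Body:   &reader{namedPipe},
        })
        done <- err
    }()

    // Wait for the whole ffmpeg/tee/ffmpeg pipeline to finish.
    compress.Wait()
    tee.Wait()
    transpose.Wait()

    // Wait for the streaming upload itself to finish.
    if err := <-done; err != nil {
        return err
    }

    // At this point tmp/ holds the finished .m3u8 playlist and .ts segments.
    return uploadHLSDir(uploader, bucket, "tmp") // hypothetical helper
}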