
Note that I can't first store the file locally -- it's too big.

This (obnoxious) page (scroll all the way to the bottom) seems to give an answer but I'm having trouble disentangling the part that's specific to tape drives:

http://webcache.googleusercontent.com/search?q=cache:lhmh960w2KQJ:www.experts-exchange.com/OS/Unix/SCO_Unix/Q_24249634.html+scp+redirect&cd=3&hl=en&ct=clnk&gl=us

To make this more concrete, here's how you would think it might work:

On local machine:

% echo "pretend this string is a huge amt of data" | scp - remote.com:big.txt

(That's using the convention -- which scp does not in fact support -- of substituting a dash for the source file to tell it to get it from stdin instead.)

Jens Erat
dreeves
10 Answers


You can pipe into ssh and run a remote command. In this case, the remote command is cat > big.txt which will copy stdin into the big.txt file.

echo "Lots of data" | ssh user@example.com 'cat > big.txt'

It's easy and straightforward, as long as you can use ssh to connect to the remote end.
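Since the remote side is just a command reading stdin, the mechanics can be sanity-checked locally by running the same `cat > file` sink in a subshell (a minimal sketch; no network or real host involved):

```shell
# Stand-in for: echo "Lots of data" | ssh user@example.com 'cat > big.txt'
# The "remote" sink here is the same `cat > file`, just run locally.
tmp=$(mktemp -d)
echo "Lots of data" | sh -c "cat > '$tmp/big.txt'"
cat "$tmp/big.txt"      # the file now holds exactly what was piped in
rm -rf "$tmp"
```

For a real transfer only the sink changes: replace the `sh -c` stand-in with `ssh user@example.com 'cat > big.txt'`.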

You can also use nc (NetCat) to transfer the data. On the receiving machine (e.g., host.example.com):

nc -l 1234 > big.txt

This will set up nc to listen to port 1234 and copy anything sent to that port to the big.txt file. Then, on the sending machine:

echo "Lots of data" | nc host.example.com 1234

This command will tell nc on the sending side to connect to port 1234 on the receiver and copy the data from stdin across the network.

However, the nc solution has a few downsides:

  • There's no authentication; anyone could connect to port 1234 and send data to the file.
  • The data is not encrypted, as it would be with ssh.
  • If either machine is behind a firewall, the chosen port would have to be open to allow the connection through and routed properly, especially on the receiving end.
  • Both ends have to be set up independently and simultaneously. With the ssh solution, you can initiate the transfer from just one of the endpoints.
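Given those caveats (especially the lack of authentication and encryption), it's worth verifying the transfer afterwards by comparing checksums on both ends. A local sketch of the sender-side half; the receiver would run `sha256sum big.txt` and compare:

```shell
# Sender: checksum the same data that gets piped to nc.
# A mismatch with the receiver's checksum means the transfer was corrupted
# or tampered with.
sum=$(echo "Lots of data" | sha256sum | cut -d' ' -f1)
echo "$sum"   # 64 hex characters; must match the receiver's value
```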
Barry Brown

Using ssh:

echo "pretend this is a huge amt of data" | ssh user@remote.com 'cat > big.txt'
bpf

Use nc (netcat), which streams the data over the network without ever saving the file locally.

mcandre

Here is an alternative solution:

All of the examples above that suggest ssh+cat assume that cat is available on the destination system.

In my case, the system (a Hetzner backup host) had a very restrictive set of tools offering SFTP but not a complete shell, so ssh+cat was not possible. I came up with a solution that uses the undocumented scp -t flag. The complete script is below.

#!/bin/bash

function join_e
{
  for word in $*; do
    echo -n "--exclude=$word "
  done
}

CDATE=`date +%Y%m%d%H%M%S`

# Make password available to all programs that are started by this shell.
export OPASS=YourSecretPasswordForOpenSslEncryption

#-----------------------------------------------

# Directory and file inclusion list
ILIST=(
  var/lib
)

# Directory and file exclusion list
ELIST=(
  var/lib/postgresql
)

# 1. tar: combine all files into a single tar archive
#      a. Store files and directories in ILIST only.
#      b. Exclude files and directories from ELIST.
# 2. xz: compress as much as you can utilizing 8 threads (-T8)
# 3. openssl: encrypt contents using a password stored in OPASS local environment variable
# 4. cat: concatenate stream with SCP control message, which has to be sent before data
#      a. C0600 - create a file with 600 permissions
#      b. 107374182400 - maximum file size
#         Must be higher or equal to the actual file size.
#         Since we are dealing with STDIN, we have to make an educated guess.
#         I've set this value to roughly 100x the size of my backups.
#      c. stdin - dummy filename (unused)
# 5. ssh: connect to the server
#      a. call SCP in stdin (-t) mode.
#      b. specify destination filename

nice -n 19 bash -c \
   "\
   tar $(join_e ${ELIST[@]}) -cpf - -C / ${ILIST[*]} \
   | xz -c9e -T8 \
   | openssl enc -aes-256-cbc -pass env:OPASS \
   | cat <(echo 'C0600 107374182400 stdin') - \
   | ssh username@server.your-backup.de "\'"scp -t backup-${CDATE}.tar.xz.enc"\'"\
   "

Update 2019.05.08:

As per request, below is a much simpler and shorter version.

#!/bin/sh

# WORKS ON LARGE FILES ONLY

cat filename.ext \
| cat <(echo 'C0600 107374182400 stdin') - \
| ssh user@host.dom 'scp -t filename.ext'

Update 2020.01.18:

Hetzner SFTP connections time out pretty fast. To extend the timeouts, use the following ssh options:

#!/bin/sh

# ...
| ssh -o TCPKeepAlive=yes -o ServerAliveInterval=30 -o ServerAliveCountMax=3 user@host.dom 'scp -t filename.ext'

Thanks, Denis Scherbakov!

When I tried your script on the Hetzner cloud I got

debug1: Sending command: scp -v -t backup-20180420120524.tar.xz.enc
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
Transferred: sent 4168, received 2968 bytes, in 0.0 seconds
Bytes per second: sent 346786.6, received 246944.0

But only a file with no content got created. Since the actual content is already encrypted with openssl, we don't really need scp. The Linux built-in ftp client also has great piping capabilities, so here's my (still quite manual) solution:

#!/bin/bash

function join_e
{
  for word in $*; do
    echo -n "--exclude=$word "
  done
}


# Directory and file inclusion list
ILIST=(
  /home
)

# Directory and file exclusion list
ELIST=(
  var/lib/postgresql
)



export OPASS=fileencryptionpassword

nice -n 19 bash -c \
   "\
   tar $(join_e ${ELIST[@]}) -cpvf - -C / ${ILIST[*]} \
   | xz -c9e -T8 \
   | openssl enc -aes-256-cbc -pass env:OPASS \
   "

# decrypt with:
# cat backup.tar.xz.enc | openssl aes-256-cbc -d -pass env:OPASS | xz -dc | tar xv
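The encrypt/decrypt pair can be verified locally before trusting it with a real backup (a minimal round-trip sketch; OPASS is the same placeholder password as above, and newer openssl versions may print a deprecation warning about the legacy key derivation, which adding -pbkdf2 to both sides silences):

```shell
# Encrypt a test string, decrypt it again, and confirm it survives the trip.
export OPASS=fileencryptionpassword
out=$(echo "hello backup" \
  | openssl enc -aes-256-cbc -pass env:OPASS 2>/dev/null \
  | openssl enc -aes-256-cbc -d -pass env:OPASS 2>/dev/null)
echo "$out"
```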

# invocation procedure for ftp:
# $ ftp -np
# ftp> open storage.com
# ftp> user  storageuser storagepass
# ftp> put "| bash ~/backup.sh" backup.tar.xz.enc

Use a FIFO (named pipe):

mknod mypipe p
scp mypipe destination &
ls > mypipe
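The mechanics are easy to try locally without scp: a background reader drains the pipe while the writer fills it (a local sketch; in the answer above the reader is scp itself, and note that whether scp accepts a FIFO as its source may depend on the implementation, since scp stats the source file to learn its size):

```shell
# Local stand-in for `scp mypipe destination &`: a background cat
# reads from the FIFO into a file while we write into the other end.
tmp=$(mktemp -d)
mkfifo "$tmp/mypipe"
cat "$tmp/mypipe" > "$tmp/out.txt" &
echo "streamed through a fifo" > "$tmp/mypipe"
wait                        # writer's close signals EOF to the reader
cat "$tmp/out.txt"
rm -rf "$tmp"
```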
Sjoerd

1) Create a FIFO and start cat reading from it on the remote machine:

Remote$ mknod /tmp/mypipe p    # equivalently: mkfifo /tmp/mypipe
Remote$ cat /tmp/mypipe > big.txt & exit

2) Send data to the remote FIFO from the local machine with scp:

Local$ echo "the huge amt of data string" | scp /dev/stdin user@Remote:/tmp/mypipe

I had a similar problem with macOS/zsh. Based on Ray Butterworth's answer here and this one from Adaephon elsewhere on Super User, this worked for me:

scp =(echo "Hi There!") remote_server:/Users/me/new_file.txt

Just make sure to give a filename in the second argument to scp, otherwise you'll get an unexpected name on the remote server.

JS.

You can use curl for this job if, for example, cat is not available on the remote server. Using the sftp protocol, curl will upload anything you send to its stdin over ssh. You can authenticate with an ssh key instead of a password. I didn't test the size limit, but it should work like the other solutions; it's authenticated, and the syntax is clean.

Just don't try to switch curl to the scp protocol, because it won't work.

echo "pretend this string is a huge amt of data" | curl -k -T - -u user:password sftp://remote.host/big.txt
echo "Lots of data" | ssh user@example.com 'tee big.txt'

With sudo:

echo "Lots of data" | ssh user@example.com 'sudo -S tee big.txt'
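The reason tee matters in the sudo variant: a plain `sudo cat > big.txt` would have the redirection performed by the unprivileged shell, while tee opens the file from inside the elevated process. The local behaviour of the tee sink (a sketch; no ssh or sudo involved):

```shell
# tee copies stdin both to the named file and to stdout;
# discard the stdout copy if you only want the file written.
tmp=$(mktemp -d)
echo "Lots of data" | tee "$tmp/big.txt" > /dev/null
cat "$tmp/big.txt"
rm -rf "$tmp"
```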
Vincent