14

I want to monitor the log file of my application. The application, however, doesn't run locally; it runs on a SaaS platform, and its log file is exposed over HTTP and WebDAV. So an equivalent of tail -f that works on URLs would do a great job for me.

P.S. If you know of any other tools that can monitor remote files over HTTP, that would also be of help. Thanks

munch
  • 243

4 Answers

15

There may be a specific tool for this, but you can also do it using wget. Open a terminal and run this command:

while :; do 
    sleep 2
    wget -c -O log.txt -o /dev/null http://yoursite.com/log
done

This will download the log file every two seconds and save it into log.txt, appending the newly fetched data to what is already there (-c means continue the download, i.e. fetch only the bytes that are missing locally, which requires the server to support range requests). The -o sends wget's own status messages to /dev/null so they don't clutter the terminal.

So, now you have a local copy of log.txt and can run tail -f on it:

tail -f log.txt 
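Putting the two pieces together, here is a minimal sketch that runs the download loop in the background and tails the growing copy from a single script (http://yoursite.com/log is the same placeholder URL as above):

#!/bin/bash
touch log.txt
(while :; do
    sleep 2
    wget -c -O log.txt -o /dev/null http://yoursite.com/log
done) &
poller=$!
trap 'kill "$poller"' EXIT    # stop polling when tail is interrupted

tail -f log.txt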
terdon
  • 54,564
6

curl with its range option, in combination with watch, can be used to achieve this:

RANGES

HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only one or more subparts of a specified document. Curl supports this with the -r flag.

watch -n <interval> 'curl -s -r -<bytes> <url>'

For example

watch -n 30 'curl -s -r -2000 http://yoursite.com/log'

This will retrieve the last 2000 bytes of the log every 30 seconds.
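The leading dash in -r -2000 is curl's range syntax for "the last N bytes"; explicit start-end ranges work as well. For instance (a quick sketch, same placeholder URL as above):

curl -s -r 0-499 http://yoursite.com/log     # first 500 bytes
curl -s -r -2000 http://yoursite.com/log     # last 2000 bytes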

Note: for self-signed HTTPS certificates, use curl's --insecure option.
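This approach only works if the server honors byte-ranges at all. A quick way to check (a sketch, again using the placeholder URL) is to look for the Accept-Ranges header in the response:

curl -sI http://yoursite.com/log | grep -i '^accept-ranges'

If this prints accept-ranges: bytes, range requests are supported; if it prints nothing, the server will most likely ignore -r and send the whole file each time.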

ghm1014
  • 301
5

I answered the same question over here with a complete shell script that takes the URL as its argument and tail -f's it. Here's a copy of that answer:


This will do it:

#!/bin/bash

# Download into a temporary file, fetching only the bytes we don't
# already have, and tail that file as it grows.
file=$(mktemp)
trap 'rm -f "$file"' EXIT

(while true; do
    # Request the byte range from the current local size to the end.
    # shellcheck disable=SC2094
    curl --fail -r "$(stat -c %s "$file")"- "$1" >> "$file"
done) &
pid=$!
# Redefine the trap so the background loop is killed on exit too.
trap 'kill $pid; rm -f "$file"' EXIT

tail -f "$file"

It's not very friendly to the web server. You could replace the true with sleep 1 to be less resource-intensive.

Like tail -f, you need to press ^C when you are done watching the output, even after the output has stopped.
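For reference, a typical way to use it (tail-url.sh is just a hypothetical name for wherever you saved the script, and the URL is a placeholder):

chmod +x tail-url.sh
./tail-url.sh http://yoursite.com/log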

Brian
  • 51
0

I have created a PowerShell script which:

  1. Gets the content from the given URL every 20 seconds
  2. Gets only a specific amount of data using the "Range" HTTP request header

while ($true) {
    # Request only the last 1000 bytes of the remote file (Range: bytes=-1000).
    $request = [System.Net.WebRequest]::Create("https://raw.githubusercontent.com/fascynacja/blog-demos/master/gwt-marquee/pom.xml")
    $request.AddRange(-1000)
    $response = $request.GetResponse()
    $stream = $response.GetResponseStream()
    $reader = New-Object System.IO.StreamReader($stream)
    $content = $reader.ReadToEnd()
    $reader.Close()
    $stream.Close()
    $response.Close()

    Write-Output $content

    # Wait before polling again.
    Start-Sleep -Seconds 20
}

You can adjust the range and the interval to your own needs. If needed, you can also easily add color highlighting for specific search terms.

fascynacja
  • 101
  • 2