I am trying to use hyper to fetch the content of an HTML page and would like to synchronously return the output of a future. I realize I could have picked a better example, since synchronous HTTP requests already exist, but what I really want to understand is whether it is possible to return a value from an async computation.
extern crate futures;
extern crate hyper;
extern crate hyper_tls;
extern crate tokio;

use futures::{future, Future, Stream};
use hyper::Client;
use hyper::Uri;
use hyper_tls::HttpsConnector;
use std::str;

fn scrap() -> Result<String, String> {
    // Defer the whole request until the future is polled.
    let scraped_content = future::lazy(|| {
        let https = HttpsConnector::new(4).unwrap();
        let client = Client::builder().build::<_, hyper::Body>(https);
        client
            .get("https://hyper.rs".parse::<Uri>().unwrap())
            .and_then(|res| {
                res.into_body().concat2().and_then(|body| {
                    let s_body: String = str::from_utf8(&body).unwrap().to_string();
                    futures::future::ok(s_body)
                })
            }).map_err(|err| format!("Error scraping web page: {:?}", &err))
    });
    // Block the current thread until the future resolves. No executor is
    // running at this point, which is presumably why hyper fails to spawn
    // its connection task.
    scraped_content.wait()
}

fn read() {
    // Same request, but driven to completion by tokio::run instead of wait().
    let scraped_content = future::lazy(|| {
        let https = HttpsConnector::new(4).unwrap();
        let client = Client::builder().build::<_, hyper::Body>(https);
        client
            .get("https://hyper.rs".parse::<Uri>().unwrap())
            .and_then(|res| {
                res.into_body().concat2().and_then(|body| {
                    let s_body: String = str::from_utf8(&body).unwrap().to_string();
                    println!("Reading body: {}", s_body);
                    Ok(())
                })
            }).map_err(|err| {
                println!("Error reading webpage: {:?}", &err);
            })
    });
    // tokio::run starts a runtime, drives the future and anything it
    // spawns to completion, and blocks until everything is done.
    tokio::run(scraped_content);
}

fn main() {
    read();
    let content = scrap();
    println!("Content = {:?}", &content);
}
The example compiles and the call to read() succeeds, but the call to scrap() fails with the following error:
Content = Err("Error scraping web page: Error { kind: Execute, cause: None }")
I understand that I failed to launch the task properly before calling .wait() on the future, but I couldn't find out how to do it properly, assuming it is even possible.