Here's another take on making an arbitrary number of sequential, dependent requests using Cycle.js and the @cycle/fetch driver.
(This uses the GitHub users API. The users query returns 30 users per page; the since URL parameter is a user id, and the query starts at the next user id after it.)
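To see the shape of data the driver will be dealing with, a single page can be fetched directly. A minimal sketch (the since value here is arbitrary):

fetch('https://api.github.com/users?since=19473200')
  .then(res => res.json())
  .then(users => {
    // users is an array of up to 30 objects like { login: '...', id: 19473201, ... }
    console.log(users.length, users[29] && users[29].id); // a full page's last id seeds the next request
  });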
First the primary part of the main function with comments:
const listResponse$ = sources.FETCH // response stream from the FETCH driver
  .mergeAll()
  .flatMap(res => res.json())
  .scan(
    (userstotals, users) =>
      [
        userstotals[0] + 1,          // page count
        users[29] && users[29].id,   // last id on a full page
        userstotals[2].concat(users) // collect all users
      ],
    [0, undefined, []] // seed accumulator
  )
  .share(); // allows the stream to be split
// Branch #1 - cycle again for more pages
const listRequest$ = listResponse$
  .filter(users =>
    0 < users[1] &&     // a full-page last id exists
    maxpages > users[0] // fewer than maxpages queried so far
  )
  .startWith('initial')
  .map(users =>
    `https://api.github.com/users?since=${
      (!isNaN(parseInt(users[1], 10)) && users[1]) || // last id of a full page
      idstart                                         // default starting id
    }`
  );
// Branch #2 - display
const dom$ = listResponse$
  .map(userstotals => div(JSON.stringify(userstotals[2])));
(This is an updated answer. I realized the scans can be combined into one.)
EXPLANATION: First pull the response stream from the FETCH source, flatten it, and extract the JSON. Then scan to build up three pieces of state: how many pages have been queried so far (later compared to maxpages so the query doesn't exceed the predetermined limit), the last id of a full page, if one exists, and the concatenation of the current page of users onto all the pages accumulated so far. After accumulating the response information, share the stream so it can be split into two branches, as illustrated below.
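To make the accumulator concrete, here is roughly how the three-element array evolves over a run (the ids are made up for illustration):

// seed:                     [0, undefined, []]
// after page 1 (30 users):  [1, 19473230, [/* 30 users */]]
// after page 2 (30 users):  [2, 19473260, [/* 60 users */]]
// after page 3 (11 users):  [3, undefined, [/* 71 users */]] // users[29] is undefined on a short page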
The first branch re-cycles the request back through the FETCH driver to query more pages, but first it filters on two conditions: whether the last page has been reached and how many pages have been queried. If the stored id is not a number, the last page has already been reached and there are no more pages to query, so the cycle stops. The cycle also stops once the number of pages queried reaches maxpages.
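Pulled out as a standalone predicate, the filter condition reads like this (shouldFetchMore is a hypothetical helper, not in the original; maxpages comes from the enclosing scope):

// true only while there are more pages worth fetching
function shouldFetchMore(userstotals) {
  const pagecount = userstotals[0];
  const lastid = userstotals[1]; // undefined once a short (final) page arrives
  return 0 < lastid && pagecount < maxpages;
}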
The second branch simply reaches into the accumulator for the full list of users, JSON.stringifys it, and wraps it in a virtual DOM element (the div helper) to send to the DOM driver for display.
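If the raw JSON dump is too spartan, a variant of the display branch (not in the original) could add a summary line; @cycle/dom's div helper accepts an array of children:

const dom$ = listResponse$
  .map(userstotals =>
    div([
      div(`pages queried: ${userstotals[0]}, users collected: ${userstotals[2].length}`),
      div(JSON.stringify(userstotals[2]))
    ])
  );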
And here's the complete script:
import Cycle from '@cycle/rx-run';
import {div, makeDOMDriver} from '@cycle/dom';
import {makeFetchDriver} from '@cycle/fetch';
function main(sources) { // sources provides the properties DOM and FETCH (event streams)
  const acctok = ''; // put your token here, if necessary
  const idstart = 19473200; // where do you want to start?
  const maxpages = 10;
  const listResponse$ = sources.FETCH
    .mergeAll()
    .flatMap(res => res.json())
    .scan(
      (userstotals, users) =>
        [
          userstotals[0] + 1,          // page count
          users[29] && users[29].id,   // last id on a full page
          userstotals[2].concat(users) // collect all users
        ],
      [0, undefined, []]
    )
    .share();
  const listRequest$ = listResponse$
    .filter(function (users) {
      return 0 < users[1] && maxpages > users[0];
    })
    .startWith('initial')
    .map(users =>
      `https://api.github.com/users?since=${
        (!isNaN(parseInt(users[1], 10)) && users[1]) || // last id of a full page
        idstart                                         // default starting id
      }` //&access_token=${acctok}`
    );
  const dom$ = listResponse$
    .map(userstotals => div(JSON.stringify(userstotals[2])));
  return {
    DOM: dom$,
    FETCH: listRequest$
  };
}
Cycle.run(main, {
  DOM: makeDOMDriver('#main-container'),
  FETCH: makeFetchDriver()
});
(My first answer, left for posterity. Notice the two scans.)
const listResponse$ = sources.FETCH
  .mergeAll()
  .flatMap(res => res.json())
  .scan( // <-- scan #1: count pages, keep the current page
    (userscount, users) => [userscount[0] + 1, users],
    [0, []]
  )
  .share();
const listRequest$ = listResponse$
  .filter(function (users) {
    return users[1][29] && users[1][29].id &&
      maxpages > users[0];
  })
  .startWith('initial')
  .map(users =>
    `https://api.github.com/users?since=${
      (users[1][users[1].length - 1] && users[1][users[1].length - 1].id) ||
      idstart
    }`
  );
const dom$ = listResponse$
  .scan(function (usersall, users) { // <-- scan #2: accumulate all pages
    usersall.push(users);
    return usersall;
  }, [])
  .map(res => div(JSON.stringify(res)));
By scanning only once, up front, I then needed to grab the last id of a full page, if it exists, and store it in the accumulator.
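Side by side, the difference between the two versions comes down to the accumulator shape:

// first answer, scan #1:  [pagecount, currentPageUsers]          // display branch needs a second scan
// updated answer:         [pagecount, lastFullPageId, allUsers]  // one scan serves both branches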