I am new to Cassandra and am running a user-defined aggregate on a 3-node Cassandra cluster on my local machine. When I run this aggregate on a smaller data set, the result is fine and as expected. But when the data set is large, the query fails with this error:
    OperationTimedOut: errors={'127.0.0.1': 'Client request timeout. See Session.execute_async'}, last_host=127.0.0.1
I found the questions below, which are similar to my issue, but they are unanswered:
- How to set a timeout and throttling rate for a large user defined aggregate query
- Cassandra CQLSH OperationTimedOut error=Client request timeout. See Session.execute[_async](timeout)
I have modified cassandra.yaml; the timeout settings are now:
    read_request_timeout_in_ms: 555000
    range_request_timeout_in_ms: 10000
    write_request_timeout_in_ms: 2000
    counter_write_request_timeout_in_ms: 5000
    cas_contention_timeout_in_ms: 1000
    truncate_request_timeout_in_ms: 60000
    request_timeout_in_ms: 10000
But this did not help. What is the correct configuration for these timeouts so that the same query can run on the large data set without timing out?
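Since the error message mentions a client request timeout ("See Session.execute_async"), I wonder whether the limit I am hitting is actually on the cqlsh/driver side rather than in cassandra.yaml. If so, is something like the following the right way to raise it? (The 3600-second value is just an arbitrary number I picked for testing.)

    # raise cqlsh's client-side request timeout (in seconds) for one session
    cqlsh --request-timeout=3600

or, to make it permanent, in ~/.cassandra/cqlshrc:

    [connection]
    ; client-side request timeout in seconds (the default is 10)
    request_timeout = 3600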
Aggregate code:
    CREATE FUNCTION countSessions(datamap map<text, int>, host text)
    RETURNS NULL ON NULL INPUT
    RETURNS map<text, int>
    LANGUAGE java AS
    '
        // increment the running count for this host in the state map
        Integer countValue = (Integer) datamap.get(host);
        if (countValue == null) {
            countValue = 1;
        } else {
            countValue++;
        }
        datamap.put(host, countValue);
        return datamap;
    ';

    CREATE OR REPLACE AGGREGATE hostaggregate(text)
    SFUNC countSessions
    STYPE map<text, int>
    INITCOND {};
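For reference, the aggregate is invoked with a full-table query along these lines (the table and column names here are placeholders, not my real schema):

    -- placeholder names; the real query scans the full (large) table
    SELECT hostaggregate(host) FROM sessions;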
Thanks and regards,
Vibhav
PS - If anybody chooses to downvote this question, please mention the reason in the comments.