When I try to update 5000 rows using knexjs, I get the error "Timeout acquiring a connection. The pool is probably full.". When I looked at CPU usage, I found the postgres pids constantly at 90-98%, which is not normal. I tried calling destroy() after every knex query, but that just tears down the connection and didn't solve the problem.
This is the code I'm using:
const knex = require('knex');

const knexDb = knex({
  client: 'pg',
  connection: {
    host: '127.0.0.1',
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    port: process.env.DB_PORT
  }
});
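Note that I don't set any pool options, so as far as I know knex falls back to its default pool (min: 2, max: 10). I assume firing 5000 queries at once is what exhausts it; something like the snippet below would raise the cap, but I doubt a bigger pool is the real fix:

const knexDb = knex({
  client: 'pg',
  connection: { /* same settings as above */ },
  // knex's pool option (defaults are min: 2, max: 10);
  // with 5000 concurrent queries the excess just queues and times out
  pool: { min: 2, max: 30 }
});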
arrayWith5ThousandObj.forEach(data => {
  // one UPDATE per element -- this sends 5000 queries to the pool at once
  knexDb('users')
    .where({ user: data.user })
    .update({ product: data.product })
    .catch(err => console.error('update user products', err));
});
This is a loop that repeats every 1 minute. I also tried .finally(() => knexDb.destroy()), but that destroys the connection pool and I then get the error that a connection couldn't be acquired.
I want to constantly update 5000 rows, or more like 10,000+, using knexjs, and I think PostgreSQL can handle this; otherwise, how would large websites run tens of thousands of queries every minute without a problem? The problem is not the server: it has 10 CPUs and 16 GB of RAM, so resources aren't an issue. I stopped all running processes on the server except this app, and the postgres pids used almost no CPU at all, so the problem is the large number of individual queries being issued. Is there a bulk update so I can update all 10,000+ rows at once using knexjs?
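For reference, this is roughly the kind of single-statement bulk update I'm hoping knex can express (just a sketch using knex.raw with UPDATE ... FROM (VALUES ...); I don't know if knex has a built-in for this, and the "user" quoting is my guess since user is a reserved word in Postgres):

function bulkUpdateUsers(rows) {
  // bind all (user, product) pairs into one VALUES list
  const bindings = [];
  const valuesSql = rows
    .map(row => {
      bindings.push(row.user, row.product);
      return '(?, ?)';
    })
    .join(', ');

  // one round trip instead of 5000 individual UPDATEs;
  // depending on column types, casts like v."user"::int may be needed
  return knexDb.raw(
    `UPDATE users AS u
     SET product = v.product
     FROM (VALUES ${valuesSql}) AS v("user", product)
     WHERE u."user" = v."user"`,
    bindings
  );
}

bulkUpdateUsers(arrayWith5ThousandObj)
  .catch(err => console.error('bulk update user products', err));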
I recently tried this solution:
return knexDb.transaction(trx => {
    const queries = arrayWith5ThousandObj.map(data =>
        knexDb('users')
            .where({ user: data.user })
            .update({ product: data.product })
            .transacting(trx) // run every update inside the same transaction
    );

    return Promise.all(queries) // wait for all updates to finish
        .then(trx.commit)       // commit if they all succeed
        .catch(trx.rollback);   // roll back if any of them fails
});
but I get this error:
{ error: deadlock detected
   at Connection.parseE (/*********/connection.js:601:11)
   at Connection.parseMessage (/*********/connection.js:398:19)
   at Socket.<anonymous> (/**********/connection.js:120:22)
   at Socket.emit (events.js:189:13)
   at addChunk (_stream_readable.js:284:12)
   at readableAddChunk (_stream_readable.js:265:11)
   at Socket.Readable.push (_stream_readable.js:220:10)
   at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
 name: 'error',
 length: 340,
 severity: 'ERROR',
 code: '40P01',
 detail:
  'Process 9811 waits for ShareLock on transaction 443279355; blocked by process 9808.\nProcess 9808 waits for ShareLock on transaction 443279612; blocked by process 9811.',
 hint: 'See server log for query details.',
 position: undefined,
 internalPosition: undefined,
 internalQuery: undefined,
 where: 'while locking tuple (1799,4) in relation "users"',
 schema: undefined,
 table: undefined,
 column: undefined,
 dataType: undefined,
 constraint: undefined,
 file: 'deadlock.c',
 line: '1140',
 routine: 'DeadLockReport' }
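My guess at what's happening: since the job runs every minute and a 5000-row transaction can take longer than that, two runs overlap and lock the same rows in different orders, which would match the two processes in the detail above. Would sorting the array by user before building the queries avoid the deadlock (a sketch below, so overlapping transactions acquire row locks in a consistent order, assuming user values are comparable), or is a single bulk UPDATE the right fix?

// sort by the key being locked so concurrent transactions
// touch rows in the same order
const sorted = [...arrayWith5ThousandObj].sort((a, b) =>
  a.user < b.user ? -1 : a.user > b.user ? 1 : 0
);

return knexDb.transaction(trx =>
  Promise.all(
    sorted.map(data =>
      knexDb('users')
        .where({ user: data.user })
        .update({ product: data.product })
        .transacting(trx)
    )
  ).then(trx.commit).catch(trx.rollback)
);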