We have a table of 180m rows, 20 GB in size. Table DDL is:
create table app.table
(
    a_id    integer   not null,
    b_id    integer   not null,
    c_id    integer   not null,
    d_id    integer   not null,
    e_id    integer   not null,
    f_id    integer   not null,
    a_date  timestamp not null,
    date_added          timestamp,
    last_date_modified  timestamp default now()
);
Value distributions:
- a_id has a range of 0-160,000,000
- b_id has one value (this table is a copy of a single partition of a partitioned table, and this ID happened to be the partition key)
- c_id has a range of 0-4
- d_id has one value (currently)
- e_id has one value (currently)
 
The primary key is a composite key:
alter table app.table add constraint table_pk primary key (a_id, b_id, c_id, d_id, e_id);
We're running an r6g.xlarge cluster on Aurora PostgreSQL v12.8. It's a single instance with no other traffic hitting it. We've run ANALYZE and VACUUM ANALYZE against the table:
INFO:  "table": scanned 30000 of 1711284 pages, containing 3210000 live rows and 0 dead rows; 30000 rows in sample, 183107388 estimated total rows
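For reference, the maintenance commands were along these lines (the exact options are assumed; VERBOSE is what produces the INFO line above):

```sql
-- Assumed form of the maintenance commands run against the table;
-- the VERBOSE option emits the INFO line shown above.
ANALYZE app.table;
VACUUM (ANALYZE, VERBOSE) app.table;
```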
Problem
This query takes 9 seconds to run when shared_buffers is cold (or as cold as we can get it):
select a_id, b_id, c_id, d_id, a_date
from app.table ts
where a_id in ( <5000 values> )
and b_id = 34
and c_id in (2,3)
and d_id = 0
EXPLAIN output:
Index Scan using table_pk on table ts  (cost=0.57..419134.91 rows=237802 width=24) (actual time=8.335..9803.424 rows=5726 loops=1)
  Index Cond: ((a_id = ANY ('{66986803,90478329,...,121697593}'::integer[])) AND (b_id = 34))
  Filter: (c_id = ANY ('{2,3}'::integer[]))
  Rows Removed by Filter: 3
  Buffers: shared hit=12610 read=10593
  I/O Timings: read=9706.055
Planning:
  Buffers: shared hit=112 read=29
  I/O Timings: read=29.227
Planning Time: 33.437 ms
Execution Time: 9806.271 ms
We think this is unreasonably slow. When the query is run again, and thus served from cache, it takes 25 ms. We'd rather not prewarm if possible.
In any case, we'd rather have better performance for this sort of query, around the 1-2 second mark if possible. Any ideas on how we could improve the performance?
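For completeness, explicitly prewarming (the approach we'd prefer to avoid) would look roughly like this, assuming the pg_prewarm extension, which Aurora PostgreSQL supports:

```sql
-- Sketch only: pull the PK index (and optionally the heap) into shared_buffers.
-- pg_prewarm returns the number of blocks it loaded.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('app.table_pk');
SELECT pg_prewarm('app.table');
```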
EDIT - Effect of adding a covering index:
Tried adding a covering index to include the "a_date":
create unique index covering_idx on app.table (a_id, b_id, c_id, d_id, e_id) include (a_date);
EXPLAIN results after re-running the query (with cold shared_buffers cache):
Index Only Scan using covering_idx on table ts (cost=0.57..28438.58 rows=169286 width=24) (actual time=8.020..7028.442 rows=5658 loops=1)
  Index Cond: ((a_id = ANY ('{134952505,150112033,…,42959574}'::integer[])) AND (b_id = 34))
  Filter: ((e_id = ANY ('{0,0}'::integer[])) AND (c_id = ANY ('{2,3}'::integer[])))
  Rows Removed by Filter: 2
  Heap Fetches: 0
  Buffers: shared hit=12353 read=7733
  I/O Timings: read=6955.935
Planning:
  Buffers: shared hit=80 read=8
  I/O Timings: read=8.458
Planning Time: 11.930 ms
Execution Time: 7031.054 ms
Effect when using Bitmap Heap Scan vs. Index Scan:
We've discovered that we get a speed-up when the query is executed using a Bitmap Heap Scan rather than an Index Scan. We found this by forcing the plan using pg_hint_plan.
When adding /*+ BitmapScan(table) */:
Bitmap Heap Scan on table ts (cost=22912.96..60160.79 rows=9842 width=24) (actual time=3972.237..4063.417 rows=5657 loops=1)
  Recheck Cond: ((a_id = ANY ('{24933126,19612702,27100661,73628268,...,150482461}'::integer[])) AND (b_id = 34))
  Filter: ((d_id = ANY ('{0,0}'::integer[])) AND (c_id = ANY ('{2,3}'::integer[])))
  Rows Removed by Filter: 4
  Heap Blocks: exact=5644
  Buffers: shared hit=14526 read=11136
  I/O Timings: read=22507.527
  ->  Bitmap Index Scan on table_pk (cost=0.00..22898.00 rows=9842 width=0) (actual time=3969.920..3969.920 rows=5661 loops=1)
       Index Cond: ((a_id = ANY ('{24933126,19612702,27100661,...,150482461}'::integer[])) AND (b_id = 34))
       Buffers: shared hit=14505 read=5513
       I/O Timings: read=3923.878
Planning:
  Buffers: shared hit=6718
Planning Time: 21.493 ms
Execution Time: 4066.582 ms
Currently, we are thinking of forcing this plan in production using pg_hint_plan - but we'd rather understand why the planner is opting for the less optimal plan! We have also re-run VACUUM ANALYZE with default_statistics_target set to 1000.
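The statistics bump was applied roughly as follows (the session-level setting is an assumption; per-column SET STATISTICS would be the persistent alternative):

```sql
-- Assumed form: raise the sample size for this session, then re-gather stats.
SET default_statistics_target = 1000;
VACUUM ANALYZE app.table;
```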