I understand that LIKE queries with a leading wildcard (such as '%ar%') are slow because they can't use an ordinary B-tree index. However, I am curious about the performance hit in a situation like this:
Say I have a table like:
user_id  |  message 
-------------------
   1     |  foo bar baz
   1     |  bar buz qux
   .     .      .
   .     .      .
   2     |  bux bar foo
   2     |  bar
where I have, say, 1 million rows across 10,000 users, so each user has about 100 messages.
Clearly a search like:
SELECT * FROM table WHERE message LIKE '%ar%';
is going to be very slow. However, in my application I would only ever search within a single user's messages:
SELECT * FROM table WHERE message LIKE '%ar%' AND user_id = 2;
where the user_id column would be indexed.
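For concreteness, the index I have in mind is just a plain B-tree on user_id (using messages as a stand-in name for the table sketched above):

-- Assumed setup: "messages" is a placeholder name for the table above
CREATE INDEX messages_user_id_idx ON messages (user_id);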
Am I right in thinking that, in a scenario like this, Postgres would first use the index on user_id to narrow the search down to that user's ~100 rows, and only then apply the slow LIKE filter to those rows rather than to the full table, thus limiting the performance hit?
And also that a query like this wouldn't get significantly slower with 10 or 100 million users, as long as any one user still only had ~100 messages?
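If it helps, this is how I was planning to check; my assumption (not something I've verified) is that the plan would show an Index Scan or Bitmap Index Scan on the user_id index, with the LIKE condition appearing only as a Filter over the matched rows:

-- Inspect the actual plan and timings (again using the placeholder table name "messages")
EXPLAIN ANALYZE
SELECT * FROM messages WHERE message LIKE '%ar%' AND user_id = 2;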