I am working on a project (Rails 3.0.15, ruby 1.9.3-p125-perf) where the database is on localhost and the users table has a bit more than 100K records.
Using
ORDER BY RAND()
is quite slow:
User.order("RAND()").first
becomes
SELECT users.* FROM users ORDER BY RAND() LIMIT 1
and takes 8 to 12 seconds to respond!
Rails log:
User Load (11030.8ms)  SELECT users.* FROM users ORDER BY RAND()
  LIMIT 1
From MySQL's EXPLAIN:
+----+-------------+-------+------+---------------+------+---------+------+--------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+--------+---------------------------------+
|  1 | SIMPLE      | users | ALL  | NULL          | NULL | NULL    | NULL | 110165 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+--------+---------------------------------+
You can see that no index is used (possible_keys = NULL), a temporary table is created, and a filesort over all ~110K rows is needed to order them by the random value (Extra = Using temporary; Using filesort).
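Rails 3.0 relations have no explain method (that only arrived in Rails 3.2), but you can feed the generated SQL to EXPLAIN yourself from the console; a minimal sketch:

# Inspect the query plan for the slow query from the Rails console.
sql  = User.order("RAND()").limit(1).to_sql
rows = ActiveRecord::Base.connection.execute("EXPLAIN #{sql}")
rows.each { |row| p row }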
On the other hand, by splitting the query into two parts and doing the random pick in Ruby, we get a substantial improvement in response time.
users = User.scoped.select(:id);nil
User.find(users[Random.rand(users.length)].id)
(;nil for console use)
Rails log:
User Load (25.2ms)  SELECT id FROM users
User Load (0.2ms)   SELECT users.* FROM users WHERE users.id = 106854 LIMIT 1
and MySQL's EXPLAIN shows why:
+----+-------------+-------+-------+---------------+--------------------------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key                      | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+--------------------------+---------+------+--------+-------------+
|  1 | SIMPLE      | users | index | NULL          | index_users_on_user_type | 2       | NULL | 110165 | Using index |
+----+-------------+-------+-------+---------------+--------------------------+---------+------+--------+-------------+
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | users | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
We now touch only indexes (a covering index scan for the ids, then a primary-key lookup for the row) and do the job over 400 times faster: 11,030 ms versus roughly 25 ms.
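If you want to reproduce the comparison on your own data, here is a quick console sketch using Ruby's stdlib Benchmark module (exact numbers will of course vary with your setup):

require 'benchmark'
# Time both approaches once each; wall-clock only, so results are indicative.
slow = Benchmark.realtime { User.order("RAND()").first }
fast = Benchmark.realtime do
  users = User.scoped.select(:id).to_a
  User.find(users[Random.rand(users.length)].id)
end
puts "ORDER BY RAND(): #{(slow * 1000).round}ms, two-step: #{(fast * 1000).round}ms"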
UPDATE:
As pointed out by icantbecool in the comments, the above solution has a flaw if there are deleted records in the table. A workaround for that can be
users_count = User.count
User.scoped.limit(1).offset(rand(users_count)).first
which translates to two queries:
SELECT COUNT(*) FROM `users`
SELECT `users`.* FROM `users` LIMIT 1 OFFSET 148794
and runs in about 500ms.
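This version also packages naturally as a class method; a minimal sketch (random_by_offset is a name I made up, not a Rails API), with a guard for an empty table:

class User < ActiveRecord::Base
  # Random record via COUNT + OFFSET; tolerates gaps left by deleted rows.
  # Returns nil when the table is empty.
  def self.random_by_offset
    total = count
    return nil if total.zero?
    limit(1).offset(Random.rand(total)).first
  end
end

Note that MySQL still has to walk past all the skipped rows to reach the one it returns, which is where the ~500ms goes, but that stays far below the cost of ORDER BY RAND().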