Why it doesn't work
This:
 - name: Testing DB to make sure it is available
   command: psql -U bob image
   register: b
 - debug: b
 - name: Verifying Tables exist in Image
   shell: \d image
   register: c
 - debug: c
doesn't do what you think it does.
The first command runs psql -U bob image. That starts an interactive psql session, and psql sits there waiting for input on stdin. Ansible will never send any; it is simply waiting for the command you specified to exit so it can check the exit code.
So Ansible waits for psql to exit, and psql waits for Ansible to send some input.
Each task in Ansible is independent: the shell and command modules start a fresh process every time, so nothing (working directory, environment, an open psql session) carries over to the next task, as the short illustration below shows. You simply can't do this the way you expect.
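To make that concrete, here's a throwaway pair of tasks (not from your playbook) showing that state set up in one shell task is gone by the next:
- name: Change directory in one task
  shell: cd /tmp && pwd
- name: The next task starts in a fresh shell
  shell: pwd
  register: where
- debug: var=where.stdout
The second pwd will not print /tmp; the cd from the previous task has no effect on it.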
Even if psql exited after the first task (or went to the background), you'd just get an error from the second task like:
bash: d: command not found
So the way you're trying to do this just isn't going to work.
How to do it
You need to run each task as a separate psql command, with a command string:
 - name: Testing DB to make sure it is available
   command: psql -U bob image -c 'SELECT 1;'
 - name: Verifying Tables exist in Image
   command: psql -U bob image -c '\d image'
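If you still want to capture and inspect the output, as in your original play, register works fine with this form. Note that debug needs var= (or msg=); the bare - debug: b in your play isn't valid:
 - name: Verifying Tables exist in Image
   command: psql -U bob image -c '\d image'
   register: c
 - debug: var=c.stdout_lines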
... or by feeding the SQL on standard input, except that older Ansible doesn't support supplying a variable as stdin to a command. The command and shell modules gained a stdin option in Ansible 2.4; a sketch follows.
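A minimal sketch of the stdin approach, assuming Ansible 2.4 or newer (database and user taken from your example):
- name: Testing DB to make sure it is available
  command: psql -U bob -qAt image
  args:
    stdin: "SELECT 1;"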
... or with a (possibly templated) SQL script:
- name: Template sql script
  template: src="my.sql.j2" dest="{{sometemplocation}}/my.sql"
- name: Execute sql script
  shell: "psql {{sometemplocation}}/my.sql"
- name: Delete sql script
  file: path="{{sometemplocation}}/my.sql" state=absent
Alternatively, you can use Ansible's built-in PostgreSQL modules to do it, but then you cannot use the psql client's backslash commands like \d; you have to stick to plain SQL and query information_schema for table info, etc. A sketch follows.
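For example, with the postgresql_query module (Ansible 2.8+, now part of the community.postgresql collection; it needs psycopg2 on the target host), an information_schema check might look roughly like this:
- name: Verifying Tables exist in Image (no psql client involved)
  postgresql_query:
    login_user: bob
    db: image
    query: SELECT 1 FROM information_schema.tables WHERE table_name = 'image'
  register: image_check
  failed_when: image_check.rowcount == 0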
Here's how some of my code looks
Here's an example from an automation module I wrote that does a lot with PostgreSQL.
Really, I should just suck it up and write a psql Ansible module that runs commands via psql, rather than using shell, which is awful and clumsy. For now, though, it works. I use connection strings that are assigned from variables or generated with set_fact to cut the mess down a bit and make connections more flexible; there's a sketch of that after the example.
- name: Wait for the target node to be ready to be joined
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_join_target_dsn}}' -qAtw 'SELECT bdr.bdr_node_join_wait_for_ready();'"
- name: Template pre-BDR-join SQL script
  template: src="{{bdr_pre_join_sql_template}}" dest="{{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql"
- name: Execute pre-BDR-join SQL script
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -f {{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql"
- name: Delete pre-BDR-join SQL script
  file: path="{{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql" state=absent
- name: bdr_group_join
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -c \"SELECT bdr.bdr_group_join(local_node_name := '{{inventory_hostname}}', node_external_dsn := '{{bdr_node_dsn}}', join_using_dsn := '{{bdr_join_target_dsn}}');\""
- name: Template post-BDR-join SQL script
  template: src="{{bdr_post_join_sql_template}}" dest="{{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql"
- name: Execute post-BDR-join SQL script
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -f {{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql"
- name: Delete post-BDR-join SQL script
  file: path="{{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql" state=absent
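The *_dsn variables above are ordinary libpq connection strings. A hedged sketch of how one might be assembled with set_fact (host, port and dbname here are illustrative, not my real values):
- name: Build a libpq DSN for the local node
  set_fact:
    bdr_node_dsn: "host={{ inventory_hostname }} port=5432 dbname=bdrdemo user=postgres"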