My first suggestion would be to separate memory management from I/O.  First create your matrix, and once you've verified that succeeded, then read your input.  
My second suggestion would be to use regular subscript notation (m[i][j]) instead of explicit pointer dereferences (*(*(m + i) + j)).  Easier to read, harder to get wrong.  
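Concretely, the caller side might look something like this - createMatrix is the allocation function shown below, readMatrix is just a stand-in name for whatever your input routine ends up being, and length and f come from wherever you currently get them:
int **m = NULL;

createMatrix( &m, length );
if ( !m )
{
  // allocation failed - report the error and bail out
  // before attempting any I/O
  return EXIT_FAILURE;
}

readMatrix( m, length, f );  // safe to read input now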
Finally, when you allocate memory this way, you have to allocate memory for each row individually:
void createMatrix( int ***m, size_t length )
{
  *m = malloc( sizeof **m * length );
  if ( !*m )
  {
    // memory allocation error, bail out here
    return;
  }
  size_t i;
  for ( i = 0; i < length; i++ )
  {
    /**
     * Postfix [] has higher precedence than unary *, so *m[i] will be
     * parsed as *(m[i]) - it will index into m and dereference the result.
     * That's not what we want here - we want to index into what m 
     * *points to*, so we have to explicitly group the unary * operator
     * with m and index into the result.
     */
    (*m)[i] = malloc( sizeof *(*m)[i] * length );
    if ( !(*m)[i] )                              
      break;
  }
  if ( i < length )
  {
    // memory allocation failed midway through allocating the rows;
    // free all previously allocated memory before bailing out
    while ( i-- )
      free( (*m)[i] );
    free( *m );
    *m = NULL;
  }
}
Assuming you called it as
int **m;
createMatrix( &m, 3 ); // note & operator on m
you wind up with something that looks like this:
     +---+                                                  +---+
m[0] |   | ---------------------------------------> m[0][0] |   |
     +---+                                +---+             +---+
m[1] |   | ---------------------> m[1][0] |   |     m[0][1] |   |
     +---+               +---+            +---+             +---+
m[2] |   | ----> m[2][0] |   |    m[1][1] |   |     m[0][2] |   |
     +---+               +---+            +---+             +---+
      ...        m[2][1] |   |    m[1][2] |   |              ...
                         +---+            +---+
                 m[2][2] |   |             ...
                         +---+
                          ...
This is not a true 2D array - the rows are not contiguous.  The element immediately following m[0][2] in memory is not necessarily m[1][0].  You can index it as though it were a 2D array, but that's about it.
Sometimes that's not the wrong answer.  Depending on how fragmented your heap is, a single NxM allocation request may fail, but N separate allocations of M elements each may succeed.  If your algorithm doesn't rely on all elements being contiguous, then this will work, although it will likely be slower than using a true NxM array.  
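When you're done with a matrix allocated this way, each row has to be freed before the array of row pointers itself.  A sketch of the matching cleanup function (destroyMatrix is just a name I made up, use whatever fits your code):
void destroyMatrix( int ***m, size_t length )
{
  if ( !*m )
    return;

  size_t i;
  for ( i = 0; i < length; i++ )
    free( (*m)[i] );   // free each row first

  free( *m );          // then free the array of row pointers
  *m = NULL;           // leave the caller with a clean pointer
}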
If you want a true, contiguous 2D array whose dimensions are not known until runtime, then you will need a compiler that supports variable-length arrays (VLAs), and you would allocate it as follows:
size_t length = get_length_from( f );
int (*m)[length] = malloc( sizeof *m * length ); // sizeof *m == sizeof (int [length])
m is a pointer to a length-element array of int, and we're allocating enough memory for length such arrays.  Assuming length is 3 again, you wind up with something that looks like this:
        +---+
m[0][0] |   |
        +---+ 
m[0][1] |   |
        +---+ 
m[0][2] |   |
        +---+ 
m[1][0] |   |
        +---+ 
m[1][1] |   |
        +---+ 
m[1][2] |   |
        +---+ 
m[2][0] |   |
        +---+ 
m[2][1] |   |
        +---+ 
m[2][2] |   |
        +---+      
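Since that's a single allocation, you can use plain m[i][j] subscripting and release the whole thing with one call to free - no per-row bookkeeping.  A quick sketch:
if ( m )
{
  size_t i, j;
  for ( i = 0; i < length; i++ )
    for ( j = 0; j < length; j++ )
      m[i][j] = 0;   // regular 2D subscripting works

  free( m );         // one free releases the entire matrix
}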
If VLAs are not available (compilers for versions of the language earlier than C99, or a C2011 implementation where __STDC_NO_VLA__ is defined) and you want all your array elements to be contiguous, then you'll have to allocate it as a 1D array:
size_t length = 3;
int *m = malloc( sizeof *m * length * length );
You can set up an additional array of pointers to pretend m is a 2D array:
int *q[] = { &m[0], &m[3], &m[6] };
So
q[0][0] == m[0];
q[0][1] == m[1];
q[0][2] == m[2];
q[1][0] == m[3];
q[1][1] == m[4];
...
etc.
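If length isn't a compile-time constant, you can build that array of row pointers with a loop instead of writing the initializers out by hand (or skip q entirely and index m[i * length + j] directly).  A sketch:
int **q = malloc( sizeof *q * length );
if ( q )
{
  size_t i;
  for ( i = 0; i < length; i++ )
    q[i] = &m[i * length];   // q[i] points to the start of "row" i
}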