In MapReduce, you have a Mapper and a Reducer; you can also have a Partitioner and a Combiner.
HDFS, Hadoop's distributed file system, partitions (or splits, you might say) a file into blocks of a configured block size, and these blocks are placed on different nodes. So, when a job is submitted to the MapReduce framework, the job is divided such that there is a Mapper for every input split (for now, let's say an input split is one such block). Since these blocks are distributed onto different nodes, the Mappers also run on different nodes.
In the Map stage, 
- The file is divided into records by the RecordReader; what counts as a record is controlled by the InputFormat that we choose. Every record is a key-value pair.
- The map() of our Mapper is run for every such record. The output of this step is again key-value pairs.
- The output of our Mapper is partitioned using the Partitioner that we provide, or the default HashPartitioner. Here, partitioning means deciding which key, along with its corresponding values, goes to which Reducer (if there is only one Reducer, partitioning makes no difference anyway).
- Optionally, you can also combine/shrink the output that is being sent to the reducers. You can use a Combiner to do that. Note that the framework does not guarantee the number of times a Combiner will be called (if at all); it is purely an optimization.
This is where the algorithm that works on your data is usually written. Since these map tasks run in parallel, they are a good fit for computation-intensive work. A word-count style Mapper is sketched below.
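A minimal sketch of such a Mapper, loosely following the classic word count example (the class name is mine; with TextInputFormat, the RecordReader hands map() one line per record):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // One record = one line of the input split:
        // key = byte offset in the file, value = the line itself.
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);   // emit (word, 1) key-value pairs
        }
    }
}
```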
After all the Mappers finish running on all nodes, the intermediate data (i.e. the data at the end of the Map stage) is copied to its corresponding Reducers.
In the Reduce stage, the reduce() of our Reducer is run on each record of data coming from the Mappers. Here a record consists of a key and all of its corresponding values, not necessarily just one value. This is where you generally run your summarization/aggregation logic.
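A matching Reducer sketch for the word count case (again, the class name is mine): reduce() receives one key together with all of its values and aggregates them.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // All counts for this word arrive together; sum them up.
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum)); // one (word, total) per key
    }
}
```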
When you write your MapReduce job, you usually think about what can be done on each record of data in both the Mapper and the Reducer. A MapReduce program can contain just a Mapper with map() implemented and a Reducer with reduce() implemented. This way you can focus on what you want to do with the data and not bother about parallelizing it. You don't have to worry about how the job is split; the framework does that for you. However, you will have to learn about it sooner or later.
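To illustrate, a hypothetical driver that wires the two sketches above together; beyond naming the Mapper, Reducer, optional Combiner and the input/output paths, splitting and scheduling are left to the framework:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class); // optional; may run zero or more times
        job.setReducerClass(WordCountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // args[0] = input directory, args[1] = output directory (example paths)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```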
I would suggest going through Apache's MapReduce tutorial or Yahoo's Hadoop tutorial for a good overview. I personally like Yahoo's explanation of Hadoop, but Apache's details are good and their explanation using the word count program is very nice and intuitive.
Also, for:

> I have a task, which can be separated into several partitions. For each partition, I need to run a computing intensive algorithm.
The Hadoop distributed file system has the data split across multiple nodes, and the MapReduce framework assigns a map task to every split. So, in Hadoop, the processing goes and executes where the data resides. You cannot define the number of map tasks to run; the data (i.e. the number of input splits) determines that. You can, however, specify/control the number of reduce tasks.
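Continuing the hypothetical driver sketch above, controlling the reducer count is a single call on the Job; there is no equivalent knob for the number of map tasks, which follows the input splits.

```java
// In the driver sketch above (before submitting the job):
job.setNumReduceTasks(4);   // 4 is just an arbitrary example value
```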
I hope I have comprehensively answered your question.