O() notation describes how many instructions will execute in the worst case. Suppose your program contains the following loop - 
for(int i = 0; i < n; i++)
{
  //some operation
}
This loop runs n times, so the time complexity is O(n).
Now suppose your program has two for loops, one after the other - 
for(int i = 0; i < n; i++)
{
  //some operation
}
for(int i = 0; i < n; i++)
{
  //some operation
}
So, the time complexity will be O(n+n) or O(2n). But in asymptotic (O()) notation, we generally ignore constant factors, so we simply say the time complexity is O(n).
Now, diving a bit deeper. Suppose you have the following program - 
for(int i = 0; i < n; i++)
{
  for(int j = 0; j < n; j++)
  {
    //some operation
  }
}
So, let's calculate how many instructions will run in total. For each of the n iterations of the outer loop, the inner loop runs n times, so it's n*n or n^2. The time complexity is therefore O(n^2). Similarly, if there are three or more nested loops, we simply calculate how many instructions will run in total.
Now, what is O(1)? You should understand by now - if the total number of instructions is 1 or any other constant (independent of n), it's O(1).
What about O(logn)? Take binary search for example. In binary search, we cut the search range in half at every step. So mathematically, the maximum number of steps binary search takes over an array of length n is logn. So our time complexity is O(logn).
These are some basic techniques. Obviously there are many more variations. But the gist is - count how many operations/instructions will run in the worst case.