I wrote a quick and dirty test to check the performance of Go vs C# in the area of concurrent lookup access and was surprised by the results.
It's a very trivial example and I'm no Go expert, but the test simply performs 1,000,000 lock/check/add/unlock operations on a map. It's only single-threaded because I'm checking just these operations:
package main
import (
    "fmt"
    "sync"
    "time"
)
var mu sync.Mutex
func main() {
    cache := make(map[int]int, 1000000)
    start := time.Now()
    for i := 0; i < 1000000; i++ {
        mu.Lock()
        if _, ok := cache[i]; !ok {
            cache[i] = i
        }
        mu.Unlock()
    }
    end := time.Since(start)
    fmt.Println(end)
    var sum int64
    for _, v := range cache {
        sum += int64(v)
    }
    fmt.Println(sum)
}
And the same thing in C# (via LINQPad):
void Main()
{
    var cache = new Dictionary<int, int>(1000000);
    var sw = Stopwatch.StartNew();
    for (var i = 0; i < 1000000; i++)
    {
        lock (cache)
        {
            int d;
            if (cache.TryGetValue(i, out d) == false)
            {
                cache.Add(i, i);
            }
        }
    }
    $"{sw.ElapsedMilliseconds:N0}ms".Dump();
    var sum = 0L;
    foreach (var kvp in cache)
    {
        sum += kvp.Value;
    }
    sum.Dump();
}
I sum the elements of both collections to make sure they match (both come to 499,999,500,000, i.e. the sum of 0 through 999,999) and print the time taken. Here are the results:
- C#: 56ms
- Go: 327ms
I've checked, and it isn't possible to initialise the size of a map, only to hint its capacity, so I'm wondering if there's anything I could do to improve the performance of the Go map?
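To illustrate what I mean about initialisation (a minimal sketch of my own, not part of the benchmark): the second argument to make only pre-allocates space, so the map still starts empty and every iteration of the loop pays for a lookup plus an insert.

package main

import "fmt"

func main() {
    m := make(map[int]int, 1000000) // pre-allocates room for ~1,000,000 entries
    fmt.Println(len(m))             // prints 0: the hint sets capacity, not size
}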
It takes Go 32ms to perform 1,000,000 lock/unlock operations without the map access.
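That baseline came from the same loop with the map access removed, roughly like this (a sketch reconstructed from what I ran, not the exact code):

package main

import (
    "fmt"
    "sync"
    "time"
)

var mu sync.Mutex

func main() {
    start := time.Now()
    for i := 0; i < 1000000; i++ {
        mu.Lock() // same mutex, but nothing inside the critical section
        mu.Unlock()
    }
    fmt.Println(time.Since(start)) // ~32ms on my machine
}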