Tuesday, January 4, 2022

[SOLVED] Cache line alignment optimization not reducing cache miss

Issue

I took this piece of code, which demonstrates how cache-line alignment improves performance by reducing 'false sharing', from http://blog.kongfy.com/2016/10/cache-coherence-sequential-consistency-and-memory-barrier/

Code:

/*
 * Demo program for showing the drawback of "false sharing"
 *
 * Use it with perf!
 *
 * Compile: g++ -O2 -o false_share false_share.cpp -lpthread
 * Usage: perf stat -e cache-misses ./false_share <loopcount> <is_aligned>
 */

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

#define CACHE_ALIGN_SIZE 64
#define CACHE_ALIGNED __attribute__((aligned(CACHE_ALIGN_SIZE)))

int gLoopCount;

inline int64_t current_time()
{
  struct timeval t;
  if (gettimeofday(&t, NULL) < 0) {
    return -1; /* should not happen, see gettimeofday(2) */
  }
  return (static_cast<int64_t>(t.tv_sec) * static_cast<int64_t>(1000000) + static_cast<int64_t>(t.tv_usec));
}

struct value {
  int64_t val;
};
value data[2] CACHE_ALIGNED;

struct aligned_value {
  int64_t val;
} CACHE_ALIGNED;
aligned_value aligned_data[2] CACHE_ALIGNED;

void* worker1(void *arg)
{
  printf("worker1 start...\n");

  volatile int64_t &v = *static_cast<int64_t *>(arg);
  for (int i = 0; i < gLoopCount; ++i) {
    v += 1;
  }

  printf("worker1 exit...\n");
  return NULL;
}

// duplicate worker function for perf report
void* worker2(void *arg)
{
  printf("worker2 start...\n");

  volatile int64_t &v = *static_cast<int64_t *>(arg);
  for (int i = 0; i < gLoopCount; ++i) {
    v += 1;
  }

  printf("worker2 exit...\n");
  return NULL;
}

int main(int argc, char *argv[])
{
  pthread_t race_thread_1;
  pthread_t race_thread_2;

  bool is_aligned;

  /* Check arguments to program*/
  if(argc != 3) {
    fprintf(stderr, "USAGE: %s <loopcount> <is_aligned>\n", argv[0]);
    exit(1);
  }

  /* Parse argument */
  gLoopCount = atoi(argv[1]); /* Don't bother with format checking */
  is_aligned = atoi(argv[2]); /* Don't bother with format checking */

  printf("size of unaligned data : %zu\n", sizeof(data));
  printf("size of aligned data   : %zu\n", sizeof(aligned_data));

  void *val_0, *val_1;
  if (is_aligned) {
    val_0 = (void *)&aligned_data[0].val;
    val_1 = (void *)&aligned_data[1].val;
  } else {
    val_0 = (void *)&data[0].val;
    val_1 = (void *)&data[1].val;
  }

  int64_t start_time = current_time();

  /* Start the threads */
  pthread_create(&race_thread_1, NULL, (void* (*)(void*))worker1, val_0);
  pthread_create(&race_thread_2, NULL, (void* (*)(void*))worker2, val_1);

  /* Wait for the threads to end */
  pthread_join(race_thread_1, NULL);
  pthread_join(race_thread_2, NULL);

  int64_t end_time = current_time();

  printf("time : %lld us\n", static_cast<long long>(end_time - start_time));

  return 0;
}
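(As a side note, not part of the original blog code: the GCC-specific `__attribute__((aligned(...)))` used above can also be written with the standard C++11 `alignas` specifier. A minimal sketch, assuming the same 64-byte cache line:)

```cpp
#include <cstdint>

// Assumption: 64-byte cache lines, as in the original program.
constexpr std::size_t kCacheLine = 64;

// Standard C++11 spelling of __attribute__((aligned(64))).
struct alignas(kCacheLine) AlignedValue {
    int64_t val;
};

// Each array element now occupies a full cache line, so two threads
// writing to adjacent elements never touch the same line.
static_assert(sizeof(AlignedValue) == kCacheLine,
              "element is padded to a full cache line");
static_assert(alignof(AlignedValue) == kCacheLine,
              "element starts on a cache-line boundary");
```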

Expected perf result:

[jingyan.kfy@OceanBase224006 work]$ perf stat -e cache-misses ./false_share 100000000 0
size of unaligned data : 16
size of aligned data   : 128
worker2 start...
worker1 start...
worker1 exit...
worker2 exit...
time : 452451 us

 Performance counter stats for './false_share 100000000 0':

         3,105,245 cache-misses

       0.455033803 seconds time elapsed

[jingyan.kfy@OceanBase224006 work]$ perf stat -e cache-misses ./false_share 100000000 1
size of unaligned data : 16
size of aligned data   : 128
worker1 start...
worker2 start...
worker1 exit...
worker2 exit...
time : 326994 us

 Performance counter stats for './false_share 100000000 1':

            27,735 cache-misses

       0.329737667 seconds time elapsed

However, when I ran the code myself the run times were very close, and the cache miss count is even lower when NOT ALIGNED:

My result:

$ perf stat -e cache-misses ./false_share 100000000 0
size of unaligned data : 16
size of aligned data   : 128
worker1 start...
worker2 start...
worker2 exit...
worker1 exit...
time : 169465 us

 Performance counter stats for './false_share 100000000 0':

            37,698      cache-misses:u                                              

       0.171625603 seconds time elapsed

       0.334919000 seconds user
       0.001988000 seconds sys


$ perf stat -e cache-misses ./false_share 100000000 1
size of unaligned data : 16
size of aligned data   : 128
worker2 start...
worker1 start...
worker2 exit...
worker1 exit...
time : 118798 us

 Performance counter stats for './false_share 100000000 1':

            38,375      cache-misses:u                                              

       0.121072715 seconds time elapsed

       0.230043000 seconds user
       0.001973000 seconds sys

How should I understand this inconsistency?


Solution

It's hard to help since the blog you refer to is in Chinese. Still, I noticed that its first figure seems to show a multi-socket architecture, so I ran a few experiments.

a) My PC, Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz, single socket, two cores, two threads per core:

0:

time : 195389 us

 Performance counter stats for './a.out 100000000 0':

             8 980      cache-misses:u                                              

       0,198584628 seconds time elapsed

       0,391694000 seconds user
       0,000000000 seconds sys

and 1:

time : 191413 us

 Performance counter stats for './a.out 100000000 1':

             9 020      cache-misses:u                                              

       0,192953853 seconds time elapsed

       0,378434000 seconds user
       0,000000000 seconds sys

Not much difference.

b) Now a 2-socket workstation

Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz

0:

time : 454679 us

 Performance counter stats for './a.out 100000000 0':

         5,644,133      cache-misses                                                

       0.456665966 seconds time elapsed

       0.738173000 seconds user

1:

time : 346871 us

 Performance counter stats for './a.out 100000000 1':

            42,217      cache-misses                                                

       0.348814583 seconds time elapsed

       0.539676000 seconds user
       0.000000000 seconds sys

The difference is huge.


One final remark. You write:

the cache miss count is even lower when NOT ALIGNED

No, it isn't. Your processor is running various tasks besides your program. Also, you're running two threads whose memory accesses may interleave differently from run to run. All of this influences cache utilization. You'd need to repeat your measurements several times and compare. Personally, when I see performance results differing by less than 10%, I consider them indistinguishable.
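(An addition of mine, not part of the original answer: `perf stat -r 10 ...` repeats a run ten times and reports the spread. If you time things in code, comparing medians is far less sensitive to outliers than comparing single runs. A minimal helper sketch:)

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: collect several timing samples and report the median,
// which is much less sensitive to one noisy run than a single sample.
int64_t median_us(std::vector<int64_t> samples)
{
    std::sort(samples.begin(), samples.end());
    return samples[samples.size() / 2];
}
```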


Update

I've also experimented with your code extended to 3 threads, so that some of them must certainly be running on different cores and hence share only the L3 cache.
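(Side note, not from the original answer: on Linux you can also force this placement explicitly with the GNU extension `pthread_setaffinity_np`, rather than relying on the scheduler. A sketch, assuming a Linux system compiled with g++:)

```cpp
#include <pthread.h>
#include <sched.h>

// Sketch (Linux-only, GNU extension): pin a thread to a given CPU,
// so that two worker threads provably run on different cores.
int pin_to_cpu(pthread_t t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(t, sizeof(set), &set);
}
```

Calling `pin_to_cpu(race_thread_1, 0)` and `pin_to_cpu(race_thread_2, 1)` right after `pthread_create` would guarantee the two writers sit on separate cores.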

I looked at How to catch the L3-cache hits and misses by perf tool in Linux and came up with this command:

 perf stat -e cache-misses,cache-references,LLC-loads,LLC-stores,L1-dcache-load-misses,L1-dcache-prefetch-misses,L1-dcache-store-misses ./a.out 100000000 0

0:

time : 214253 us

 Performance counter stats for './a.out 100000000 0':

             4 765      cache-misses:u            #    0,018 % of all cache refs      (57,39%)
        25 992 887      cache-references:u                                            (57,56%)
        17 430 736      LLC-loads:u                                                   (57,56%)
         8 591 378      LLC-stores:u                                                  (57,56%)
        28 110 342      L1-dcache-load-misses:u                                       (57,40%)
        14 661 378      L1-dcache-prefetch-misses:u                                     (57,80%)
            32 269      L1-dcache-store-misses:u                                      (57,49%)

       0,215484922 seconds time elapsed

       0,627426000 seconds user
       0,006635000 seconds sys

1:

time : 194253 us

 Performance counter stats for './a.out 100000000 1':

             4 509      cache-misses:u            #   30,715 % of all cache refs      (57,15%)
            14 680      cache-references:u                                            (57,45%)
             7 954      LLC-loads:u                                                   (57,49%)
             1 565      LLC-stores:u                                                  (57,92%)
             4 442      L1-dcache-load-misses:u                                       (57,91%)
               836      L1-dcache-prefetch-misses:u                                     (57,02%)
               984      L1-dcache-store-misses:u                                      (56,85%)

       0,195145645 seconds time elapsed

       0,569986000 seconds user
       0,000000000 seconds sys

Thus:

  • the aligned (3-thread) version runs systematically (a bit) faster than the unaligned one (I repeated the test several times), even on a single-socket machine.
  • it's not quite clear what the "cache-misses" event actually reports
  • there's a huge (numerical) penalty for false sharing in the L1 cache, in the LLC, and in the number of cache references.
  • remember that these are hardware-based statistics: if other processes are running, they add their contributions to these results


Answered By - zkoza