Problem description
Edit: pTimes is now reset to zero between each benchmark, but the results got even stranger!
The end goal is to experiment with my own custom memory-management scheme, so I wrote a simple benchmark of the existing malloc() in Visual Studio Community 2019 on Windows 10. Out of interest, I benchmarked both CPU time and wall time. I test malloc by allocating one large amount of memory split across many chunks, then freeing each chunk individually without ever touching the memory. See here:
void malloc_distr256(int nMemsize) {
	void* pFreeList[256];
	for (int i = 0; i < 256; ++i) pFreeList[i] = malloc(nMemsize >> 8);
	for (int i = 0; i < 256; ++i) free(pFreeList[i]);
}
void malloc_distr64(int nMemsize) {
	void* pFreeList[64];
	for (int i = 0; i < 64; ++i) pFreeList[i] = malloc(nMemsize >> 6);
	for (int i = 0; i < 64; ++i) free(pFreeList[i]);
}
void malloc_distr0(int nMemsize) {
void* pMem = malloc(nMemsize);
free(pMem);
}
I benchmarked these functions with the following code — "BenchTimes" is just a struct holding two doubles, one for CPU time and one for wall time:
inline double cputime() {
FILETIME lpCreationTime;
FILETIME lpExitTime;
FILETIME lpKernelTime;
FILETIME lpUserTime;
if (GetProcessTimes(GetCurrentProcess(),&lpCreationTime,&lpExitTime,&lpKernelTime,&lpUserTime)) {
double dUnits = (double)(lpUserTime.dwLowDateTime | (long long)lpUserTime.dwHighDateTime << 32);
return dUnits * 0.1;
}
else return 0xFFF0000000000000;
}
inline double walltime() {
LARGE_INTEGER lnFreq,lnTime;
if (QueryPerformanceFrequency(&lnFreq)) if (QueryPerformanceCounter(&lnTime))
return 1000000.0 * (double)lnTime.QuadPart / (double)lnFreq.QuadPart;
//multiply by 1,000,000 to convert seconds to microseconds
//because the cpu time function above also reports microseconds
return 0.0;
}
void bench(void (pfnFunc)(int),int nMemsize,int nIters,int nReps,BenchTimes* pTimes) {
pTimes->dcpuTime = 0.0;
pTimes->dWallTime = 0.0;
for (volatile int r = 0; r < nReps; ++r) {
double dcpuStart = cputime();
double dWallStart = walltime();
for (volatile int i = 0; i < nIters; ++i) pfnFunc(nMemsize);
double dcpuEnd = cputime();
double dWallEnd = walltime();
double dcpuDiff = dcpuEnd - dcpuStart;
double dWallDiff = dWallEnd - dWallStart;
pTimes->dcpuTime += dcpuDiff;
pTimes->dWallTime += dWallDiff;
}
}
These are the times measured on my machine (i5-9400F), in seconds.
I'm curious about the huge performance differences, and about the gap between wall time and CPU time!
BenchTimes sTimes;
bench(malloc_distr256,1 << 20,100,1000,&sTimes);
fprintf(stdout,"Malloc alloc/free bench allocated %lf megabytes,distributed over 256 chunks\n",(double)(1 << 20) / 1000000);
fprintf(stdout,"Malloc alloc/free bench returned:\nWalltime - total: %lf\ncpu Time - total: %lf\n",sTimes.dWallTime / 1000000,sTimes.dcpuTime / 1000000);
bench(malloc_distr64,1 << 20,100,1000,&sTimes);
fprintf(stdout,"\nMalloc alloc/free bench allocated %lf megabytes,distributed over 64 chunks\n",(double)(1 << 20) / 1000000);
fprintf(stdout,"Malloc alloc/free bench returned:\nWalltime - total: %lf\ncpu Time - total: %lf\n",sTimes.dWallTime / 1000000,sTimes.dcpuTime / 1000000);
bench(malloc_distr0,1 << 20,100,1000,&sTimes);
fprintf(stdout,"\nMalloc alloc/free bench allocated %lf megabytes,distributed over no chunks\n",(double)(1 << 20) / 1000000);
fprintf(stdout,"Malloc alloc/free bench returned:\nWalltime - total: %lf\ncpu Time - total: %lf\n",sTimes.dWallTime / 1000000,sTimes.dcpuTime / 1000000);
system("pause");
Solution
malloc is implemented on top of HeapAlloc, which in turn is implemented in a system function called RtlAllocateHeap. That function manages the heap: it allocates pages of system memory via VirtualAlloc[Ex] (or its equivalent) and serves smaller allocations from within those pages.
For large allocations, the VirtualAlloc[Ex] equivalent is called on every allocation; smaller allocations only trigger it occasionally.
VirtualAlloc[Ex] is implemented with the kernel call NtAllocateVirtualMemory, and most of the time spent there is not counted in lpUserTime.
QueryPerformanceCounter, on the other hand, reports the honest total elapsed time.