How to sum up file sizes in Bytes, KiB, MiB and GiB from an ls-like output log

Problem description

I have a pre-computed ls-like output (it does not come from the actual ls command) that I cannot modify or recompute. It looks like this:

2016-10-14 14:52:09    0 Bytes folder/
2020-04-18 05:19:04  201 Bytes folder/file1.txt
2019-10-16 00:32:44  201 Bytes folder/file2.txt
2019-08-26 06:29:46  201 Bytes folder/file3.txt
2020-07-08 16:13:56  411 Bytes folder/file4.txt
2020-04-18 03:03:34  201 Bytes folder/file5.txt
2019-10-16 08:27:11    1.1 KiB folder/file6.txt
2019-10-16 10:13:52  201 Bytes folder/file7.txt
2019-10-16 08:44:35  920 Bytes folder/file8.txt
2019-02-17 14:43:10  590 Bytes folder/file9.txt

The log can contain at least the units GiB, MiB, KiB and Bytes. Among the possible values there can be zero, or values without commas, for each prefix:

0   Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB

Here is a similar approach I found:

awk 'BEGIN{ pref[1]="K";  pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x  > 1024 ) { x = (x + 1023)/1024; y++; }  printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while(  total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total:  %g%s\n",int(total*10)/10,pref[y]); }'

but it does not work properly in my case, since the sizes in my log already carry a unit (Bytes, KiB, ...) rather than being plain byte counts:

$ head -n 10 files_sizes.log | awk '{print $3,$4}' | sort | awk 'BEGIN{ pref[1]="K";  pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x  > 1024 ) { x = (x + 1023)/1024; y++; }  printf("%g%s\t%s\n",pref[y]); }'


0K  Bytes
1.1K    KiB
201K    Bytes
201K    Bytes
201K    Bytes
201K    Bytes
201K    Bytes
411K    Bytes
590K    Bytes
920K    Bytes
Total:  3.8M

The output computes the sizes incorrectly. The output I expect is the correct sum of the sizes in the input log file:

0 Bytes
201 Bytes
201 Bytes
201 Bytes
411 Bytes
201 Bytes
1.1 KiB
201 Bytes
920 Bytes
590 Bytes
Total:  3.95742 KiB

Note

The correct sum of the Bytes values is 201 * 5 + 411 + 590 + 920 = 2926 Bytes, i.e. 2.857422 KiB, so adding the KiB value the total is 2.857422 + 1.1 = 3.957422 KiB ≈ 4052.4 Bytes.
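A quick sanity check of that expected total with bc (just redoing the arithmetic above):

echo '(201*5 + 411 + 590 + 920)/1024 + 1.1' | bc -l
# 3.95742187500000000000   (KiB)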

[UPDATE]

I have updated the comparison of the results of the KamilCuk, Ted Lyngmo and Walter A solutions, which give almost identical values:

$ head -n 10 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
117538 Bytes
$ head -n 1000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
1225857 Bytes
$ head -n 10000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
12087518 Bytes
$ head -n 1000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
77238840381 Bytes
$ head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
2306569381835 Bytes

$ head -n 10 files_sizes.log | ./count_files.sh
3.957422 KiB
$ head -n 1000 files_sizes.log | ./count_files.sh
1.168946 MiB
$ head -n 10000 files_sizes.log | ./count_files.sh
11.526325 MiB
$ head -n 1000000 files_sizes.log | ./count_files.sh
71.934024 GiB
$ head -n 100000000 files_sizes.log | ./count_files.sh
2.097807 TiB

(head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //' | tr -d '\n' ; echo) | bc
2306563692898.8

where

2.097807 TiB = 2.3065631893 TB ≈ 2306569381835 Bytes

While computing the totals, I also compared the speed of all three solutions:

$ time head -n 100000000 files_sizes.log | ./count_files.sh
2.097807 TiB

real    2m7.956s
user    2m10.023s
sys 0m1.696s

$ time head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/ //; s/Bytes//; s/B//' | gnumfmt --from=auto | awk '{s+=$1}END{print s " Bytes"}'
2306569381835 Bytes

real    4m12.896s
user    5m45.750s
sys 0m4.026s

$ time (head -n 100000000 files_sizes.log | tr -s ' ' | cut -d' ' -f3,4 | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //' | tr -d '\n' ; echo) | bc
2306563692898.8

real    4m31.249s
user    6m40.072s
sys 0m4.252s

Solutions

Use numfmt to convert the numbers.

cat <<EOF |
2016-10-14 14:52:09    0 Bytes folder/
2020-04-18 05:19:04  201 Bytes folder/file1.txt
2019-10-16 00:32:44  201 Bytes folder/file2.txt
2019-08-26 06:29:46  201 Bytes folder/file3.txt
2020-07-08 16:13:56  411 Bytes folder/file4.txt
2020-04-18 03:03:34  201 Bytes folder/file5.txt
2019-10-16 08:27:11    1.1 KiB folder/file6.txt
2019-10-16 10:13:52  201 Bytes folder/file7.txt
2019-10-16 08:44:35  920 Bytes folder/file8.txt
2019-02-17 14:43:10  590 Bytes folder/file9.txt
2019-02-17 14:43:10  3.9 KiB  folder/file9.txt
2019-02-17 14:43:10  2.7 MiB folder/file9.txt
2019-02-17 14:43:10  1.3 GiB folder/file9.txt
EOF
# extract 3rd and 4th column
tr -s ' ' | cut -d' ' -f3,4 |
# remove the space, remove "Bytes", remove "B"
sed 's/ //; s/Bytes//; s/B//' |
# convert to numbers
numfmt --from=auto |
# sum
awk '{s+=$1}END{print s}'

This outputs the sum.
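The reason the sed step strips the trailing "B" is that numfmt --from=auto understands suffixes such as Ki and Mi (and the SI K, M, G), but not "KiB" or "Bytes". A minimal sketch of the conversion step on its own (the rounded values are what GNU numfmt's default from-zero rounding produces):

printf '201\n1.1Ki\n2.7Mi\n' | numfmt --from=auto
# 201
# 1127        (1.1 * 1024 = 1126.4, rounded away from zero)
# 2831156     (2.7 * 1024 * 1024 = 2831155.2, rounded away from zero)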


Alternatively, for the input above:

2016-10-14 14:52:09    0 Bytes folder/
2020-04-18 05:19:04  201 Bytes folder/file1.txt
2019-10-16 00:32:44  201 Bytes folder/file2.txt
2019-08-26 06:29:46  201 Bytes folder/file3.txt
2020-07-08 16:13:56  411 Bytes folder/file4.txt
2020-04-18 03:03:34  201 Bytes folder/file5.txt
2019-10-16 08:27:11    1.1 KiB folder/file6.txt
2019-10-16 10:13:52  201 Bytes folder/file7.txt
2019-10-16 08:44:35  920 Bytes folder/file8.txt
2019-02-17 14:43:10  590 Bytes folder/file9.txt

You can use a lookup table of units to decode the sizes:

BEGIN {
    unit["Bytes"] = 1;

    unit["kB"] = 10**3;
    unit["MB"] = 10**6;
    unit["GB"] = 10**9;
    unit["TB"] = 10**12;
    unit["PB"] = 10**15;
    unit["EB"] = 10**18;
    unit["ZB"] = 10**21;
    unit["YB"] = 10**24;

    unit["KB"] = 1024;
    unit["KiB"] = 1024**1;
    unit["MiB"] = 1024**2;
    unit["GiB"] = 1024**3;
    unit["TiB"] = 1024**4;
    unit["PiB"] = 1024**5;
    unit["EiB"] = 1024**6;
    unit["ZiB"] = 1024**7;
    unit["YiB"] = 1024**8;
}

Then sum them up in the main rule, which runs for every input line:

{
    if($4 in unit) total += $3 * unit[$4];
    else printf("ERROR: Can't decode unit at line %d: %s\n",NR,$0);
}

and print the result at the end:

END {
    binaryunits[0] = "Bytes";
    binaryunits[1] = "KiB";
    binaryunits[2] = "MiB";
    binaryunits[3] = "GiB";
    binaryunits[4] = "TiB";
    binaryunits[5] = "PiB";
    binaryunits[6] = "EiB";
    binaryunits[7] = "ZiB";
    binaryunits[8] = "YiB";
    for(i = 8;; --i) {
         if(total >= 1024**i || i == 0) {
            printf("%.3f %s\n",total/(1024**i),binaryunits[i]);
            break;
        }
    }
}

Output:

3.957 KiB

Note that you can add a shebang at the beginning of the awk script to make it a standalone program, instead of embedding it in a shell script:

#!/usr/bin/awk -f
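For example (the file name sum_sizes.awk is only an assumption for illustration), putting the shebang together with the BEGIN block, the main rule and the END block into one file lets you run it directly:

chmod +x sum_sizes.awk
./sum_sizes.awk files_sizes.log                  # whole log
head -n 10 files_sizes.log | ./sum_sizes.awk     # or a subset, as in the question
# 3.957 KiB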

Another option is to parse the input first and then send it to bc:

echo "0   Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB" |
   sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/; 
        s/GiB/* 1024 * 1024 * 1024/; s/$/ + /'  |
   tr -d '\n' | 
   sed 's/+ $/\n/' |
   bc
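To see what bc actually evaluates, you can leave out the final bc stage: the sed and tr steps turn the sample input into a single arithmetic expression, roughly (a sketch; the exact whitespace differs):

0 + 3.9 * 1024 + 201 + 2.0 * 1024 + 2.7 * 1024 * 1024 + 1.3 * 1024 * 1024 * 1024

which bc evaluates to 1398701769.0 Bytes (the 1.3 GiB entry dominates the total).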

If your sed does not support \n, you can try replacing the '\n' with a literal newline, e.g.

sed 's/+ $/
/'

or add an echo after the parsing (and move part of the last sed into the first one, so that it removes the trailing +):

(echo "0   Bytes
3.9 KiB
201 Bytes
2.0 KiB
2.7 MiB
1.3 GiB" | sed 's/Bytes//; s/KiB/* 1024/; s/MiB/* 1024 * 1024/;
s/GiB/* 1024 * 1024 * 1024/; s/$/ + /; $s/+ //'  | tr -d '\n' ; echo) | bc

Making use of numfmt, as @KamilCuk suggested, is a nice idea. Building on his answer, here is an alternative command that uses a single awk invocation and wraps numfmt in a two-way pipe. It requires a recent version of GNU awk (5.0.1 works; 4.1.4 may work but has not been tested).

LC_NUMERIC=C gawk '
    BEGIN {
        conv = "numfmt --from=auto"
        PROCINFO[conv,"pty"] = 1
    }
    {
        sub(/B.*/,"",$4)
        print $3 $4 |& conv
        conv |& getline val
        sum += val
    }
    END { print sum }
' input

Notes

  • LC_NUMERIC=C (bash/ksh/zsh) is there for portability on systems that use a non-English locale.
  • PROCINFO[conv,"pty"] = 1 makes sure the output of numfmt is flushed on every line (to avoid a deadlock).
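If you prefer the total in a binary unit rather than a raw byte count, one possibility (an addition, not part of the answer above) is to pipe the printed sum through numfmt in the opposite direction, e.g. with the ~4 KiB total from the 10-line sample:

echo 4052 | numfmt --to=iec-i --suffix=B
# 4.0KiB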

Finally, let me offer you a better way to get an ls-style listing: do not use ls as a command, use it as a switch of find:

find . -maxdepth 1 -ls

This returns uniform file sizes, as documented in find's man page, which makes the computation much easier.
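Since find -ls reports the size as a plain byte count, summing then becomes a one-liner. A sketch, assuming GNU find's ls -dils-style output where the size is the 7th field (add -type f if directories should be skipped):

find . -maxdepth 1 -ls |
    awk '{ total += $7 }                            # field 7 is the size in bytes
         END { printf "%.0f Bytes\n", total }'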

Good luck!