Linux tip: the fastest way to delete a million files

The original benchmark

Yesterday I came across a very interesting way to delete a huge number of files in a directory. The method comes from Zhenyu Lee's answer at http://www.quora.com/How-can-someone-rapidly-delete-400-000-files.

Instead of reaching for find and xargs, he made creative use of rsync's power, using rsync --delete to overwrite the target directory with an empty one. I then ran an experiment comparing the various approaches, and to my surprise Lee's method was far faster than the others. My benchmark follows.
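
For reference, a minimal sketch of the trick (the directory names here are placeholders, not taken from the original post):

mkdir empty
# sync "nothing" over the target with --delete; target_dir/ itself survives, its contents do not
rsync -a --delete empty/ target_dir/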

Environment:

  • CPU: Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
  • MEM: 4G
  • HD: ST3250318AS: 250G/7200RPM
Method                                       # of Files   Deletion Time
rsync -a --delete empty/ s1/                 1000000      6m50.638s
find s2/ -type f -delete                     1000000      87m38.826s
find s3/ -type f | xargs -L 100 rm           1000000      83m36.851s
find s4/ -type f | xargs -L 100 -P 100 rm    1000000      78m4.658s
rm -rf s5                                    1000000      80m33.434s

With --delete combined with --exclude, you can selectively delete only the files that match certain patterns. This approach is also ideal when you want to keep the directory itself around for some other purpose.
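
A hedged sketch of the selective variant (the pattern and directory names are only illustrative): by default, --delete will not remove files that match an --exclude pattern on the receiving side, so everything except the excluded files is wiped out:

mkdir empty
# delete everything under target_dir/ except files matching *.conf
rsync -a --delete --exclude='*.conf' empty/ target_dir/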

The benchmark, redone

A few days ago, Keith Winstein commented on that Quora thread that my earlier benchmark could not be reproduced because the operations took far too long. To clarify: those numbers were probably inflated because my machine had done a great deal of work over the past few years, and there may have been some filesystem errors involved, though I can't be sure. In any case, I have now gotten hold of a newer machine and redone the benchmark, this time with /usr/bin/time, which reports much more detailed information. The new results follow.

(Each run deletes 1,000,000 files, every one of them zero bytes in size.)
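
The post does not show how the test directories were populated; one way to create a million empty files (an assumption of mine, not the author's exact setup) would be:

mkdir a && cd a
# 1,000,000 zero-byte files; piping through xargs keeps each touch invocation within ARG_MAX
seq 1 1000000 | xargs touch
cd ..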

Command                                      Elapsed (s)   System Time (s)   %CPU   cs (Vol/Invol)
rsync -a --delete empty/ a/                  12.42         10.60             95%    106/22
find b/ -type f -delete                      28.51         14.46             52%    14849/11
find c/ -type f | xargs -L 100 rm            41.69         20.60             54%    37048/15074
find d/ -type f | xargs -L 100 -P 100 rm     34.32         27.82             89%    929897/21720
rm -rf f                                     31.29         14.80             47%    15134/11
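
In the raw output below, methods 3 and 4 are timed through a ./delete.sh wrapper, presumably so that /usr/bin/time could cover the whole find | xargs pipeline. The script itself is not shown; for method 3 it likely contained no more than the following (method 4 would add -P 100 to xargs):

#!/bin/sh
# hypothetical delete.sh wrapper for method 3
find c/ -type f | xargs -L 100 rm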

Raw output

# method 1
~/test $ /usr/bin/time -v  rsync -a --delete empty/ a/
        Command being timed: "rsync -a --delete empty/ a/"
        User time (seconds): 1.31
        System time (seconds): 10.60
        Percent of CPU this job got: 95%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:12.42
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 24378
        Voluntary context switches: 106
        Involuntary context switches: 22
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

# method 2
        Command being timed: "find b/ -type f -delete"
        User time (seconds): 0.41
        System time (seconds): 14.46
        Percent of CPU this job got: 52%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:28.51
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 11749
        Voluntary context switches: 14849
        Involuntary context switches: 11
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

# method 3
find c/ -type f | xargs -L 100 rm
~/test $ /usr/bin/time -v ./delete.sh
        Command being timed: "./delete.sh"
        User time (seconds): 2.06
        System time (seconds): 20.60
        Percent of CPU this job got: 54%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:41.69
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 1764225
        Voluntary context switches: 37048
        Involuntary context switches: 15074
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

# method 4
find d/ -type f | xargs -L 100 -P 100 rm
~/test $ /usr/bin/time -v ./delete.sh
        Command being timed: "./delete.sh"
        User time (seconds): 2.86
        System time (seconds): 27.82
        Percent of CPU this job got: 89%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:34.32
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 1764278
        Voluntary context switches: 929897
        Involuntary context switches: 21720
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

# method 5
~/test $ /usr/bin/time -v rm -rf f
        Command being timed: "rm -rf f"
        User time (seconds): 0.20
        System time (seconds): 14.80
        Percent of CPU this job got: 47%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:31.29
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 0
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 176
        Voluntary context switches: 15134
        Involuntary context switches: 11
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

I am genuinely curious why Lee's method is so much faster than the others, even faster than rm -rf. If anyone knows, please leave a comment below. Many thanks.

[Original English article: A faster way to delete millions of files in a directory]

12 Responses to "Linux tip: the fastest way to delete a million files"

  1. alvin says:

    test/
    ├── 1
    ├── 2
    └── 3

    alvin @wheezy ~ $ strace rm -r test/ 2>&1 | grep -i unlink
    unlinkat(4, "3", 0) = 0
    unlinkat(4, "2", 0) = 0
    unlinkat(4, "1", 0) = 0
    unlinkat(AT_FDCWD, "test", AT_REMOVEDIR) = 0

    As for rsync, the second paragraph of the article already covers that.

    btw, the author probably knows why already, right? ...

  2. 独行猫儿 says:

    Not much use. I tried it; at best it's the difference between 6 hours and 5 hours 50 minutes, which hardly matters.

  3. zhblue says:

    It's probably because rsync uses fewer processes, so it ends up syncing the disk n fewer times.

  4. baham says:

    Nice one.

    Impressive, and novel.

  5. jet says:

    It would probably be faster to just delete the file index from the disk's partition table.

  6. Jak Wings says:

    Looking at the discussion on the original post, it seems that on ext2/3 and the like, rm doesn't delete via the unlinkat call, which is why it's slower than rsync. Deleting 30,000-odd files on my own machine also showed an obvious speed difference, except that in my case rsync was the slower one, and rm on an ext4 filesystem does in fact use unlinkat. Even so, it isn't much faster.

