How to speed up file parsing and I/O in Python when processing large (20GB+) files

Problem description

Here is the basic example code:

def process(line):
    data = line.split("-|-")
    try:
        data1,data2 = data[2],data[3]
        finalline = f"{data1} some text here {data2}\n"
        with open("parsed.txt",'a',encoding="utf-8") as wf:
            wf.write(finalline)
    except:
        pass

with open("file.txt","r",encoding="utf-8") as f:
    for line in f:
        process(line)

This works fine. But is there any way to make it run faster, using multiple threads or cores?

Or to somehow reach the SSD's read/write speed while processing? Any help would be greatly appreciated!

Solution

Function calls incur significant overhead in Python. Instead of calling a function for every line of the file, inline the work. Also, don't repeatedly open the same output file; open it once and keep it open.

with open("file.txt","r",encoding="utf-8") as f,\
     open("parsed.txt","a",encoding="utf-8") as outh:
    for line in f:
        data = line.split("-|-")
        try:
            print(f"{data[2]} some text here {data[3]}",file=outh)
        except Exception:
            pass
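
As for the multi-threading/multi-core part of the question: threads won't speed up this loop, because the per-line string work holds the GIL, but multiple processes can. Below is a minimal sketch using multiprocessing.Pool, assuming the per-line parsing is expensive enough to outweigh the cost of shipping lines between processes; the worker name transform is illustrative, not part of the original answer.

from multiprocessing import Pool

def transform(line):
    # Return the formatted line, or None for lines that don't parse
    data = line.split("-|-")
    if len(data) > 3:
        return f"{data[2]} some text here {data[3]}\n"
    return None

if __name__ == "__main__":
    with open("file.txt", "r", encoding="utf-8") as f, \
         open("parsed.txt", "a", encoding="utf-8") as outh, \
         Pool() as pool:
        # chunksize batches many lines per IPC round-trip; tune it empirically
        for result in pool.imap(transform, f, chunksize=10_000):
            if result is not None:
                outh.write(result)

For work this light, the pickling overhead often makes the pooled version slower than the single-process loop above, so measure both before committing.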
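
On reaching the SSD's read/write speed: one additional lever, my suggestion rather than part of the original answer, is to pass a larger buffering value to open(), which reduces the number of system calls per megabyte moved.

BUF = 1 << 20  # 1 MiB buffer size; tune for your hardware

with open("file.txt", "r", encoding="utf-8", buffering=BUF) as f, \
     open("parsed.txt", "a", encoding="utf-8", buffering=BUF) as outh:
    for line in f:
        data = line.split("-|-")
        try:
            print(f"{data[2]} some text here {data[3]}", file=outh)
        except Exception:
            pass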