How many network ports does Linux allow Python to use?

So I have been trying to multithread some internet connections in Python. I have been using the multiprocessing module so I can get around the Global Interpreter Lock, but it seems as though the system only gives Python one open connection port, or at least it only allows one connection to happen at a time. Here is an example of what I am talking about.

*Note that this is running on a Linux server.

from multiprocessing import Process,Queue
import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000,10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the processes (50)
for thread in range(50):
    task = Process(target=check,args=(queue,))
    task.start()

So if you run this, you will notice that it starts 50 instances of the function but only runs one at a time. You might think the Global Interpreter Lock is doing this, but it isn't. Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously.
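As a rough sketch of that test, a CPU-bound stand-in for check might look like the following; the busy_check name and the loop bound are made up for illustration, and it keeps the same queue interface as the script above.

# Hypothetical CPU-bound worker: same queue interface as check(),
# but pure computation instead of a network request, so a worker
# is always runnable rather than blocked on I/O.
def busy_check(q):
    while True:
        try:
            q.get(False)
            total = 0
            for i in xrange(1000000):  # arbitrary amount of math work
                total += i * i
        except:
            if q.empty():
                break

Started with Process(target=busy_check,args=(queue,)), all 50 workers should show up busy at once.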

So do I have to use sockets? Or is there something I can do that will give Python access to more ports? Or is there something I'm not seeing? Let me know what you think! Thanks!

*Edit

So I wrote this script to test things better, using the requests library. It seems as though I had not really tested it this way before. (I had mostly been using urllib and urllib2.)

from multiprocessing import Process,Queue
from threading import Thread
from Queue import Queue as Q
import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if generated url is active
def check(q,t_q): # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url,timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the processes (20)
thread_list = []
for thread in range(20):
    task = Process(target=check,args=(queue,time_queue))
    task.start()
    thread_list.append(task)

# Join all the processes so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line =  "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response,total_time)
print line

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Q()

# Main function for checking to see if generated url is active
def check(q,t_q): # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url,timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check,args=(queue,time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line =  "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response,total_time)
print line

# Do the same thing all over again, but this time one url at a time
# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url,timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)
main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response,total_time)
print line

As you can see, it is multithreading. Actually, most of my tests show that the threading module is actually faster than the multiprocessing module. (I don't understand why!) Here are some of my results.

Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.
Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.
Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.

This was done on my home network; the response time on my server is much faster. I think my question was answered indirectly, since I was having the problems on a much more complex script. All of the suggestions helped me optimize it very well. Thanks everyone!

Answer

it starts 50 instances on the function but only runs one at a time

You are misreading the results of htop. Only a few (if any) of the python copies will be runnable at any particular instant. Most of them will be blocked, waiting for network I/O.

The processes are, in fact, running in parallel.

Try changing the function to a mathematical function instead of a network request and you will see that all fifty threads run simultaneously.

Changing the task to a mathematical function merely illustrates the difference between CPU-bound (e.g. math) and IO-bound (e.g. urlopen) processes. The former is always runnable; the latter is rarely runnable.

it only prints one at a time. If it was actually running multiple processes it would print many out at once.

It only prints one at a time because you are writing lines to a terminal. Because the lines are indistinguishable, you cannot tell whether they are all being written by one thread, or each by a separate thread in turn.
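As a minimal sketch of how to tell the writers apart (the worker body is illustrative; current_process is the real multiprocessing helper), tag each printed line with the name of the process that produced it:

from multiprocessing import Process, current_process
import time

# Illustrative worker: prefixing every line with the process name
# makes output from different processes distinguishable.
def worker():
    for i in range(3):
        print '[%s] line %d' % (current_process().name, i)
        time.sleep(0.1)

if __name__ == '__main__':
    procs = [Process(target=worker) for _ in range(5)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

With the tags in place, the interleaved names make it obvious that several processes are printing concurrently.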
