
Tuesday, 19 February 2013

clogged pipe

What if you want to execute something in a Python sub-process and read back and resend all of its output data, no matter how big it is?

The simplest solution, using a pipe, is a BAD idea - a pipe has its own buffer, and what's worse, the size of this buffer is constant and cannot be changed with some smart fcntl call any more. Let's run a simple test:
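The test script was embedded as a gist that is gone from this copy, so here is a minimal sketch reconstructed from the output below (pin.send() and the 1 kB steps come from the traceback; the name pout and the loop bound are my assumptions):

from multiprocessing import Pipe

if __name__ == "__main__":
    # pout is the read end, pin is the write end of a one-way pipe
    pout, pin = Pipe(duplex=False)
    for i in range(1024):
        print "send %d B" % (i * 1024)
        # blocks forever once the pickled payload no longer fits
        # into the OS pipe buffer (~64 kB on this box)
        pin.send("." * i * 1024)
        data = pout.recv()
        print "read %d B" % len(data)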



On my laptop I'm not able to send more than 64 kB at a time; moreover, the script hangs completely:

[Tue, 19 Feb 19:25 E1][cibak@localhost:~]> python pipeBuf.py
send 0 B
read 0 B
send 1024 B
read 1024 B
send 2048 B
read 2048 B
[...cut...]
read 59392 B
send 60416 B
read 60416 B
send 61440 B
read 61440 B
send 62464 B
read 62464 B
send 63488 B
read 63488 B
send 64512 B
read 64512 B
^CTraceback (most recent call last):
File "pipeBad.py", line 8, in <module>
pin.send("."*i*1024)
KeyboardInterrupt


Hmmm... So how can you send more than that? Let's try it this way:

from multiprocessing import Process, Manager

def sender(S):
    print "sending 256 MB"
    S["foo"] = "." * 256 * 1024 * 1024

if __name__ == '__main__':
    m = Manager()
    S = m.dict()
    p1 = Process(target=sender, args=(S,))
    p1.start()
    p1.join()
    print "'read' it all", len(S["foo"])


Use a simple Manager, man! More info in the multiprocessing documentation.

It is also possible to use shared memory objects (multiprocessing.Value and multiprocessing.Array), but those are rather meant for storing small amounts of data.
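For completeness, a minimal sketch of the shared-memory variant, assuming multiprocessing.Value and multiprocessing.Array; the names and sizes here are illustrative, not from the original post:

from multiprocessing import Process, Value, Array

def worker(counter, buf):
    # shared ctypes int, guarded by its built-in lock
    with counter.get_lock():
        counter.value += 1
    # shared ctypes byte array - fill it with some dummy data
    for i in range(len(buf)):
        buf[i] = i % 128

if __name__ == "__main__":
    counter = Value("i", 0)
    buf = Array("b", 4096)   # 4 kB is fine; 256 MB would be a poor fit here
    p = Process(target=worker, args=(counter, buf))
    p.start()
    p.join()
    print "counter =", counter.value, "buf size =", len(buf)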

Disclaimer: Before use, read the contents of the package insert or consult your physician or pharmacist because each drug used improperly threatens your life or health.




