The simplest solution, using a pipe, is a BAD idea: a pipe has its own buffer, and what's worse, the size of this buffer is fixed and cannot easily be changed with some smart fcntl call. Let's run a simple test:
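A test along these lines can be sketched with `multiprocessing.Pipe`. The `probe()` helper and the 2-second timeout below are my own assumptions for illustration, not part of the original script; the idea is the same: a send that fits in the pipe buffer returns immediately, a bigger one blocks until somebody reads.

```python
from multiprocessing import Process, Pipe

def sender(conn, nbytes):
    # child process: try to push nbytes through the pipe in one send()
    conn.send(b"." * nbytes)

def probe(nbytes, timeout=2.0):
    """Return True if a send of nbytes blocks, i.e. exceeds the pipe buffer."""
    pout, pin = Pipe()
    p = Process(target=sender, args=(pin, nbytes))
    p.start()
    p.join(timeout)
    blocked = p.is_alive()   # still alive => stuck inside send()
    if blocked:
        pout.recv()          # drain the pipe so the child can finish
        p.join()
    return blocked

if __name__ == "__main__":
    print("1 kB send blocks: %s" % probe(1024))          # fits in the ~64 kB buffer
    print("1 MB send blocks: %s" % probe(1024 * 1024))   # exceeds it, so send() stalls
```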
On my laptop I'm not able to send more than 64 kB at a time; worse, the script hangs completely:
[Tue, 19 Feb 19:25 E1][cibak@localhost:~]> python pipeBuf.py
send 0 B
read 0 B
send 1024 B
read 1024 B
send 2048 B
read 2048 B
[...cut...]
read 59392 B
send 60416 B
read 60416 B
send 61440 B
read 61440 B
send 62464 B
read 62464 B
send 63488 B
read 63488 B
send 64512 B
read 64512 B
^CTraceback (most recent call last):
  File "pipeBad.py", line 8, in <module>
    pin.send("."*i*1024)
KeyboardInterrupt
Hmmm... So how can you send more than that? Let's try this way:
from multiprocessing import Process, Manager

def sender( S ):
    print "sending 256 MB"
    S["foo"] = "."*256*1024*1024

if __name__ == '__main__':
    m = Manager()
    S = m.dict()
    p1 = Process( target=sender, args=( S, ) )
    p1.start()
    p1.join()
    print "'read' it all", len(S["foo"])
Use a simple manager, man! More info here.
It is also possible to use shared memory objects (read this), but those are better suited for storing smaller amounts of data.
Disclaimer: Before use, read the contents of the package insert or consult your physician or pharmacist, because any drug used improperly threatens your life or health.