lua-users home
lua-l archive



Asko Kauppi wrote:
[...]
>> parallel do
>>     -- block, every chunk is dispatched in parallel, synchronization
>> at the end.
>> end
> 
> What exactly should this do?   If all the chunks get the same values,
> they'd just do the same thing.

Ah, but each chunk has different code.

parallel do
  do thread1() end
  do thread2() end
  do thread3() end
end

...spawns three threads (if I've understood it correctly).
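For comparison, here's a rough sketch of the same fork/join shape using the Lua Lanes module as it exists today (this assumes Lanes is installed; lanes.gen and lane-handle indexing are from its documented API, but treat the details, such as the configure() call, as version-dependent):

```lua
local lanes = require "lanes".configure()

-- lanes.gen compiles a function into a lane generator; calling the
-- generator starts a new OS thread running that function.
local gen = lanes.gen("*", function(name)
  return "done: " .. name
end)

local h1, h2, h3 = gen("one"), gen("two"), gen("three")

-- Indexing a lane handle joins it and yields its first result, so this
-- line is the "synchronization at the end" of the proposed block.
print(h1[1], h2[1], h3[1])
```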

These features are all stolen wholesale from Occam, which is fair
enough, as Occam does this stuff well:

PAR
  thread1()
  thread2()
  thread3()

...executes the three functions simultaneously, and:

SEQ
  thread1()
  thread2()
  thread3()

...executes them sequentially. These are the two fundamental block
statements in Occam, equivalent to Lua's do...end; anywhere you can use
a statement, you can use one of these.

PAR i = 0 FOR 10
  func(i)

...creates ten threads, each of which calls func() with a different
parameter. (NB: I learnt Occam on an original Transputer back when
transputers were still slightly current, so my Occam skills are
*really* old and I may be misremembering the syntax.)
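The replicated-parallel construct maps naturally onto spawning lanes in a loop and joining them afterwards. A sketch using the Lua Lanes API (assuming the lanes module is installed; this is an illustration, not the proposed syntax):

```lua
local lanes = require "lanes".configure()
local gen = lanes.gen("*", function(i) return i * i end)

local handles = {}
for i = 0, 9 do                 -- ten lanes, one per replicator value
  handles[#handles + 1] = gen(i)
end
for _, h in ipairs(handles) do
  print(h[1])                   -- joins each lane and prints its result
end
```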

[...]
>> function func(arg1, arg2, arg3)
>>     a <= arg1    -- block until receive from the channel
>>     b <= arg2    -- block until receive from the channel
>>     c <= arg3    -- block until receive from the channel
>>
>>     return a+b+c
>> end
> 
> In Lanes:
> 
> -- h1, h2, h3 are lane handles (cannot say 'thread' since that would
> mean coroutine)
> -- could also be communication FIFOs (then 'h1:receive()')

In Occam:

pipe ! message
...sends a message to a channel;

pipe ? message
...receives a message from a channel. Occam channels are therefore
equivalent to your FIFOs. IIRC it didn't have support for futures, which
is a shame, as they're handy.
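For what it's worth, Lanes' linda objects have exactly this send/receive shape (this assumes a Lanes version that provides lanes.linda; I'm sketching from its documented API, so treat the details as approximate):

```lua
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:send("pipe", "message")           -- like:  pipe ! message
local key, msg = linda:receive("pipe")  -- like:  pipe ? message
print(key, msg)
```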

[...]
>> c = thread func(arg1, arg2, arg3)
>>
>> c is a default communication channel when you use the "thread"
>> keyword/dispatch model, something like stdin/stdout.
>>
>> arg1 <= 1    -- send only blocks if the channel is full (reader didn't
>> read the last data)
>> arg2 <= 5
>> arg3 <= 3
>>
>> r <= c        -- r equals 9, that will block until the function returns
> 
> ???

I think the OP missed a 'return 9' in there, and that the 'thread'
keyword creates a pipe that's used to deliver the return parameter of
the thread function. In which case, it's basically doing a future.

function retOne() return 1 end

i <= thread retOne() -- spawns retOne, blocks until it completes,
                        then pulls 1 from the pipe
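In Lanes terms, that future shape falls out of lane handles for free, since indexing a handle blocks until the lane's function returns (again assuming the lanes module is installed; a sketch, not the proposed syntax):

```lua
local lanes = require "lanes".configure()

local gen = lanes.gen("*", function() return 1 end)  -- retOne as a lane
local h = gen()    -- spawn; the handle h acts as the future

local i = h[1]     -- blocks until the lane returns, then yields 1
print(i)
```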

[...]
>> lock a                -- a is a variable, same behavior of chunk lock
>> unlock a            -- a is a variable
[...]
>     a= lanes.fifo()
>     f(a)    --launches sublane, gives access to the FIFO
[...]
> I think this is actually more readable. Also, you will want to have
> timeout parameter for 'receive', and the ability to wait for multiple
> objects.

Yup. If you're using CSP message passing, then you have better ways of
coordinating than locks. Besides, on a pure CSP system with no shared
memory --- such as Lanes --- you don't *need* locks.

Also, the 'lock a' model, while useful, requires that every object in
the system be able to have a lock attached to it, which can complicate
the VM no end. (As I know from an earlier life working for a JVM
manufacturer.)

[...]
>     K= lanes.keeper()
>     f(K)    -- launches sublane, gives access to the keeper table

Is this actually true shared data between the two lanes? That is, either
lane can read or write to it and the other one will see it? If so, cool
--- but how did you make it work?

...

One thing nobody's mentioned is a third IPC mechanism which doesn't get
a lot of press these days, called Linda. I think this is best described
as a pipe which is a table:

t = tuple() --- access Linda tuplespace (traditionally, Linda only has
                one)

Thread 1:
  t.foo = 1 -- writes value to tuplespace
  t.foo = 2 -- blocks until the slot is vacant, then writes

Thread 2:
  print(t.foo) -- blocks until the slot is full, and consumes it,
                  returning the value
  print(t.foo) -- blocks again

There's also a primitive for polling a value without consuming it.

I think this provides a rather richer IPC system than a simple message
pipe, allowing you to use the same tuplespace for implementing a variety
of different protocols simultaneously; also, the table-like semantics
looks to me like it would fit Lua very nicely.
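As a toy illustration of those blocking-slot semantics in stock Lua (5.2 or later, since it yields across metamethods), here's a cooperative single-threaded mock-up; the names tuple and run are invented for this sketch, and a real Linda would of course be genuinely concurrent:

```lua
-- Cooperative mock-up of Linda-style blocking slots. Not parallel:
-- "threads" are coroutines and "blocking" means yielding to a scheduler.
local space = {}

local function put(_, k, v)
  while space[k] ~= nil do coroutine.yield() end  -- wait for slot to be vacant
  space[k] = v
end

local function take(_, k)
  while space[k] == nil do coroutine.yield() end  -- wait for slot to be full
  local v = space[k]
  space[k] = nil                                  -- consume the value
  return v
end

local function tuple()
  return setmetatable({}, { __newindex = put, __index = take })
end

local function run(...)  -- naive round-robin scheduler
  local cos = {}
  for _, f in ipairs{ ... } do cos[#cos + 1] = coroutine.create(f) end
  local live = true
  while live do
    live = false
    for _, co in ipairs(cos) do
      if coroutine.status(co) ~= "dead" then
        live = true
        assert(coroutine.resume(co))
      end
    end
  end
end

local t = tuple()
run(
  function() t.foo = 1; t.foo = 2 end,        -- second write blocks
  function() print(t.foo); print(t.foo) end)  -- consumes 1, then 2
```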

-- 
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────
│ "I have always wished for my computer to be as easy to use as my
│ telephone; my wish has come true because I can no longer figure out
│ how to use my telephone." --- Bjarne Stroustrup
