- Subject: Re: p2p parrarel/ cluster computing.
- From: Sean Conner <sean@...>
- Date: Sat, 13 May 2017 18:44:08 -0400
It was thus said that the Great Sam Bavi once stated:
>
> The second option would be to just lend resources, something like GPU,
> which is a Gnutella client that shares CPU resources. That would eliminate
> the need to have extra licenses and installed plugins and much of the
> hassle, but I'm not sure if that is possible when it comes to 3D rendering
> or any render for that matter; right now it's just a thought. I use the
> Arnold renderer for Maya and C4D, and Arnold has its own special way of
> doing node calculations, so the software is definitely needed to render a
> scene, but is it possible to just get help from another PC to do the
> calculations and not the main render itself?
It doesn't work that way. At the very bottom you have a "unit of
execution" (not a standard term at all, but I find it a very useful
concept). I'll use the x86 to explain the "unit of execution" as it's the
most popular CPU in use and it gives a concrete example of what I mean. A
"unit of execution" is the CPU state required to run a program. That
includes a unique IP (instruction pointer---the actual name depends upon the
mode: IP is 16-bit, EIP is 32-bit, RIP is 64-bit, but they're the same
register), an execution stack (recording subroutine return addresses)
pointed to by SP, and the rest of the registers. While two different
"units of execution" might be running the same code (their IP registers
match), they don't necessarily have the same state (the stack will be
different, and quite possibly the registers as well). They are simply
distinct "CPU states".
A CPU executes instructions. It reads the next instruction pointed to by
the IP, decodes it, and executes it. It really doesn't matter whether it's
a physical CPU (like the x86) or a VM (like the Lua VM); at a high level,
they both pull in the next instruction, decode it, and execute it.
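To make that concrete, here is a toy sketch in Lua of a "unit of
execution" and its fetch/decode/execute loop. This is purely illustrative:
the tiny instruction set is made up for the example, and it is not how the
x86 or the real Lua VM is implemented.

  -- A toy "unit of execution": nothing more than the state needed to run
  -- a (made up) instruction set.
  local function new_unit(program)
    return {
      ip      = 1,          -- instruction pointer
      stack   = {},         -- execution stack (subroutine return addresses)
      regs    = { a = 0 },  -- "the rest of the registers"
      program = program,    -- the code this unit runs
    }
  end

  -- The fetch/decode/execute loop: read the instruction at ip, act on
  -- it, advance ip.  A physical CPU and a VM both boil down to this.
  local function step(u)
    local insn = u.program[u.ip]
    if insn == nil then return false end
    if insn.op == "add" then
      u.regs.a = u.regs.a + insn.n
      u.ip = u.ip + 1
    elseif insn.op == "call" then
      table.insert(u.stack, u.ip + 1)  -- record the return address
      u.ip = insn.target
    elseif insn.op == "ret" then
      u.ip = table.remove(u.stack)
    end
    return true
  end

  -- Two units over the *same* program: same code, different state.
  local prog = { {op="add", n=1}, {op="add", n=2} }
  local u1, u2 = new_unit(prog), new_unit(prog)
  step(u1); step(u1)   -- u1.regs.a == 3, u1.ip == 3
  step(u2)             -- u2.regs.a == 1, u2.ip == 2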
A "unit of execution" can then be used to construct coroutines, threads,
processes and other, more well known concepts of "units of execution". So,
"unit of execution" is a CPU state.
Now, a brief overview of concurrency and parallelism. A good answer to
this is:
http://stackoverflow.com/questions/1897993/what-is-the-difference-between-concurrent-programming-and-parallel-programming/3982782#3982782
And a nice diagram (good, if slightly misleading) is further down on that
page, which can be summed up as:
Concurrency: two queues leading to one coffee machine
Parallelism: two queues leading to two coffee machines
Before multiprocessing machines became mainstream, everything (at least in
a multitasking operating system) was concurrent. In fact:
multitasking: running multiple programs on a single CPU
multiprocessing: running multiple programs on multiple CPUs
You have a program that does heavy calculations (rendering, math
simulation, generating fractals, what have you). It can easily be split to
run on multiple "units of execution", but to really gain speed, each "unit
of execution" has to run on a separate CPU so they can run in parallel. On
a system with multiple CPUs this is easy, as the CPUs share the same memory
(there are exceptions, but generally this is true).
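A render, for instance, splits naturally along scanlines. The sketch below
only shows the splitting; render_rows() is a hypothetical stand-in for the
actual work. The point is that each chunk is self-contained, so each one
can become its own "unit of execution" (a coroutine, a thread, or a
separate process):

  -- Split `height` scanlines into `nworkers` independent chunks.
  local function make_chunks(height, nworkers)
    local chunks, rows = {}, math.ceil(height / nworkers)
    for first = 1, height, rows do
      chunks[#chunks + 1] = { first = first,
                              last  = math.min(first + rows - 1, height) }
    end
    return chunks
  end

  -- e.g. 4 workers on a 1080-row image -> 1-270, 271-540, 541-810, 811-1080
  for _, c in ipairs(make_chunks(1080, 4)) do
    -- hand c off to a worker; on one machine the workers share memory,
    -- on several machines the chunk has to travel over the network
    -- render_rows(c.first, c.last)   -- hypothetical
  end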
But moving a "unit of execution" to a CPU on another system is not so
straightforward. You can't just *move* the CPU from system X to your
system; you have to *move* the program from your system to system X, or
system X has to have the program already on it. There's no way around it,
and both come down to the same thing:
the program has to exist on all systems
> Something like how
> OpenMP works and the only Lua framework that I know of that comes to mind
> here is Torch.
OpenMP makes it easy to write software for a multiprocessor system; it
doesn't help in writing software to run across distinct (separate)
computers.
> http://lua-users.org/wiki/MultiTasking and "LUA-ZMQ" looks interesting and
> there's also lua-parallel which is a framework for Torch and also uses
> ZeroMQ like lua-zmq, anybody had any experience with these two?
ZeroMQ is a networking abstraction that makes it easy to write network
code that can do one-to-one, one-to-many, many-to-one, and many-to-many
distribution of messages; it has nothing to do with distribution of work per
se. You *can* use it to distribute work and programs, but you use ZeroMQ to
distribute the messages that contain the work or program (and you certainly
can do the same without ZeroMQ and use the underlying OS networking layer;
like I said, ZeroMQ is just an abstraction over that).
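To show what "just the messages" means, here is the sending side of a PUSH
socket, written in the style of the lua-zmq binding. The exact module name
and calls depend on which Lua binding you install (lua-zmq and lzmq
differ), so treat this as a sketch rather than copy-and-paste material:

  local zmq = require "zmq"

  local ctx  = zmq.init(1)
  local push = ctx:socket(zmq.PUSH)    -- fans messages out to whoever pulls
  push:bind("tcp://*:5555")

  -- the "work" is only whatever bytes you choose to put in the message,
  -- e.g. a chunk description like the one above, serialized however you like
  push:send("render rows 1-270 of the scene")

  push:close()
  ctx:term()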
> and also I think I would add some sort of busy check so that a
> service that is already doing heavy work locally doesn't get bogged down,
The easy way to solve that is to post the "job" at a central location, and
each client then pulls a "job" to work on. When it finishes with the job,
it posts the results and then grabs the next "job" to work on. This is a
"pull" architecture. What you are describing is a "push" architecture,
which is not a good fit for this particular problem (it has uses, but not
for this).
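A worker in that kind of "pull" setup looks roughly like this (again in
lua-zmq style; the addresses and do_render() are made up for the example).
Because the worker only asks for a job when it's idle, a node that is
already busy simply doesn't pull anything new, and the busy check takes
care of itself:

  local zmq = require "zmq"

  local ctx  = zmq.init(1)
  local pull = ctx:socket(zmq.PULL)
  pull:connect("tcp://jobserver:5555")   -- central place jobs are posted

  local results = ctx:socket(zmq.PUSH)
  results:connect("tcp://jobserver:5556")

  while true do
    local job    = pull:recv()           -- blocks until a job is available
    local output = do_render(job)        -- hypothetical: do the actual work
    results:send(output)                 -- post the result, grab the next job
  end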
-spc