so
some possibly interesting bits on the self-modification and ai topic. (this came up again while rereading my own stuff, and maybe the silent readers enjoyed it too; also, just for completeness.) :
a missing "lvl 3.5" is when a program is
able to modify codes like how a jit, or a trans-compiler works, so they
already have the link between two funds and they can "juggle with codes"
cuz they (or anything on this level) know the link and difference
between different languages/platforms, but they still cant invent, just
play with existing stuffs. also somewhere around this level, programs
could do something like a stackoverflow+copypaste game, that we play,
and actually m$ made something more-or less similar to this (deepcoder).
it is based on an "ai", and it has a lotsa codes (just think about
github in their pocket, while i think this has been made before theyve
bought it) to juggle with them, and it can make some wished algorithms
based on those that are already written... its clearly unreliable in
itself, but it can save some typing by making a working algo from a
given sketch... this way they could reach the so called "singularity",
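just to make the "compose what is already written" idea concrete, here is a minimal sketch (not the real deepcoder, every name in it is invented for the example): enumerate chains of existing snippets until one of them reproduces the given input/output examples, which is roughly the brute-force core that deepcoder-like systems speed up with a learned model.

from itertools import product

# a tiny library of "already written" snippets (all made up for this sketch)
SNIPPETS = {
    "double":   lambda xs: [x * 2 for x in xs],
    "sort":     lambda xs: sorted(xs),
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def synthesize(examples, max_len=4):
    """return the first chain of snippets whose composition satisfies every example."""
    for length in range(1, max_len + 1):
        for chain in product(SNIPPETS, repeat=length):
            def run(xs, chain=chain):
                for name in chain:
                    xs = SNIPPETS[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return chain
    return None

# the "wished algorithm": keep the non-negatives, double them, largest first
examples = [([3, -1, 2], [6, 4]), ([0, 5, -7, 1], [10, 2, 0])]
print(synthesize(examples))   # prints a working chain, e.g. ('double', 'sort', 'reverse', 'drop_neg')

the real systems put a neural guide on top of exactly this kind of search, which is why they scale a bit further than this toy does.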
with this kind of approach they could reach the so-called "singularity",
and it could keep refining itself even after its own output starts
littering back into it, but i think the neural network approach carries so
much garbage that it won't really be the right path: it will "smoke out",
or the people behind it just won't be able to carry it that far on this basis.
it is just totally unintelligent recognition, and it can't be reshaped,
only restarted on a different basis. what i mean is that this kind of
recognition is not about organizing things; it is a totally unnatural way
of analyzing them, because it should be about the way of thinking and not
about the way the neurons are wired, and it is not about the exact
parameters humans use to describe a thing, only about some kind of
parameters. also, a very important point: the things fed into a neural
network become indistinguishable(!!), and therefore they can't be reordered
or refined. i can't find a particular dog inside a dog-recognizer nn, i
can't exclude it, group the dogs, regroup them by a different parameter,
or reuse parts of the network anywhere else.
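to make the "indistinguishable once fed in" point concrete, a toy sketch (made-up data and names): two named "dogs" go into a tiny perceptron-style model, and what comes out is a single weight vector in which neither dog is addressable any more.

# two training examples with explicit, human-readable parameters
rex  = {"name": "Rex",  "features": [7.0, 30.0], "label": 1.0}   # [ear_length, tail_length]
luna = {"name": "Luna", "features": [9.0, 28.0], "label": 1.0}

weights, bias, lr = [0.0, 0.0], 0.0, 0.0005

for _ in range(200):                                   # simple delta-rule (LMS) updates
    for example in (rex, luna):
        pred = sum(w * x for w, x in zip(weights, example["features"])) + bias
        err = example["label"] - pred
        weights = [w + lr * err * x for w, x in zip(weights, example["features"])]
        bias += lr * err

print(weights, bias)
# before training i could still say "drop Rex" or "group by tail_length";
# after training the only artifact is this weight vector, where Rex's and Luna's
# contributions are fused together: removing one dog means retraining, not editing.

that is the whole complaint in two loops: the structure of the input is gone, only a blend remains.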
why i laugh about "ai", cuz these are required to make an actual ai, cuz
this can be "crystallized" (read that as things can be organized and
distinguished). a cat-recognizer cant really utilize the knowledge of
the dog-recognizer, cuz its not about tails claws eyeballs whatever, but
all of them as a mixture, where a lucky path can say that this is most
likely a dog (or a cat, i prefer cats, but maybe 51% cat is sufficient
to my needs). also, singularity here means "we dont even have as much
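the 51% business as plain arithmetic (numbers made up for the example): the recognizer only reports which label is most likely, and a near coin flip still comes out looking like a decision.

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["cat", "dog"]
scores = [2.04, 2.00]                    # raw network outputs, nearly a tie
probs = softmax(scores)                  # roughly [0.51, 0.49]

verdict = labels[probs.index(max(probs))]
print(verdict, [round(p, 2) for p in probs])   # cat [0.51, 0.49]; a coin flip reported as an answer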
clue as we initially had". have fun while u try to rely on it, and dont
forget about weaponized ai's with 51% enemies, nor things we would
existentially depend on, and dont forget about china... (poor chinese
folks :(
https://mobile.abc.net.au/news/2018-09-18/china-social-credit-a-model-citizen-in-a-digital-dictatorship/10200278 and also:
https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror) ) so yep, i would go with brain-killer complicated graphs with
so yep, i would go with brain-killer complicated graphs plus serialization
and other related toys, as there is at least a chance to handle that, while
i hope that a real ai will be born in a rebel's garage and not in a
military base deep inside a mountain, or at m$ or the like, whose whole
business is milking the blood of people who live below the existential
limit...
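and just to show what i mean by graphs plus serialization, a minimal sketch (every name invented for the example): knowledge kept as named nodes with explicit parameters and links can be found, excluded, regrouped, reused elsewhere, and written to disk; everything the weight blob refuses to do.

import json

graph = {
    "dog":  {"is_a": "animal", "has": ["tail", "claws", "eyeballs"]},
    "cat":  {"is_a": "animal", "has": ["tail", "claws", "eyeballs"]},
    "rex":  {"is_a": "dog", "tail_length": 30.0},
    "luna": {"is_a": "cat", "tail_length": 28.0},
}

# the operations a weight blob cannot offer:
long_tailed = [k for k, v in graph.items() if v.get("tail_length", 0) > 29]   # group by a parameter
without_rex = {k: v for k, v in graph.items() if k != "rex"}                  # exclude one individual
cat_knowledge = graph["cat"]                                                  # reuse a part elsewhere

# ...and the whole thing serializes and comes back losslessly
blob = json.dumps(graph)
assert json.loads(blob) == graph
print(long_tailed, list(without_rex), cat_knowledge)

a real version would obviously be the "brain-killer complicated" variant of this, but at least it stays inspectable at any size.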