It represents wave-particle duality, which is a great metaphor for my gender: I am both a probabilistic wave and a discrete particle. I am constantly collapsing into a particle when I'm observed (you can know my position or my momentum, but not both). When I'm not being observed I am a probabilistic wave of possibilities.
The two particles — one in the middle and another up and left from the center — represent why I continue to do anything, my reasons for existing:
The middle dot stands for understanding how the universe and everything in it fundamentally works — an aspiration that the fractal theory of everything helps me with.
The other stands for mutual unconditional love — especially the love I have toward my partner who is the first person I felt mutual unconditional love with.
I look forward to adding many more dots over my lifetime whenever I find a new achievable goal to strive toward. I hope eventually I will find both reversing the growth of societal inequality and curbing global warming achievable.
At present both of those issues are far out of reach for me because of the immense inertia behind them, and I don't want to spend my time fighting for one small shove against those boulders rolling down a mountain, a shove that might crush me in the process. I would rather figure out how to meaningfully change their paths for the better. Maybe that involves exploding the boulders. Maybe that involves flattening the hill. Maybe that involves adding a ramp to the hill so the boulders fly away, never to be seen again. Maybe that involves learning to be a Jedi so I can use the Force on the boulders. I'm not sure what the solution will be, but I know I'm not at the point where I can have a meaningful impact on either of them. So instead, for my own well-being, I'd rather focus on issues where I can make a significant impact today, in the hopes that eventually I'll have enough wisdom and power to make a meaningful difference on those two big issues.
I love a good fractal :D
Morphing between the different levels of a recursive pattern.
Uhh I’m gonna use this as an excuse to infodump about graphs and fractals and how those two combined help me reason about everything from artistic composition to neural networks to psychology to neuroscience to quantum physics to distributed systems to astrophysics to etc etc etc. I’m calling this theory the:
Fractal theory of Everything
And I’ll probably post a lot of #looooonnnggggg posts about it under the tag “#fractal theory of everything” if you wanna adjust your filters accordingly. This is just the intro post to explain the theory. Actually using this theory to explain everything will be the posts which follow this one.
TLDR located here: https://docs.google.com/document/d/1DF9lYoQXZbebiIgB071Pv_lHiISCiocuSq-pbubD_xg/edit
Imagine everything as a bunch of nodes (cities, people, classrooms, photons, tumblr users, etc) connected by edges (roads, friendships, paths, quantum strings, followers, etc). This representation is called a graph and you might be familiar with it (looking at who follows me) from math and/or computer science.
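To make that concrete, here's a tiny sketch (Python, with made-up node names) of a graph stored as an adjacency list; the same shape works whether the nodes are cities, people, or blogs:

```python
# A toy graph: nodes are cities, edges are roads, stored as an
# adjacency list. Swap in people/friendships or users/followers
# and the structure stays exactly the same.
graph = {
    "city_a": ["city_b", "city_c"],
    "city_b": ["city_a"],
    "city_c": ["city_a", "city_d"],
    "city_d": ["city_c"],
}

# How connected is each node? (Just counting the edges that touch it.)
for node, neighbors in graph.items():
    print(node, "connects to", len(neighbors), "other nodes")
```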
This graph representation is useful because sometimes a large enough graph is self-similar at multiple scales, meaning when you look at ~100 nodes it looks about the same as when you look at 100k nodes. See percolation theory, which (among other things) helps explain why magnets lose their magnetism when they get too hot, above their Curie temperature.
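If you want to poke at percolation yourself, here's a minimal sketch of site percolation on a square grid (plain Python; the grid size, probabilities, and number of runs are arbitrary illustration choices):

```python
import random

def spanning_cluster_exists(n, p, seed=0):
    """Site percolation on an n x n grid: each cell is 'open' with
    probability p. Returns True if open cells connect the top row
    to the bottom row (a spanning cluster)."""
    rng = random.Random(seed)
    open_cell = [[rng.random() < p for _ in range(n)] for _ in range(n)]

    # Flood fill outward from every open cell in the top row.
    frontier = [(0, c) for c in range(n) if open_cell[0][c]]
    seen = set(frontier)
    while frontier:
        row, col = frontier.pop()
        if row == n - 1:
            return True  # reached the bottom row
        for r, c in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if 0 <= r < n and 0 <= c < n and open_cell[r][c] and (r, c) not in seen:
                seen.add((r, c))
                frontier.append((r, c))
    return False

# Near the critical probability (~0.593 for a square lattice) the clusters
# start to look self-similar: zoomed in or zoomed out, same kind of mess.
for p in (0.4, 0.593, 0.8):
    hits = sum(spanning_cluster_exists(60, p, seed=s) for s in range(20))
    print(f"p={p}: spanning cluster in {hits}/20 runs")
```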
I argue that any graph with meaningful data (meaning not all noise and not all uniform) is an approximation of an n-dimensional fractal, and that as the graph becomes a better and better approximation of a fractal, its data becomes more and more meaningful.
If you assume that energy is finite (the universe only has so much to spend pushing particles around), then you start to wonder how it can possibly exist with its near-endless complexity (see the fractal graphs from above, with all their complexity).
My conclusion from energy being finite is that adding a new particle to the universe must cost at most a linear amount of extra work; otherwise adding more particles would make the universe burn through energy far too fast. Think about a universe where every particle collides to some small degree with every other particle. If you add one more particle to a 2-particle universe then you've added 2 more collision checks, which is not so bad, right? However, if you add one more particle to a 100-particle universe then you've added 100 more collision checks; the total number of checks grows with the square of the particle count. It becomes obvious that if energy is finite then this is a massive waste of energy for very little increase in the scale of our universe. Since intelligent life that can reason about stuff like this can only exist in a sufficiently large universe, there's a bit of survivorship bias here: assuming energy is limited, we must live in a universe that scales linearly at worst in order to be able to reason about any of this.
Since the universe must scale linearly, each particle can only “talk to” the top X most important particles around it for each “update frame” of the universe. (Time is weird though, because the universe kind of slows down fast-moving objects, and my theory is that fast-moving objects get more frames compared to slower-moving objects; but this is even more speculative and hazy than the rest of this infodump. There are also some weird time shenanigans with looking back through time - see the double-slit experiment.)
Having each particle only talk to their X most important neighbors means that the universe can scale linearly since every particle doesn’t have to talk to every other particle anymore (yay!).
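Here's that scaling argument as a back-of-the-envelope script; the neighbor count k = 10 is a made-up stand-in for the “top X most important particles”:

```python
# Every particle talking to every other particle grows quadratically,
# while "only talk to your k most important neighbors" grows linearly.
def all_pairs_interactions(n):
    return n * (n - 1) // 2   # every unordered pair of particles

def k_neighbor_interactions(n, k=10):
    return n * k              # each particle keeps only k edges

for n in (100, 1_000, 10_000):
    print(f"{n:>6} particles: all-pairs={all_pairs_interactions(n):>11,}"
          f"  k-neighbors={k_neighbor_interactions(n):>9,}")
# At 100 particles the difference is mild; at 10,000 it's
# ~50 million pairwise checks vs ~100 thousand neighbor checks.
```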
However, limiting the number of edges each particle has also has ramifications for quantum behavior (the behavior of particles at the quantum level, i.e. dealing with 1 to 100 particles rather than billions).
Basically, when a particle only has a few other particles near it in spacetime, it's as though that particle has a weak GPS signal: it ends up moving in ways it shouldn't because it only has a few friends to orient itself with. I theorize that the double-slit behavior seen when a laser beam enters two slits is due to each particle having to guess where and when it is in spacetime based on the very few particles around it (see the math of multilateration).
Therefore, since the particle can't orient itself, it has to guess where it is using probability and some sort of pseudorandom process. This creates the wave pattern seen in the double-slit experiment.
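Here's a tiny sketch of the multilateration point (toy coordinates, nothing physical): with only two reference points, two different positions explain the measured distances equally well, so there's no way to tell which one you're actually at; a third, off-axis reference breaks the tie.

```python
import math

def residual(point, anchors, distances):
    """How badly a candidate position disagrees with the measured
    distances to each anchor (0 means perfectly consistent)."""
    return sum((math.dist(point, a) - d) ** 2 for a, d in zip(anchors, distances))

true_pos = (3.0, 4.0)   # where the "particle" actually is
mirror = (3.0, -4.0)    # its reflection across the line the anchors sit on

# Two anchors on the x-axis: both candidates fit the measurements exactly.
anchors = [(0.0, 0.0), (6.0, 0.0)]
dists = [math.dist(true_pos, a) for a in anchors]
print(residual(true_pos, anchors, dists))  # 0.0 -> consistent
print(residual(mirror, anchors, dists))    # 0.0 -> also consistent: ambiguous!

# Add a third anchor off the axis and the ambiguity disappears.
anchors.append((0.0, 5.0))
dists = [math.dist(true_pos, a) for a in anchors]
print(residual(true_pos, anchors, dists))  # 0.0 -> still consistent
print(residual(mirror, anchors, dists))    # > 0 -> ruled out
```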
there's nothing that melts me more than just hearing someone be passionate about something. And if someone has hurt you in the past and makes you reluctant to fuckin completely go off on the expanded canon of the X-Files or whatever, I'm gonna hit them in the head with a big mallet. You're adorable, show it. Please
All fancy-schmancy generative AI models know how to do is parrot what they've been exposed to.
A parrot can shout words that kind of make sense given context but a parrot doesn’t really understand the gravity of what it’s saying. All the parrot knows is that when it says something in response to certain phrases it usually gets rewarded with attention/food.
What a parrot says is sometimes kinda sorta correct, and sometimes it fits the conversation of the humans around it eerily well, but the parrot doesn't always read the room. It might curse around a child, for instance, if it usually curses around its adult owners without facing any punishment. Since the parrot doesn't understand that we avoid cursing around young people because of societal norms, it might handle the situation of being around a child incorrectly.
Similarly, AI lacks understanding of what it's saying/creating. All it knows is that when it arranges pixels or words in a certain way after being given some input, it usually gets rewarded/gets to survive, and so it keeps getting the sequence of words/pixels following a prompt correct enough to imitate people convincingly (or that poorly performing version of itself gets replaced by another version which is more convincing).
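A toy illustration of “parroting as statistics”: a bigram model trained on a made-up three-sentence corpus. This is nowhere near how real generative models work, but it shows how text can sound locally plausible with zero understanding behind it.

```python
import random
from collections import defaultdict

# The only "experience" this parrot has ever had.
corpus = (
    "polly wants a cracker . polly wants attention . "
    "the parrot wants a cracker ."
).split()

# Remember which words have followed which (a bigram model).
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def parrot(start, length=8):
    """Generate text by repeatedly picking a word that has followed the
    current word in the corpus. No meaning, no context, just echoes."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("polly"))  # e.g. "polly wants a cracker . the parrot wants attention"
```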
I argue that a key aspect of consciousness is understanding the gravity and context of what you are saying — having a reason for saying or doing what you're doing beyond “I get rewarded when I say/do this.” Yes, AI can parrot an explanation of its thought process (eli5 prompting, etc.) but it's just mimicking how people explain their thought processes. It's surface-level remixing of human expression without understanding the deeper context of what it's doing.
I do have some untested ideas as to why its understanding is only surface level, but this is pure hypothesis on my part. In essence, I believe humans are really good at extrapolating across scales of knowledge: we can understand some topics in great depth while understanding others only at a surface level, and go anywhere in between those extremes. I hypothesize we are good at that because our brains have a fractal structure that lets us hold different levels of understanding at once, looking at some things at a very microscopic level while still considering the bigger picture and fitting that microscopic knowledge into our larger, zoomed-out understanding.
I know that neural networks aren't fractal (self-similar across various scales) and can't be, by design of how they learn and how data is passed through them. I hypothesize that this makes them only understand the scale at which they were trained. For the LLMs/GANs of today, that usually means a high-level overview of a lot of different fields without really knowing the finer-grained intricacies all that well (see how LLMs make up believable-sounding but completely fabricated quotes in long writing, or how GANs mess up hands and text once you zoom in a little bit).
There is definitely more research I want to do into understanding AI and, more generally, how networks which approximate fractals relate to intelligence and other stuff like quantum physics, sociology, astrophysics, psychology, neuroscience, how math breaks sometimes, etc.
That fractal stuff aside, this mental model of generative AI as glorified parrots has helped me understand how AI can seem correct at first glance/zoomed out yet completely fumble the details. My hope is that this can help others understand AI's limits better and therefore avoid putting so much trust in it that it gets the opportunity to mess up serious stuff.
Think of the parrot cursing around children without understanding what it’s doing or why it’s wrong to say those words around that particular audience.
In conclusion, I want us to awkwardly and endearingly laugh at the AIs which mimic the squawks of humans, rather than take what they say as gospel or as truth.