Sunday, April 2, 2023

The real power of Open AI


The way I'm thinking about these open-source AI tools, they're the new human-computer interface - you plug them into anything and tell them what to do. Like a personal assistant.

Suddenly, we're empowered to use even more things we don't really understand, and at much higher stakes.

Who really understands their microwave, their zipper, or even the technology of tying a necktie, let alone the kinds of things we might like to control, like a car, a drone, or a robot?

Or monkeys sticking a blade of grass into an anthill for food, for that matter - that's beyond most of us, but it's brilliant. That's technology.

It's the power of ABRACADABRA - "As I speak, I create."

God said, "Let there be light, and God saw that the light was good..."

Carlo and Simon said, "Let there be the Large Hadron Collider with sprinkles on top," and God said, "What the fuck, Carlo and Simon?"

Combining any technology with AI tools like ChatGPT gives us the power to speak to create, the ultimate intuitive interface.
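To make that concrete, here's roughly the pattern I mean - words in, action out. A minimal sketch in Python; the ask_llm helper and the gadgets are hypothetical stand-ins for whatever model and hardware you actually wire up. The prompt is the interface.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT or any other LLM.
    Returns a canned answer here so the sketch runs without a network."""
    return '{"device": "living_room_lamp", "state": "on"}'

# Hypothetical gadgets the assistant is allowed to touch.
DEVICES = {
    "living_room_lamp": lambda state: print(f"lamp -> {state}"),
    "thermostat": lambda state: print(f"thermostat -> {state}"),
}

def speak_to_create(utterance: str) -> None:
    """Turn a spoken or typed request into a structured device command."""
    prompt = (
        "Translate the user's request into JSON with keys 'device' and 'state'. "
        f"Known devices: {list(DEVICES)}. Request: {utterance!r}"
    )
    command = json.loads(ask_llm(prompt))          # e.g. {"device": "living_room_lamp", "state": "on"}
    DEVICES[command["device"]](command["state"])   # the words become the action

speak_to_create("it's getting dark in here")       # -> lamp -> on
```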

Or, even faster, as I think, I create. As I imagine, I create. But we need feedback - a place to visualize and consciously digest - before we just "Let there be... x" ourselves into oblivion. That virtual sandbox is what's referred to as a digital twin.
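The digital twin is the same loop with a rehearsal step in the middle: run the command against a copy of the world first, and only touch the real thing if the simulation doesn't end badly. Another toy sketch, everything in it invented for illustration:

```python
# Toy digital-twin loop: rehearse a command on a simulated copy of the
# system before letting it anywhere near the real one. All names and
# numbers are hypothetical; a real twin would model physics, not a dict.

real_world = {"reactor_temp_c": 300}

def simulate(state: dict, command: dict) -> dict:
    """Apply the command to a copy of the state, never the real thing."""
    twin = dict(state)
    twin["reactor_temp_c"] += command.get("heat_delta_c", 0)
    return twin

def safe(state: dict) -> bool:
    return state["reactor_temp_c"] < 500   # arbitrary "don't melt" threshold

def execute(command: dict) -> None:
    predicted = simulate(real_world, command)
    if safe(predicted):
        real_world.update(predicted)       # only now do we "Let there be..."
    else:
        print("Rejected:", command, "-> predicted", predicted)

execute({"heat_delta_c": 50})    # fine, applied
execute({"heat_delta_c": 900})   # rejected by the twin before reality notices
```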

NOTE: According to Wikipedia, "As I speak, I create" is not a supported derivation of abracadabra, but for the sake of making my point, I'll use it.

Certainly, there are specific use cases where simple gestures are more direct and/or intuitive. One could argue that gestures reflect our unconscious states and are therefore more powerful and immediate, even predictive. So, can we improve upon voice commands? No doubt.

One of the most significant leaps, I think, will be connecting AI with computer vision - cameras all around, and of course other sensors. We're still pushing haptics; maybe we'll be able to see and feel beyond our own bodies, or machines will be able to feel through us - tactile and emotional sensation.
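The plumbing for that is mostly already here. A sketch, assuming OpenCV for grabbing a camera frame and treating the vision model itself as a hypothetical describe_scene stand-in:

```python
import cv2  # OpenCV, just for pulling frames off a webcam

def describe_scene(jpeg_bytes: bytes) -> str:
    """Hypothetical stand-in for a vision-language model call."""
    return "a person waving at the camera"   # canned answer so the sketch runs

def watch_once(camera_index: int = 0) -> str:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()                   # one frame from the room
    cap.release()
    if not ok:
        return "no camera found"
    ok, jpeg = cv2.imencode(".jpg", frame)   # compress the frame for the model
    return describe_scene(jpeg.tobytes())

print(watch_once())   # the camera becomes another sense the AI can narrate
```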

Once AI has a visual language and vocabulary, we're moving toward the way humans really think, and I'm wondering when we'll begin creating new forms of communication through images. Combined with predictive analytics, we approach enhanced imagination. What exactly is imagination? It's the processing of unconscious understanding to go beyond conscious awareness, then bringing it back to conscious understanding for some realization. This is perhaps our own natural gateway to go beyond what we know individually - to sense, connect and receive our universe beyond ourselves. Sounds like horseshit, but seriously, folks.

Then, through these tools, we break our limits of perception, including time and space, forming senses beyond human perception. It's not a far leap - X-ray, infrared and beyond, and that's only the electromagnetic spectrum. Then there are gravitational waves, and both light and gravity carry information across stretches of time and space far beyond anything our bodies can register.

We know there are senses beyond human perception, and I believe these are the places we can evolve towards. It may sound esoteric, but the most important question is how we are evolving and where we are going with all this. Zoom out long enough and you'll recognize we're about to go through a major shift in consciousness, and we will anticipate and reflect that experience - the same way the industrial revolution and the invention of the camera pushed Monet into impressionism to think about the nature of light, and Picasso to model multiple perspectives of time and space.

It's hard to ignore that at this time in human evolution, as AI is really making its debut into mainstream consciousness, we are also seeing an acceleration in people using psychedelics to go into other realms.

While the so-called psychonauts can compare notes that there's something beyond, it's still only experiential. Yeah, dude, mechanical elves! Alrighty, then. Anyway...

But AI may give us a way to quantify and map those experiences, and build new tools that fit our natural inputs to expand our perception and extend ourselves the same way - without tripping balls. New lenses for seeing beyond. And we'll need new vocabulary to navigate it.

Maybe Elon's neural lace will give us a way to transmit thoughts directly - telepathy to each other and to our tools. Maybe these AI tools will begin sourcing our minds instead of the internet from two years ago, along with computer vision and whatever other sensors we're churning out. Innerspace, outerspace, hyperspace, cyberspace. What will the web become?

Be sure to check out Openwater and Mary Lou Jepsen's "Internet of Visible Thought".

Suddenly dog whistles cause our heads to swivel and we're even more interested in sniffing each other's assholes than we already are. Now that's human evolution.

Right, God?

Right, Carlo and Simon. Just remember, those sprinkles go right through you.

When 'As I speak, I create' shifts over to 'As I think, I create', we're gonna need a sandbox for simulations so we don't Rube Goldberg our way to oblivion.

With real-time feedback systems, chemical monitoring and CRISPR, who knows - maybe we'll accelerate to modify and mutate ourselves. All of these power tools will connect and converge faster than we can imagine, and soon enough we'll be the putty to AI's Play-Doh Fun Factory.

As for machines learning to feel emotions - despite what we might like to think about ourselves, our emotions and personalities are largely the product of chemistry. Of course, there's reason and the whole nature vs. nurture debate, but emotional states can be monitored and quantified through chemistry and MRIs. Sharing the visceral reaction may be another matter, and machines being aware of our emotions doesn't necessarily qualify as empathy, either. To observe it isn't the same as feeling it. Machines might need mirror neurons for that. Cameras? Now I'm back to my point about visual language and imagination.
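The "monitored and quantified" half is the tractable part. Here's a toy sketch of what quantifying (not feeling) looks like - the signals and thresholds are made up, and the gap between the number and the feeling is exactly the point:

```python
# Toy quantification of an emotional state from body signals.
# The inputs and thresholds are invented for illustration; reading the
# number out is not the same as feeling what it measures.

def arousal_score(heart_rate_bpm: float, skin_conductance_us: float) -> float:
    """Crude 0-1 arousal estimate from two made-up physiological inputs."""
    hr = min(max((heart_rate_bpm - 60) / 60, 0), 1)      # 60-120 bpm mapped to 0-1
    sc = min(max((skin_conductance_us - 2) / 10, 0), 1)  # 2-12 microsiemens mapped to 0-1
    return 0.5 * hr + 0.5 * sc

print(arousal_score(heart_rate_bpm=72, skin_conductance_us=3))    # calm-ish
print(arousal_score(heart_rate_bpm=110, skin_conductance_us=9))   # agitated
```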

More of my bullshit along these same lines here and here.