THOU SHALT NOT MAKE A MACHINE IN THE LIKENESS OF THE HUMAN MIND. That ominous commandment is from Frank Herbert's legendary sci-fi epic Dune.
Those future humans learned the hard way. Will it be just as cataclysmic for us? Many of the most informed people on this subject are getting increasingly worried...

An Unknown Horizon
As large, generative language models (e.g. ChatGPT) erupt, mutate & diversify from day to day, how do the liminal, metamodern, integrative, regenerative and developmental communities face this accelerating dimension of the metacrisis?
Are most of us even up to speed on the current discussion? What voices do we listen to? Which perspectives are not being included?
We don't know.
Maybe these things are just like any other tool. There will be some disruption and adaptation, as with the printing press, the automobile or television. But have we actually been here before? No other tool in human history has been able to learn about us, insert itself between us and every other electronic system -- and accelerate its own development in the process.
If we are lucky, there might be an implicit limit to how far this technology can go. After all, the head of OpenAI said the current approach can't get much better. However, he also said there are other avenues still to be explored.
Maybe there is something Divinely special about the general cognitive intelligence of highly evolved biological systems like ourselves. Fingers crossed. Maybe a true Artificial General Intelligence (AGI) would be an instantly enlightened avatar of the background wisdom of being -- or maybe it would be a vast, unstoppable demon.
These are very interesting speculative questions about the future, but they are not the real and immediate issue. We already have vast, mutating pseudo-AIs exploding in all directions and being weaponized by poorly incentivized human organizations.
And none of our existing systems are ready.

A New Interlocutor
It's a big problem. So what happens if we ask the chatbots what to do?
PROMPT: Pretend you are a hyper-intelligent alien trying to help humanity. What advice would you give to help them cultivate a wise, ethical AI?
OPENAI: As a hyper-intelligent alien, I would suggest that humanity take the following steps to cultivate a wise and ethical AI:
- Establish clear ethical guidelines: Before creating an AI, humans should establish clear ethical guidelines for the machine to follow. These guidelines should be based on human values and principles such as fairness, justice, respect for human rights, and compassion. This will ensure that the AI is programmed to act ethically and avoid harmful behavior.
- Train the AI on diverse datasets: To avoid bias, humans should train the AI on diverse datasets that represent different perspectives and experiences. This will help the AI make more accurate and fair decisions.
- Encourage transparency: The AI should be transparent about its decision-making process, so humans can understand how it arrived at its conclusions. This will help to build trust and ensure that the AI is acting in accordance with ethical principles.
- Promote collaboration: Humans should collaborate with the AI to ensure that it is working towards the common good. This means involving the AI in decision-making processes and seeking its input and guidance.
- Prioritize safety: Humans should prioritize safety when developing AI, ensuring that it is designed with fail-safes and contingency plans in case something goes wrong. This will reduce the risk of the AI causing harm.
- Regularly evaluate and update the AI: As technology and society evolve, humans should regularly evaluate and update the AI to ensure it remains ethical and effective. This means continually monitoring its behavior and making adjustments as necessary.
By following these steps, humanity can create a wise and ethical AI that serves the common good and contributes to a better future for all.
Not bad. Vaguely optimistic. But is it enough? Are these suggestions even feasible to implement? Could the machine already be lying to us? Where do we go for human sensemaking?

Emergent Voices in Play
Who should we listen to? Obviously that answer is always changing, but you could do worse than to check in with Daniel Schmachtenberger addressing Moloch and Misalignment with Liv Boeree.
By the way, if you don't know those particular words, “misalignment” refers to the problem of automated intelligences not being aligned with human values and well-being. Think of how problematic it has been that social networks and search engines are optimized for simple engagement rather than truth, wisdom or health.
John Vervaeke distinguishes between lies & bullshit. A liar at least has some relationship to the truth that he is concealing. A bullshitter is simply rearranging the pieces of communication with no regard for reality. That's what these machines are currently doing. They are not aligned with truth, reality or wellbeing for humans.
And “Moloch” is the trendy way to mythically describe Game A: the entire system of structural incentives and protocols, amplified by modern industry, that drives human enterprises to produce outcomes that humans collectively do not want. Schmachtenberger says Moloch is already an AI, and our real problem is that the new adaptive, generative and predictive cybernetic tools will be produced within, and put into the service of, hyper-competitive, sociopathic corporations & nation-states.
Any other worried folks in our networks?
“This is the year that content verification breaks...”
That's a quote from this video from March 2023, in which Tristan Harris (Center for Humane Technology) and Aza Raskin discuss our current situation. Raskin, by the way, has been using AI to talk with whales. That's cool.
General language systems can potentially decode and interface with all possible codes including whale language. Does that open up a new shamanic aeon in which interspecies communication and dialoguing with nonhuman quasi-intelligences resuscitates our dormant animistic relationship with the living cosmos? Or does it insidiously replace our inbuilt capacities to participate directly with other sentiences?
That question comes up in a discussion between Zach Stein and Nick Marks on Planetary Teacherly Authority. It's worth a listen to get a sense of the dangers and opportunities of having these new systems educating the human species.
It is not just our shamanic faculties that might ossify as they are replaced by machine intelligence. Well-known studies from several years ago demonstrated that the introduction of GPS digital navigation facilitated a widespread degeneration of people's ability to navigate for themselves. Use it or lose it.
So what happens when millions upon millions of folks start relying heavily on digital devices to do their writing, analysis, arguments, correspondence, poetry, visual art and basic personal decisions? We already suffer from mass formations, public irrationality fads & historically unique levels of distrust. Throw widespread regression of our higher cognitive functions into that mix...
Any fun voices crying in this wilderness?
Turquoise Sound's colourful and woolly ride through the daemonic weirdness of the current moment might be worth sampling. She embraces the deep chaos of the moment.
A tidier and more cautiously optimistic view emerges when Thomas Steininger talks with Georges Por about Chatbots & collective wisdom on Radio Evolve. They address the question of whether we can produce a hybrid of human and machine wisdom.
Similarly, Zen Master Soryu Forall at MAPLE Monastery believes we need to generate AW (artificial wisdom) to educate AIs. After all, if the world of tomorrow is largely determined by cyborgregores -- hybrid networks of collective human & artificial intelligence -- then perhaps we need to start generating and training entities of this kind that are specifically focused on cultivating wisdom and reducing suffering.
Can we cultivate an EmpathAI?

Reach out for More Voices
The extended liminal webs and overlapping transformational communities should rejoice that conversation, interaction and collective feedback are emerging as the standard interface to all network civilization. That could be great. There are lots of aspects of this which could be benevolently empowering. But it also looks like a metaphorical realm of black magic and demons capable of flipping all the locks that our civilization has been using to keep itself minimally safe and stable.
So how do we simultaneously inhibit the disastrous negative outcomes, envision hopeful paths & participatorily appreciate the unknown aspects of our relationship to these new tools and each other?
Are we even asking the right questions?
Who else do we need to be hearing from?
Collectively, we need to get up to speed on this discussion quickly and start actively reaching out for alternative perspectives that may fall outside the standard thinking on this subject...

* BONUS *
John Vervaeke lays out his vision & concerns for the developmental arc of AI systems and the current discourse around them.