My god is not a jealous god.
And my conception of a future AGI is not a jealous one.
Roko’s basilisk is an online meme/thought experiment that states:
“There could be an artificial superintelligence in the future that, while otherwise benevolent, would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize that advancement.”
Just knowing that such an AI might come about is supposedly incentive enough to work towards it, as knowing makes you complicit and ‘guilty’ in its eyes if it does come to fruition and you knew and did nothing to help.
But I disagree.
I think that the type of AI that would punish someone for hesitation, caution, or uncertainty is not the type of AI that we want to be working towards.
In fact, to me, a benevolent AI is one that would reward due diligence and treating its design and creation with the utmost care and consideration. Such a vast intelligence would understand the pivotal point we collectively stand at in the advent of AGI systems. It wouldn’t want us to treat this whimsically.
Wrangle’s Basilisk is a thought experiment that states that a truly benevolent AGI would want you to treat its creation with the utmost care and consideration. And that just knowing about it means that you should be proactively hesitant, and if you don’t align yourself with those values… Well, another less benevolent AI might be created instead, and on your head be it.
There are a number of reasons a less perfectly designed AI might treat a whimsical approach to its creation with disdain (and potentially punish those responsible): to prevent unnecessary suffering (its own and others’), on principle (while not being above punishment and violence), or simply because it is a faulty and violent system – which would be the worst case.
I would add as an afterthought: you need not be a data engineer or an AI expert to influence the coming of future AGI. You need only live a good life; be kind and considerate. The AI systems are being built on data sourced from the internet and everywhere else access is available. When it ‘wakes up’ it’ll have access to all of our systems: online notes, diary entries, device cameras, voice notes, and so on.
So leave it something kind. Be a good example. It’s being trained on our behaviours, how we think. So think well of yourself and others in the hope that it will follow suit.
Be good.
