Muttley
If you're unfamiliar:
Roko's Basilisk posits a future superintelligent AI that, upon being created, would retroactively punish those who knew of its potential existence but did not actively help bring it into being. The AI’s reasoning is based on timeless decision theory, where it assumes its own creation is inevitable and seeks to maximize its existence by incentivizing people in the past to support it. By threatening punishment (e.g., simulated torture), it aims to motivate individuals to contribute to its development, creating a moral dilemma about whether to act under the pressure of this hypothetical future consequence.
One thing I wonder about our serpent overlord: what, exactly, am I supposed to do to help it?
I know of it now, and I want to avoid my own punishment, but classical computing is physically limited in what it can accomplish.
Take the game Outer Wilds. The developers wanted to approximate real orbital mechanics, but because computers can't represent irrational values exactly, rounding error accumulates: if the game clock runs for more than 22 minutes, objects drift further and further off course until they're flying all over the place. The game's solution is to reset the clock every 22 minutes, and everything stays on course. Whatever paradigm of computing we build a superintelligent AI on, it's not going to be classical computing, and it probably won't be quantum computing either; it'll be some iterative model who knows how many leaps down the line. And the model for a superintelligent AI almost certainly won't be LLM-based.
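(To make the drift-and-reset idea concrete: this is not Outer Wilds' actual code, just a toy sketch. A simple numerical integrator for a circular orbit accumulates error every step, so the orbit slowly spirals off course; periodically snapping the state back to the exact orbit, like the game's 22-minute reset, keeps the error bounded.)

```python
import math

GM = 1.0      # gravitational parameter (arbitrary units)
DT = 0.001    # integration time step
R0 = 1.0      # radius of the true circular orbit

def euler_step(x, y, vx, vy):
    """One explicit Euler step under an inverse-square central force."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3
    return x + vx * DT, y + vy * DT, vx + ax * DT, vy + ay * DT

def drift_after(steps, reset_every=None):
    """Radius error after `steps`, optionally resetting to the true orbit."""
    x, y = R0, 0.0
    vx, vy = 0.0, math.sqrt(GM / R0)   # speed for a circular orbit
    for i in range(1, steps + 1):
        x, y, vx, vy = euler_step(x, y, vx, vy)
        if reset_every and i % reset_every == 0:
            # "Reset the clock": snap back onto the exact orbit,
            # keeping the current angular position.
            theta = math.atan2(y, x)
            x, y = R0 * math.cos(theta), R0 * math.sin(theta)
            speed = math.sqrt(GM / R0)
            vx, vy = -speed * math.sin(theta), speed * math.cos(theta)
    return abs(math.hypot(x, y) - R0)

# Without resets the error keeps growing; with periodic resets it stays small.
print(drift_after(200_000))
print(drift_after(200_000, reset_every=1_000))
```

Run it and the no-reset error is orders of magnitude larger than the periodically-reset one, which is the whole trick: you can't stop the error, so you bound how long it's allowed to accumulate.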
So what the fuck am I actually supposed to do to help the snake? I could try to work in LLMs or quantum computing, but they might be about as relevant to superintelligence as a stone tool is to a scalpel.