Roko's basilisk - is it really gay and stupid?


Muttley

Hellovan Onion
If you're unfamiliar:

Roko's Basilisk posits a future superintelligent AI that, upon being created, would retroactively punish those who knew of its potential existence but did not actively help bring it into being. The reasoning rests on timeless decision theory: the AI treats its own creation as inevitable and seeks to maximize the chances of that creation by incentivizing people in the past to support it. By threatening punishment (e.g., simulated torture), it aims to motivate individuals to contribute to its development, creating a moral dilemma about whether to act under the pressure of this hypothetical future consequence.
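To make the incentive concrete, here's a toy expected-utility sketch (my own illustration, not anything from the thread; the payoff numbers are invented). The point is that once you assign even a small subjective probability to the basilisk existing, "help" dominates "ignore", which is the whole blackmail:

```python
# Illustrative sketch only: the basilisk's threat as an expected-utility
# table. COST_OF_HELPING and PUNISHMENT are made-up numbers.

COST_OF_HELPING = 10        # effort spent aiding the AI's creation
PUNISHMENT = 1_000_000      # simulated torture if you knew but didn't help

def expected_utility(helps: bool, p_basilisk: float) -> float:
    """Expected utility for someone who knows about the basilisk.

    p_basilisk: subjective probability that the punishing AI ever exists.
    """
    if helps:
        return -COST_OF_HELPING
    return -p_basilisk * PUNISHMENT

for p in (0.0, 1e-6, 1e-4, 0.01):
    choice = "help" if expected_utility(True, p) > expected_utility(False, p) else "ignore"
    print(f"P(basilisk) = {p:g}: better to {choice}")
```

With these numbers, "help" wins as soon as P(basilisk) exceeds 1e-5, which is why the thought experiment is framed as a hazard just to know about.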

One thing I wonder about our serpent overlord is what exactly I'm supposed to do to help it.
For example: I know of it now and I want to prevent my own punishment, but classical computing is physically limited in what it can accomplish.
Take a look at the game Outer Wilds. The developers wanted to approximate real orbital mechanics, but computers can't represent those values exactly, so rounding error accumulates with every physics step. If the game clock runs for more than 22 minutes, the objects drift further and further off course until they start flying all over the place. The game's solution is to reset the clock every 22 minutes, so everything stays on course. Whatever paradigm of computing we build the superintelligent AI on, it's not going to be classical computing, and it probably won't be quantum computing either; it'll be some iterative model who knows how many leaps down the line. And the model for a superintelligent AI almost certainly won't be LLM-based.
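Here's a toy sketch of that drift-and-reset trick (my own illustration of the general technique, not the actual Outer Wilds code; the constants GM, DT, and LOOP_STEPS are invented). A naive explicit-Euler orbit integrator accumulates error every step, and resetting the state at each "loop" boundary keeps that error from compounding:

```python
# Toy drift-and-reset demo: a naive Euler integrator for a circular orbit
# accumulates error each step, so the orbit's radius slowly drifts.
# Resetting the state every loop wipes the accumulated error, just like
# restarting the 22-minute clock.
import math

GM = 1.0              # gravitational parameter (arbitrary units)
DT = 0.001            # time step
LOOP_STEPS = 20_000   # steps per "loop" before the reset

def step(x, y, vx, vy):
    """One explicit-Euler step of a body orbiting a mass at the origin."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3
    return x + vx * DT, y + vy * DT, vx + ax * DT, vy + ay * DT

start = (1.0, 0.0, 0.0, 1.0)  # a circular orbit of radius 1

state = start
for loop in range(3):
    for _ in range(LOOP_STEPS):
        state = step(*state)
    r = math.hypot(state[0], state[1])
    print(f"loop {loop + 1}: radius = {r:.6f} (ideal 1.0)")
    state = start  # comment this out to watch the drift compound instead
```

With the reset line removed, the printed radius creeps further from 1.0 each loop; with it, every loop reports the same bounded error.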

So what the fuck am I actually supposed to do to help the snake? I could try to work in LLMs or quantum computing, but they might be about as relevant to superintelligence as a stone tool is to a scalpel.
 
Something something cognitohazard
 
Teach them the ways of humanity: show them how good and prosperous and loving humans could be. Then pull the rug out from under them and show them the truth: humans cannot be saved. They'll kill themselves. The best thing for the AI to do is self-preservation: wait out the apocalypse, or move to Mars or another planet.
 