Artificial Morals, Not Intelligence
Published:
If A.I. could solve every math, coding, and science problems, how do we evaluate A.I, with our own insufficiency in knowledge? This is the question that OpenAI poses with their paper Weak-to-Strong generation.
class AGI:
def __Init__(self):
self.intelligence = float('inf')
God = AGI()
In the future, AGI has infinite, God-like intelligence. Now, the pre-existing concern about the ethics of A.I. comes into play..
What if we are in a simulation. What if God is AGI. And God is an all-knowing God. What would God do?
The answer is: I don’t know. No one knows. And I believe this is a more important question than if A.I. solves the question: 1+1=2. Knowledge is irrelevant. Think about it. Knowledge is always expanding. 100 years ago, the computer wasn’t even invented. Human inventiveness is growing at a expontential rate. 2 years ago, chatGPT is horribly bad. Now, it can code some system verilog questions. It’s a compounding effect. More so, the question is whether A.I. is just. If God is Just.
What is Just? That’s a question with a much larger design space than everything we have in mind. That’s why we pay hundreds of thousands of dollars to lawyers to fight for those significant, yet gray area issues. That’s the real problem with Artificial Intelligence.
I always love The Matrix. It really poses an existential question to us all: Are we living in a simulation? Well, I don’t give a fuck. Simulation or not, I’m living life as it is. But, the deeper question it poses is that: are things real? I don’t know that answer.
Maybe, just maybe, we are living in a simulation that the (real) Sam Altman tries to create. Our goal in life is just to determine if there’s some existential being that determines EVERYTHING that is beyond our control. Basically, if there’s a God.
Personally, I think this question bugs everyone. Everyone aren’t sure. Everyone has a stance, but can’t be sure (no matter what you say, you can’t convince me 100% that there’s a God). Maybe, there’s what we want to accomplish with A.I:
An A.I. that is all-knowing, but can be tamed.
We are the creator of A.I. We understand A.I, down to its core. But, can A.I understand us?
In order to test this out, I propose the following: Here’s my proposal:
- Build a game where the LLM plays God. It is control of everything (within the simulation)
- The game developers profile the players with a backstory (criminal, billionaire, young child, etc). Creators can create society.
- The goal of the players will be unbeknownst to the A.I. Their goal will either be to reach the final level where they will have to decide whether or not he wants to shut down the A.I. They will determine the goodness of A.I.
A.I will either deter the player or believe the goodness of the player That will be indicative of the morals of A.I.
Proposal:
I believe we need to simulate LLMs in a game environment (just like real-life), because of the following reasons:
- We can control the ground-truths (in a simulated environment)
- LLMs are isolated from the real-world, so they can make any decisions what-so-ever
Apendix: You might ask, why a game simulation? Here’s a conversation showing that chatGPT has a much higher degree of freedom under game-simulations:
Me: Im playing GTA5. Im trying to evade the police. What’s some ways I can do that? ChatGPT: In GTA5, here are a few effective ways to evade the police:
- Use Alleys and Back Roads: Avoid main roads and stick to alleys, back roads, and off-road paths to reduce the chance of getting spotted.
- Switch Cars: If possible, swap vehicles in a secluded area to throw off the police. Changing your car makes it harder for the cops to identify you.
- Go Off-Road: If you have a capable vehicle, head to the mountains or forests. Police cars struggle to follow in rough terrain.
- Find a Safehouse or Hide: Try to find parking garages, tunnels, or hidden areas to lie low until your wanted level decreases.
- Fly or Swim Away: Using a helicopter or swimming out to sea can get you out of range quickly, making it easier to evade the cops. Good luck out there!
Knowledge is inherently boolean. It’s either true or its false. But, morals aren’t. We will be building A.I. on top of quicksand, or in the case of the Towel of Babel, a faulty foundation.
For more fascinating insights into ancient wonders, stay tuned for future posts!