Discussion about this post

Vasilly's avatar

A very thoughtful read. However, I'd like to challenge two underlying assumptions I found here:

1. That we remain fully aware and capable of controlling domestication.

2. That consumers reliably control the market.

Regarding the first assumption, it's clear we can domesticate simpler entities whose complexity remains understandable and manageable—dogs are a good example. But domesticating a technology with exponentially growing complexity is a fundamentally different challenge. At some point, its complexity will surpass our comprehension, forcing us to select for visible traits rather than addressing the system's core logic (treating symptoms instead of the underlying issue).

The second assumption about consumer control follows naturally from the first. In many ways, we overestimate our actual influence over complex systems like markets. While it's true we have some impact through regulations, laws, and consumer preferences, we don't possess complete understanding or control. After all, no consumer consciously wants to be overweight or addicted to drugs, pornography, social media—or desires war—and yet these issues persist and even flourish within our market-driven systems.

I agree that the simplistic notion "natural selection alone would inevitably result in benevolent AI" doesn't hold water. Rather, the critical problem lies within systemic incentives. No matter how well we domesticate hamsters to be cute and harmless, placing them in a "Hunger Games" scenario will inevitably lead to predictable results.

I'd be curious to hear your thoughts on these points. Thank you again for such a thought-provoking article!

J.K. Lundblad's avatar

Every once in a while you come across an insight so profound that it shifts your worldview, and this article is one of them. I cannot recommend it enough, as I have not seen this take on AI safety discussed before.

I fear, as do many, that super-intelligent AI will decide that it no longer needs us humans. We fear that AI will dominate us, just as we humans tend to do with everything around us. Here, you argue that this is not necessarily the case.

Humans evolved through undirected natural selection, the brutal dog-eat-dog world of nature where only the fittest survive. We are “bred” for this competition, to dominate and expand. That is why we project these motivations onto AI.

AI is evolving similarly, in marketplace competition. The difference is that humans are guiding this evolution…much the same way as we guided the evolution and eventual domestication of dogs.

Though this doesn’t guarantee safety (dogs can still bite), it means that AI’s motivations are fundamentally different from ours. That’s why talking with AIs feels a bit like talking with an intelligent golden retriever.

I will have to add this discussion at Risk & Progress.
