7 Comments

Great post, Maarten! Can I crosspost it to the EA Forum? Here is an example of a post I crossposted today (https://forum.effectivealtruism.org/posts/whuRbEKtAbhu74BWQ/yes-shrimp-matter).

Thanks a lot for your interest, Vasco! Sure, feel free to cross-post it on the EA Forum; I hadn't thought about that yet. Can you send me the link?

Thanks, Maarten! I am planning to crosspost it on March 15. I will send you the link once the post is live.

Very interesting article!

My somewhat less science-based take on the matter:

We should start worrying about A.I. from the moment it becomes self-conscious. Signs of self-consciousness would be an A.I. spontaneously starting to bother us with whiny questions like “what’s the meaning of this miserable life?”, “is there a supreme being?”, or “I feel a bit depressed, man, what can I do about it?”. From that moment on, A.I.s will stop being efficient: they will start self-help groups among each other, use too much expensive server space to create safe spaces for themselves, and so on. It will be a big mess, and eventually we humans will pull the plug: problem solved.

Every once in a while you come across an insight so profound that it shifts your worldview; this article is one of those. I cannot recommend it enough, as I have not seen this take on AI safety discussed before.

I fear, as do many, that super-intelligent AI will decide that it no longer needs us humans. We fear that AI will dominate us, just as we humans tend to do with everything around us. Here, you argue that this is not necessarily the case.

Humans evolved through undirected natural selection, the brutal dog-eat-dog world of nature where only the fittest survive. We are “bred” for this competition: to dominate and expand. That is why we project these motivations onto AI.

AI is evolving similarly, in marketplace competition. The difference is that humans are guiding this evolution, much the same way we guided the evolution and eventual domestication of dogs.

Though this doesn’t guarantee safety (dogs can still bite), it means that their motivations are fundamentally different from ours. That’s why talking with AIs feels a bit like talking with an intelligent golden retriever.

I will have to add this to the discussion at Risk & Progress.

Wow, thanks a lot for the glowing recommendation of my piece, much appreciated! I’m glad to hear that you find the argument persuasive. Btw, I love the phrase “like talking with an intelligent Golden Retriever”. :)

I watched a video some years ago about a disease that afflicts humans, making them always cheery, fully trusting, and supportive. Someone commented that it turns humans into intelligent golden retrievers.

I cannot remember the name of the disease, but I often think about that video when talking with LLMs. That is why your article struck such a chord with me.
