It is *so much harder* to fight confirmation bias in a world of AI and algorithms trained to give the asker exactly what they want. So I guess I'm saying I 100% agree with you, which I think makes us both right... huh.
Yes, I think the most useful feature of the new technologies, and the one most lacking in the individuals using them, is the ability to ask, say, GPT, to provide a counter to an argument, to review the argument for flaws, to look for blind spots that may be there. In other words, the search could and should be much more well-rounded and show multiple points of view, but most people just look for what they want and live in confirmation bias.
Excellent point. It would be wonderful if the AIs would ask you: "After I provide you with arguments for one view, would you like me to provide arguments for differing views?" Or something like that.
I think that would be the ideal, but I agree that most people aren't going to use these technologies this way: first, because they're not inclined to, as you note, and second, because they increasingly can't, since algorithms are designed to pitch people what they want and AI is constrained by the views of its makers.
Famously, Google’s Gemini refused to generate images of white men in response to prompts like “draw a picture of the pope” or even “draw a picture of white men,” because it had been designed to generate images that comported with the values of its designers. Similarly, ask ChatGPT to write an essay arguing why the right to an abortion is absolute, and you’ll get a pretty solid essay. Ask it to write an essay about how a human right to life begins at conception, though, and ChatGPT will tell you that as an AI, it can’t answer values questions. (Or at least that’s what happened when I tried this paired set of prompts.)
So while I agree with you in theory that AI could be used to find the flaws in one’s own arguments, I am concerned that it will only do that effectively if your arguments run counter to what the AI’s designers already think. I fear that this problem will worsen over time, as AI pulls language from existing information posted on the internet: as more AI-generated arguments appear online, they will increasingly become the source material for new AI-generated arguments. If you start out with a groupthink effect (which already seems to exist in AI responses to prompts), it will snowball, making it increasingly difficult for even the best-intentioned, most earnest truth-seeker out there to find arguments that contradict their own.
This is one of the reasons I am grateful for Substack, and for being able to correspond with you, @D.A. DiGerolamo, and others like @Maarten Boudry. It is so good to have a space where we can engage in thoughtful disagreement.
Yes, I agree that the problem will worsen over time. I recently read an article arguing that Musk’s purchase of Twitter was a long-term plan to pump that data into his own AI model, giving him up-to-date data to train the machine. So far, I have found AI to be a good jumping-off point for research (“give me some sources that argue X,” “give me some theories of Y,” etc.): I then go off, review the sources, and see what I need and what is relevant. It has given me some good ideas for topics and concepts I had not previously known or thought about, and from there I go into research mode and let my curiosity take hold. But I don’t trust the AI to be accurate, or to be free of blind spots of its own.