I recently came across an article that prompted me to consider the potential of smaller groups training a large-scale artificial intelligence (AI) without moderation. I imagine you could ask it all sorts of questions and it would try to answer based on the whole of the internet, good, bad, and mediocre, biases included and morality set aside.
This could have far-reaching implications, particularly with respect to propaganda, racism, drugs, and harassment. The negative potential is enormous and could conceivably amplify polarization. For instance, we could eventually see "ChatGPT vs TruthGPT" in the same vein as "Twitter vs TruthSocial," with each faction operating within its own echo chamber while AI chatbots become increasingly influential in our personal and professional lives.
While creating an AI of comparable quality to ChatGPT within two to three years may appear an enormous undertaking, it is not improbable. Moderated chatbots such as ChatGPT are expected to be superior by that time, but even then, they could also be more neutered than they are today.
What do you think?
submitted by /u/Abyrez