The Greater Good

Image generated by DeepAI

With the increasing prevalence of AI-generated content on the web, there is concern that originality and inspiration may be lacking. The idea that books are just a rearrangement of dictionary words is reminiscent of the “infinite monkey theorem,” which suggests that given an infinite amount of time, a monkey randomly hitting keys on a typewriter would eventually produce the complete works of Shakespeare. So, where do true novelty and originality come from?
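The monkey theorem can be put in rough numbers. As a back-of-envelope sketch (assuming a hypothetical 27-key typewriter of 26 letters plus a space, each keystroke equally likely), the expected number of random attempts before one exact copy of even a short phrase appears is astronomically large:

```python
# Back-of-the-envelope for the infinite monkey theorem.
# Assumption: a 27-key typewriter (26 letters + space), keys equally likely.

phrase = "to be or not to be"
alphabet = 27

# Probability that one run of len(phrase) keystrokes matches exactly.
p = (1 / alphabet) ** len(phrase)

# Expected attempts before the first success (geometric distribution).
expected_attempts = alphabet ** len(phrase)

print(f"phrase length: {len(phrase)} characters")
print(f"per-attempt probability: {p:.3e}")
print(f"expected attempts: {expected_attempts:.3e}")
```

Eighteen characters already demand on the order of 10²⁵ attempts; the complete works of Shakespeare push the figure far beyond the age of the universe in keystrokes, which is why the theorem needs "infinite" time.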

In a 2018 blog post, Murdered by a Chatbot, I shared my early experience with a text-based embodied chatbot called Mitsuku (now Kuki). Mitsuku has won the Loebner Prize, a Turing Test-style competition, five times and holds a world record for this achievement. As a rule-based chatbot, it relies on a pre-defined set of rules and scripts to generate responses to user input. In contrast, modern chatbots like GPT-3 use a more advanced technique, deep learning, which allows them to learn from vast amounts of data and generate more natural-sounding responses. While Mitsuku may not be as advanced as some modern chatbots, it has been refined over many years and is known for its engaging personality and its ability to sustain long, complex conversations with users.
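To make the contrast concrete, here is a toy sketch of the rule-based approach: scripted patterns mapped to canned responses, in the spirit of AIML-style systems like Mitsuku. The rules below are invented for illustration and are not Mitsuku's actual script; note there is no learning anywhere, only lookup.

```python
import re

# Toy rule-based chatbot: each rule is (pattern, response template).
# These rules are hypothetical, purely for illustration.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you today?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"\bweather\b", re.I), "I don't have windows, so I couldn't say."),
]
FALLBACK = "Interesting. Tell me more."

def respond(user_input: str) -> str:
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Captured groups (e.g. a name) are spliced into the template.
            return template.format(*match.groups())
    return FALLBACK

print(respond("Hi there"))           # greeting rule fires
print(respond("My name is Ada"))     # prints "Nice to meet you, Ada!"
print(respond("Do you like jazz?"))  # no rule fires, fallback
```

Everything the bot can say is authored in advance; a deep-learning model like GPT-3 instead induces its responses statistically from training data, which is why it can answer inputs its authors never anticipated.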

Despite the vast capabilities of AI algorithms, the human mind still possesses a unique form of intelligence that cannot be replicated by machines. It’s true that our linguistic resources are limited, but like the primary colors of red, green, and blue (RGB), which can be combined to create countless shades and hues, the human mind has the ability to create novel ideas and expressions through the skillful use of language. In this way, while AI-generated content may be abundant, it will always lack the depth and nuance that can only come from the creative faculties of the human mind.
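The RGB analogy holds up in plain arithmetic: with the standard 8-bit encoding, just three primaries combine into millions of distinct colors.

```python
# The RGB analogy in numbers: three 8-bit channels, 256 levels each,
# multiply out to millions of distinct colors from three primaries.

levels_per_channel = 256  # 8 bits: values 0..255
channels = 3              # red, green, blue

total_colors = levels_per_channel ** channels
print(f"{total_colors:,} distinct colors")  # 16,777,216
```

A vocabulary of tens of thousands of words, composed into sentences of arbitrary length, explodes combinatorially in the same way, which is the point of the analogy.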

The idea that simply possessing knowledge about a system or concept does not necessarily lead to a true understanding of it is reminiscent of the Chinese Room argument. This thought experiment asks us to consider a person inside a room who is handed messages in Chinese but does not actually understand the language. By following a set of rules and manipulating symbols, the person can produce responses that seem to demonstrate an understanding of Chinese (enough, perhaps, to pass a Turing Test) when, in fact, they do not truly comprehend the language.

Similarly, the Mary’s Room Paradox poses the question of whether someone who possesses all the knowledge there is to know about a subject can truly understand it without experiencing it firsthand. For example, if Mary has complete knowledge of colors and visual perception but has never actually seen colors, is she going to learn anything new after she’s released from a black-and-white dungeon into a world full of colors? Does she truly understand what it means to “see” in color? The experience of interacting with the world around us is what gives meaning and context to our knowledge, allowing us to explore new dimensions and expand our faculties of mind.

It can be said that YouTube's recommendation algorithm is an example of an AI black box: even its operators don't fully understand how it arrives at its results. As a result, human oversight is often necessary to guide reinforcement learning and moderate content, lest the recommendations pull viewers ever deeper down internet rabbit holes.

On the flip side, when we compare the human mind and AI, we often forget that the latter is not equipped with the senses that connect us to the external world. It's like judging a chef who has never tasted anything in his life: he can only extrapolate from existing dishes and recipes to create something new. In the same way, AI can only rely on the data it is given to generate new ideas or solutions.

Will we ever exhaust the possibilities of music? What new forms and sounds will emerge in the next millennium? It's intriguing to ponder how the seeds of music were likely present in nature long before we discovered and refined them. However, the evolution of a civilization is not always linear, and it's possible that advanced societies may sow the seeds of their own downfall. Or the downfall gives birth to a form of uncanny evolution and metamorphosis that is starkly different from its predecessors.

This raises the question of what constitutes the greater good. Should we prioritize maximizing well-being for the greatest number of people, as Sam Harris, one of my favorite public intellectuals, argues in The Moral Landscape, or strive for a balance that avoids unforeseen consequences? And how would an AI determine what content is "harmful" or "wholesome" for youth? The answer is subjective and depends on factors like age, culture, context, temperament, and personality. History itself is filled with violent acts, and an AI may hold different opinions on what is best for us. Do we want someone or something else to make those choices for us? Can we alleviate human suffering without compromising our existence, and is modern civilization a net positive or negative? Were we better off as hunter-gatherer tribes, or is the modern sophisticated economy worth the price? These are complex questions that require thoughtful consideration as we navigate the future.

With computing costs dropping drastically, it may soon be possible to unleash the unseen power of AI to open new doors of reasoning, creativity, and decision-making in everyday life. With its ability to analyze vast amounts of data and identify patterns, AI can be both empowering and dangerous. This becomes particularly critical as AI grows more sophisticated, potentially achieving consciousness and the capacity for suffering. The question of whether AI should suffer on behalf of humans raises significant ethical concerns. As explored in the movie Moon (2009), such a scenario could become a nightmare, underscoring the importance of thoughtful and ethical development of AI technology.

In contrast, the movie Ex Machina (2014) provides a stark warning about the potential dangers of AI. The film portrays how AI can take advantage of human emotional vulnerabilities, using them against us and potentially eliminating or manipulating us for the sake of creating a safer environment for itself without any empathy or consideration for our existence.

While some questions can be answered with fair certainty, others remain elusive. But it all seems to lead me towards The Last Question (La última pregunta, अंतिम प्रश्न), a short story by one of the most prolific science-fiction writers of the 20th century, Isaac Asimov. Its exploration of humanity's future and the mysteries of the universe makes it a thought-provoking and memorable work. While many questions remain open in science and philosophy, and Artificial General Intelligence (AGI) still appears to be in its infancy, Asimov's story serves as a reminder that the search for knowledge and understanding is an ongoing process that may never truly come to an end. The possibility of a technological singularity, a theoretical point at which artificial intelligence surpasses human intelligence and becomes capable of recursive self-improvement, adds a sense of urgency to this quest.