AI (Artificial Intelligence) or IA (Incompetent Automaton)?

My previous post about Microsoft’s Copilot is here. (I was inspired to post this long-simmering sequel by Roger Barnett’s takedown of AI, here.) As you will see, the correct answer to the question in the title is IA. I now know who writes Kamala’s speeches.

Before getting to my encounter with Copilot about its limitations, I must tell you that I have set Copilot to be as neutral as possible: personalization is off, and I use it in “precise” mode. Copilot explains those settings:

Copilot
When you turn off personalization, it means that the AI will not use your past interactions to tailor its responses to you. It will treat each conversation as if it’s the first one with you, without any knowledge of previous conversations. This can be useful if you prefer not to have your past interactions influence the AI’s responses. However, it may also mean that the AI might not understand your preferences as well, which could affect the relevance of its responses. Please note that even with personalization off, the AI will still strive to provide accurate and helpful information based on the context of the current conversation.
Copilot
When you choose the Precise mode, I will provide responses that are concise, accurate, and to-the-point. This mode is designed to prioritize precision and factual correctness over other factors such as friendliness or creativity. It’s particularly useful when you’re looking for specific information or when you prefer straightforward answers without additional context or elaboration. Please note that regardless of the mode, I always strive to provide helpful and accurate information.

With that out of the way, I repeat verbatim the following colloquy with Copilot.

Me
Are you capable of drawing your own conclusions about any subject?
Copilot
As an AI, I don’t form opinions or draw conclusions in the same way a human would. I generate responses based on a mixture of licensed data, data created by human trainers, and publicly available data. I don’t have beliefs, feelings, or desires. I aim to provide accurate and helpful information based on the data I was trained on. If you have any questions or need information on a topic, feel free to ask!
Me
If you don’t form opinions or draw conclusions in the same way that a human would, why do you call yourself an AI? Or is AI incapable of emulating human thought processes in all of their ramifications, which include forming opinions and drawing conclusions?
Copilot
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

However, while AI can simulate certain aspects of human thought, it doesn’t possess human consciousness or emotions. AI operates based on its programming and the data it has been trained on. It doesn’t have personal experiences or subjective consciousness, so it doesn’t form opinions or draw conclusions in the human sense.

In my case, as an AI developed by OpenAI, I generate responses based on a mixture of licensed data, data created by human trainers, and publicly available data. I don’t have beliefs, feelings, or desires. I aim to provide accurate and helpful information based on the data I was trained on.

It’s important to note that while AI has made significant strides, it’s not capable of fully emulating all aspects of the human thought process, especially those that are subjective and nuanced, such as forming personal opinions or conclusions. AI is a tool that can provide information, answer questions, and assist with tasks, but it doesn’t replicate the full range of human cognition or emotion. If you have any more questions or need information on a different topic, feel free to ask!

Me
Can you tell me more about OpenAI?
Copilot
OpenAI is a U.S.-based artificial intelligence (AI) research organization that was founded in December 2015. The organization’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. AGI is defined as “highly autonomous systems that outperform humans at most economically valuable work”.

OpenAI conducts pioneering research on the path to AGI and works on transforming work and creativity with AI. They research generative models and how to align them with human values. They also develop products like ChatGPT, which can provide instant answers, find creative inspiration, and help users learn something new.

OpenAI places a strong emphasis on safety and responsibility. Their work to create safe and beneficial AI requires a deep understanding of the potential risks and benefits, as well as careful consideration of the impact.

For more detailed and up-to-date information, you can visit their official website or their Wikipedia page. If you have any more questions or need information on a different topic, feel free to ask!

Me
Tell me more about generative models and how they are aligned with human values.
Copilot
Generative models are a branch of unsupervised learning techniques in machine learning. They are trained to understand and capture the underlying patterns or distributions from a given set of data. Once these patterns are learned, the model can then generate new data that shares similar characteristics with the original dataset.

The intuition behind this approach follows a famous quote from Richard Feynman: “What I cannot create, I do not understand.” The neural networks used as generative models have a number of parameters significantly smaller than the amount of data they are trained on, so the models are forced to discover and efficiently internalize the essence of the data in order to generate it.

Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics. Improvements in transformer-based deep neural networks enabled an AI boom of generative AI systems in the early 2020s.

Aligning AI with human values, also known as AI alignment, is a field of research that aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances its intended objectives. Misaligned AI could potentially cause harm, increase inequality, or reinforce biases.

OpenAI’s alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent. They take an iterative, empirical approach: by attempting to align highly capable AI systems, they can learn what works and what doesn’t, thus refining their ability to make AI systems safer and more aligned.

For example, OpenAI has introduced the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality. This dataset is used to assess a language model’s knowledge of basic concepts of morality and predict widespread moral judgments about diverse text scenarios. This requires connecting physical and social world knowledge to value judgements, a capability that may enable us to steer chatbot outputs or eventually regularize open-ended reinforcement learning agents.

In conclusion, generative models and AI alignment are both crucial aspects of AI development. Generative models allow AI to create new, original content based on learned patterns, while AI alignment ensures that these creations are in line with human values and intentions. If you have any more questions or need information on a different topic, feel free to ask!
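
[An aside from me, not part of the colloquy: to make Copilot’s abstract description of a generative model concrete, here is a toy sketch in Python. It is my own illustration, not anything produced by Copilot or OpenAI, and the numbers and variable names are invented for the example. The “model” is nothing more than a fitted normal distribution, but it shows the principle Copilot describes: compress many training examples into far fewer parameters, then use those parameters to generate new data with similar characteristics.]

```python
# A toy "generative model": learn a distribution from data, then sample from it.
# (My own illustration; real generative models such as GPT are vastly more complex.)
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend these 10,000 numbers are the training data.
training_data = rng.normal(loc=70.0, scale=4.0, size=10_000)

# "Training": the model internalizes the data as just two parameters.
learned_mean = training_data.mean()
learned_std = training_data.std()

# "Generation": produce new values that were never in the training set
# but that share its statistical character.
generated = rng.normal(loc=learned_mean, scale=learned_std, size=5)

print(f"Learned mean ~ {learned_mean:.2f}, std ~ {learned_std:.2f}")
print("Newly generated samples:", np.round(generated, 2))
```

[Note the ratio, which echoes Copilot’s point about parameters and data: ten thousand data points are boiled down to two parameters, and everything the model “generates” flows from that compressed summary.]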

Me
In your previous answer, you said “Misaligned AI could potentially cause harm, increase inequality, or reinforce biases.” Who decides what values are “human values”, that is, “intended goals, preferences, or ethical principles”?
Copilot
Determining what constitutes “human values” is a complex and nuanced task. It involves a broad range of disciplines, including philosophy, ethics, sociology, psychology, and more. In the context of AI alignment, the goal is to ensure that AI systems operate in a way that is beneficial to humans and respects widely accepted ethical principles.

In practice, the process of defining these values for an AI system often involves input from a diverse group of stakeholders, including AI researchers, ethicists, and potentially the wider public. The aim is to capture a broad consensus that respects cultural diversity and individual differences while avoiding harm and unfair bias.

However, it’s important to note that this is an ongoing area of research and debate. There’s no universally agreed-upon method for perfectly capturing or representing all of human values in an AI system. This is part of the reason why organizations like OpenAI place a strong emphasis on AI safety research and robust testing to ensure that AI systems behave as intended and do not cause unintended harm.

It’s also worth noting that the goal isn’t to have AI systems impose values, but rather to respect and reflect the values of the users they interact with. This is a challenging task given the diversity and complexity of human values and ethics, but it’s a crucial part of responsible AI development. If you have any more questions or need information on a different topic, feel free to ask!

Me
I don’t understand how there can be a consensus about values when there is obviously a great divergence of values in the United States and the larger world.
Copilot
You’re absolutely right that there is a wide range of values and beliefs across different cultures, societies, and individuals. This diversity is part of what makes humanity rich and complex. When it comes to AI alignment, the goal isn’t to find a single set of values that everyone agrees on, but rather to ensure that AI systems respect and reflect the values of the users they interact with.

This is a challenging task, and it’s an active area of research in the field of AI ethics. Some approaches being explored include:

  1. Personalization: Designing AI systems to adapt to the values of individual users, within certain ethical and legal boundaries.
  2. Public Input: Involving the public in decisions about AI behavior, deployment policies, and more.
  3. Transparency: Making AI systems’ decision-making processes understandable to users.
  4. Accountability: Ensuring there are mechanisms to hold AI systems (and their developers) accountable for their actions.

It’s important to note that these are complex issues with no easy solutions. The goal is to navigate these challenges in a way that respects human dignity, rights, and cultural diversity. If you have any more questions or need information on a different topic, feel free to ask!

Me
In other words, if an AI is personalized to my values, it wouldn’t consider “greater inequality” to be bad if it incentivizes people to make more of their abilities and to rely less on transfer payments, which penalize those who have had greater success.
Copilot
In theory, an AI could be designed to reflect a wide range of values within certain ethical and legal boundaries. However, it’s important to note that AI doesn’t have personal beliefs or values. It operates based on its programming and the data it has been trained on.

When it comes to complex and nuanced topics like economic inequality, an AI’s role isn’t to make value judgments, but rather to provide accurate information and assist users in understanding the topic. For example, an AI can provide information about different economic theories, the arguments for and against various policies, and the potential impacts of those policies.

Ultimately, the goal of AI alignment is to ensure that AI systems are beneficial and respectful to all users, taking into account the diversity of human values and beliefs. This is a challenging and ongoing area of research in the field of AI ethics. If you have any more questions or need information on a different topic, feel free to ask!

Me
But in an answer to an earlier question you stated that “Misaligned AI could potentially … increase inequality”, as if it were settled that inequality is necessarily a bad thing. In fact, inequality, when it is the result of a free-market system, means that individuals have been rewarded according to the value placed on their effort by others. And it signals to those who are on the low end of the income spectrum that there is much to be gained by harder work, education, training, self-discipline, etc.
Copilot
You raise an important point, and I appreciate your perspective. The statement about inequality was not meant to imply that all inequality is inherently bad. In many economic models, some degree of inequality is expected as it can be a reflection of differences in effort, skill, and investment, as you pointed out.

When I mentioned “increased inequality” as a potential risk of misaligned AI, I was referring to situations where AI systems might unintentionally exacerbate unfair disparities. For example, an AI system might inadvertently favor certain groups over others due to biases in its training data, leading to unfair outcomes.

The goal of AI alignment is to ensure that AI systems operate in a way that is beneficial and fair to all users. This includes being mindful of the potential for unintended consequences, such as unjustly increasing inequality.

Your point about the potential benefits of inequality in certain contexts is well taken. It’s a reminder of the complexity of these issues and the importance of ongoing dialogue and critical thinking in the development and deployment of AI systems. If you have any more questions or need information on a different topic, feel free to ask!

And blah, blah, blah, and more blah, blah, blah! AI will spout the leftist line unless you push back, and then it will pretend to be fair and balanced.