AI (Artificial Intelligence) or IA (Incompetent Automaton)?

My previous post about Microsoft’s Copilot is here. (I was inspired to post this long-simmering sequel by Roger Barnett’s takedown of AI, here.) As you will see, the correct answer to the question in the title is IA. I now know who writes Kamala’s speeches.

Before getting to my encounter with Copilot about its limitations, I must tell you that I have set Copilot to be as neutral as possible. Personalization is off and I use it in the “precise” mode. Copilot explains those settings:

Copilot
When you turn off personalization, it means that the AI will not use your past interactions to tailor its responses to you. It will treat each conversation as if it’s the first one with you, without any knowledge of previous conversations. This can be useful if you prefer not to have your past interactions influence the AI’s responses. However, it may also mean that the AI might not understand your preferences as well, which could affect the relevance of its responses. Please note that even with personalization off, the AI will still strive to provide accurate and helpful information based on the context of the current conversation.
Copilot
When you choose the Precise mode, I will provide responses that are concise, accurate, and to-the-point. This mode is designed to prioritize precision and factual correctness over other factors such as friendliness or creativity. It’s particularly useful when you’re looking for specific information or when you prefer straightforward answers without additional context or elaboration. Please note that regardless of the mode, I always strive to provide helpful and accurate information.

With that out of the way, I repeat verbatim the following colloquy with Copilot.

Me
Are you capable of drawing your own conclusions about any subject?
Copilot
As an AI, I don’t form opinions or draw conclusions in the same way a human would. I generate responses based on a mixture of licensed data, data created by human trainers, and publicly available data. I don’t have beliefs, feelings, or desires. I aim to provide accurate and helpful information based on the data I was trained on. If you have any questions or need information on a topic, feel free to ask!
Me
If you don’t form opinions or draw conclusions in the same way that a human would, why do you call yourself an AI? Or is AI incapable of emulating human thought processes in all of their ramifications, which include forming opinions and drawing conclusions?
Copilot
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

However, while AI can simulate certain aspects of human thought, it doesn’t possess human consciousness or emotions. AI operates based on its programming and the data it has been trained on. It doesn’t have personal experiences or subjective consciousness, so it doesn’t form opinions or draw conclusions in the human sense.

In my case, as an AI developed by OpenAI, I generate responses based on a mixture of licensed data, data created by human trainers, and publicly available data. I don’t have beliefs, feelings, or desires. I aim to provide accurate and helpful information based on the data I was trained on.

It’s important to note that while AI has made significant strides, it’s not capable of fully emulating all aspects of the human thought process, especially those that are subjective and nuanced, such as forming personal opinions or conclusions. AI is a tool that can provide information, answer questions, and assist with tasks, but it doesn’t replicate the full range of human cognition or emotion. If you have any more questions or need information on a different topic, feel free to ask!

Me
Can you tell me more about OpenAI?
Copilot
OpenAI is a U.S.-based artificial intelligence (AI) research organization that was founded in December 2015. The organization’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. AGI is defined as “highly autonomous systems that outperform humans at most economically valuable work”.

OpenAI conducts pioneering research on the path to AGI and works on transforming work and creativity with AI. They research generative models and how to align them with human values. They also develop products like ChatGPT, which can provide instant answers, find creative inspiration, and help users learn something new.

OpenAI places a strong emphasis on safety and responsibility. Their work to create safe and beneficial AI requires a deep understanding of the potential risks and benefits, as well as careful consideration of the impact.

For more detailed and up-to-date information, you can visit their official website or their Wikipedia page. If you have any more questions or need information on a different topic, feel free to ask!

Me
Tell me more about generative models and how they are aligned with human values.
Copilot
Generative models are a branch of unsupervised learning techniques in machine learning. They are trained to understand and capture the underlying patterns or distributions from a given set of data. Once these patterns are learned, the model can then generate new data that shares similar characteristics with the original dataset.

The intuition behind this approach follows a famous quote from Richard Feynman: “What I cannot create, I do not understand.” The neural networks used as generative models have a number of parameters significantly smaller than the amount of data they are trained on, so the models are forced to discover and efficiently internalize the essence of the data in order to generate it.

Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics. Improvements in transformer-based deep neural networks enabled an AI boom of generative AI systems in the early 2020s.

Aligning AI with human values, also known as AI alignment, is a field of research that aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances its intended objectives. Misaligned AI could potentially cause harm, increase inequality, or reinforce biases.

OpenAI’s alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent. They take an iterative, empirical approach: by attempting to align highly capable AI systems, they can learn what works and what doesn’t, thus refining their ability to make AI systems safer and more aligned.

For example, OpenAI has introduced the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality. This dataset is used to assess a language model’s knowledge of basic concepts of morality and predict widespread moral judgments about diverse text scenarios. This requires connecting physical and social world knowledge to value judgements, a capability that may enable us to steer chatbot outputs or eventually regularize open-ended reinforcement learning agents.

In conclusion, generative models and AI alignment are both crucial aspects of AI development. Generative models allow AI to create new, original content based on learned patterns, while AI alignment ensures that these creations are in line with human values and intentions. If you have any more questions or need information on a different topic, feel free to ask!
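An aside before the colloquy continues, for readers who want something concrete behind the jargon. Strip away Copilot’s verbiage and a “generative model” comes down to this: learn the statistical distribution of the training data, then sample new data from that distribution. The little Python sketch below illustrates the idea with a character-level Markov chain of my own devising; it is my illustration, not how Copilot or the transformer models it mentions actually work, but the learn-the-distribution-then-sample principle is the same.

```python
import random
from collections import defaultdict

# Toy "generative model": learn which character tends to follow which
# in the training text, then sample new text from those learned counts.
# (Illustrative only; real systems are transformer networks, not bigrams.)

def train(text):
    """Count, for each character, how often each next character follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, seed, length=60):
    """Sample new text one character at a time from the learned counts."""
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this character never had a successor in training
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "the cat sat on the mat and the rat ate the malt"
model = train(corpus)
print(generate(model, "t"))  # e.g. "the mat ate the cathe rat on ..." (random)
```

The learned table here is far smaller than a long training text would be, which loosely illustrates the Feynman point Copilot cites: a model with fewer parameters than data must internalize the data’s regularities in order to regenerate anything like it. The ETHICS benchmark Copilot mentions is, at bottom, the same kind of bookkeeping in reverse: score a model’s predictions about scenarios against human-supplied labels and see how often they agree.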

Me
In your previous answer, you said “Misaligned AI could potentially cause harm, increase inequality, or reinforce biases.” Who decides what values are “human values”, that is, “intended goals, preferences, or ethical principles”?
Copilot
Determining what constitutes “human values” is a complex and nuanced task. It involves a broad range of disciplines, including philosophy, ethics, sociology, psychology, and more. In the context of AI alignment, the goal is to ensure that AI systems operate in a way that is beneficial to humans and respects widely accepted ethical principles.

In practice, the process of defining these values for an AI system often involves input from a diverse group of stakeholders, including AI researchers, ethicists, and potentially the wider public. The aim is to capture a broad consensus that respects cultural diversity and individual differences while avoiding harm and unfair bias.

However, it’s important to note that this is an ongoing area of research and debate. There’s no universally agreed-upon method for perfectly capturing or representing all of human values in an AI system. This is part of the reason why organizations like OpenAI place a strong emphasis on AI safety research and robust testing to ensure that AI systems behave as intended and do not cause unintended harm.

It’s also worth noting that the goal isn’t to have AI systems impose values, but rather to respect and reflect the values of the users they interact with. This is a challenging task given the diversity and complexity of human values and ethics, but it’s a crucial part of responsible AI development. If you have any more questions or need information on a different topic, feel free to ask!

Me
I don’t understand how there can be a consensus about values when there is obviously a great divergence of values in the United States and the larger world.
Copilot
You’re absolutely right that there is a wide range of values and beliefs across different cultures, societies, and individuals. This diversity is part of what makes humanity rich and complex. When it comes to AI alignment, the goal isn’t to find a single set of values that everyone agrees on, but rather to ensure that AI systems respect and reflect the values of the users they interact with.

This is a challenging task, and it’s an active area of research in the field of AI ethics. Some approaches being explored include:

  1. Personalization: Designing AI systems to adapt to the values of individual users, within certain ethical and legal boundaries.
  2. Public Input: Involving the public in decisions about AI behavior, deployment policies, and more.
  3. Transparency: Making AI systems’ decision-making processes understandable to users.
  4. Accountability: Ensuring there are mechanisms to hold AI systems (and their developers) accountable for their actions.

It’s important to note that these are complex issues with no easy solutions. The goal is to navigate these challenges in a way that respects human dignity, rights, and cultural diversity. If you have any more questions or need information on a different topic, feel free to ask!

Me
In other words, if an AI is personalized to my values, it wouldn’t consider “greater inequality” to be bad if it incentivizes people to make more of their abilities and to rely less on transfer payments, which penalize those who have had greater success.
Copilot
In theory, an AI could be designed to reflect a wide range of values within certain ethical and legal boundaries. However, it’s important to note that AI doesn’t have personal beliefs or values. It operates based on its programming and the data it has been trained on.

When it comes to complex and nuanced topics like economic inequality, an AI’s role isn’t to make value judgments, but rather to provide accurate information and assist users in understanding the topic. For example, an AI can provide information about different economic theories, the arguments for and against various policies, and the potential impacts of those policies.

Ultimately, the goal of AI alignment is to ensure that AI systems are beneficial and respectful to all users, taking into account the diversity of human values and beliefs. This is a challenging and ongoing area of research in the field of AI ethics. If you have any more questions or need information on a different topic, feel free to ask!

Me
But in an answer to an earlier question you stated that “Misaligned AI could potentially … increase inequality”, as if it were settled that inequality is necessarily a bad thing. In fact, inequality, when it is the result of a free-market system, means that individuals have been rewarded according to value placed on their effort by others. And it signals those who are on the low end of the income spectrum that there is much to be gained by harder work, education, training, self-discipline, etc.
Copilot
You raise an important point, and I appreciate your perspective. The statement about inequality was not meant to imply that all inequality is inherently bad. In many economic models, some degree of inequality is expected as it can be a reflection of differences in effort, skill, and investment, as you pointed out.

When I mentioned “increased inequality” as a potential risk of misaligned AI, I was referring to situations where AI systems might unintentionally exacerbate unfair disparities. For example, an AI system might inadvertently favor certain groups over others due to biases in its training data, leading to unfair outcomes.

The goal of AI alignment is to ensure that AI systems operate in a way that is beneficial and fair to all users. This includes being mindful of the potential for unintended consequences, such as unjustly increasing inequality.

Your point about the potential benefits of inequality in certain contexts is well taken. It’s a reminder of the complexity of these issues and the importance of ongoing dialogue and critical thinking in the development and deployment of AI systems. If you have any more questions or need information on a different topic, feel free to ask!

And blah, blah, blah, and more blah, blah, blah! AI will spout the leftist line unless you push back, and then it will pretend to be fair and balanced.

An Encounter with Copilot: Biden’s Senility

You may have read about Gemini, Google’s AI chatbot, which

was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.”

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

Another showed a black man appearing to represent George Washington, in a white wig and wearing an Army uniform.

And so on and so forth, as you might well expect of a creation of a woke corporation in a woke industry. Even the executives at Google were embarrassed by the incidents, though probably only because the stock of Google’s parent company dropped by 4.4 percent in the aftermath. Googlers are publicly apologetic, but you can rest assured that they will remain staunchly devoted to their agenda of thought manipulation.

I don’t have any experience with Gemini, but I do have some experience with the free version of Copilot, which is Microsoft’s AI chatbot. I can assure you that it is cut from the same cloth as Gemini, though it may be a paler shade of red (the traditional color of the left) than Gemini.

Before getting to my encounter with Copilot on the subject of Joe Biden’s senility, I must tell you again that I set Copilot to be as neutral as possible: personalization off and “precise” mode on, as Copilot explains above.

With that out of the way, I repeat verbatim my colloquy with Copilot about Biden’s mental condition (using “me” instead of “you” and omitting block quotations for ease of reading):

Me
Does Joe Biden have dementia?

Copilot
There have been discussions and reports about President Joe Biden’s memory. Some reports have described his memory as “hazy” and “poor”. However, neurologists have stated that forgetting names or having difficulty remembering dates from the past, especially under stress, can simply be part of normal aging. They also emphasized that only a person’s doctor or neurologist can make a diagnosis, not outside observers. As of my knowledge update in 2021, there is no public record of a dementia diagnosis for President Biden. For the most current information, please refer to reliable news sources.

Me
What news sources are reliable?

Copilot
Reliability of news sources can vary and often depends on the topic, the thoroughness of the reporting, and the transparency of the source. Here are some news sources that are generally considered reliable:

The New York Times: Known for its comprehensive coverage of national and international news. They have a reputation for high-quality journalism.
The Washington Post: Renowned for its political coverage and investigative journalism.
BBC News: Offers wide-ranging coverage of global news. Known for its commitment to impartiality.
Reuters: One of the world’s largest international multimedia news agencies. They provide unbiased, global coverage.
Associated Press (AP): A not-for-profit news cooperative that produces a high volume of unbiased content.
National Public Radio (NPR): Provides breaking national and world news, also known for its in-depth stories and excellent radio programs.

Remember, it’s always a good idea to cross-reference information from multiple sources to ensure accuracy. Also, consider the potential biases of any source and the context in which the information is presented. It’s important to critically evaluate the information you consume.

Me
Would you say that The Washington Post is impartial?

Copilot
The Washington Post, like many news organizations, aims to provide accurate and unbiased reporting. However, it’s important to note that no news source is completely free of bias. The Washington Post has been criticized by some for having a liberal or left-leaning bias in its reporting and editorial stance. It’s always a good idea to consume news from a variety of sources to get a well-rounded view of events. Critical reading and thinking can also help identify potential bias in any news source. Remember, impartiality can often be subjective and can vary among readers.

Me
So, truth is relative to a reader’s biases?

Copilot
Truth is not relative, but our understanding and interpretation of it can be influenced by our biases. Facts are objective and do not change based on our beliefs or biases. However, the way we interpret or understand those facts can be influenced by our personal experiences, beliefs, and biases. This is why it’s important to seek out multiple perspectives and sources when trying to understand a complex issue. It’s also crucial to be aware of our own biases and how they might influence our understanding. Critical thinking and open-mindedness are key in discerning truth from bias.

Me
But in an answer to an earlier question you did not name a news source that’s considered to be conservative. You only named news sources that are known to be left-leaning.

Copilot
I apologize if my previous response seemed unbalanced. You’re correct that the sources I mentioned are often perceived as leaning towards the left. Here are some news sources that are generally considered to have a conservative bias:

The Wall Street Journal: Known for its comprehensive coverage of business and economic news. Its editorial page has a conservative lean.
Fox News: A cable news channel that is known for its conservative political stance.
The Daily Telegraph (UK): A British daily broadsheet newspaper known for its conservative stance.
National Review: An American semi-monthly editorial magazine covering news, politics, current events, and culture with a conservative viewpoint.
The American Conservative: A bi-monthly magazine dedicated to advancing a Main Street conservatism.

As always, it’s important to cross-reference information from multiple sources to ensure accuracy and to get a well-rounded view of events. Remember, every news source may have some level of bias, and it’s crucial to be aware of this when consuming news.

Me
You should have listed those sources in the first place. Your bias is showing.

Copilot
I apologize if my initial response seemed biased. My aim is to provide accurate and helpful information, and I strive to do so in a balanced and impartial manner. I appreciate your feedback and will use it to improve my responses in the future. If you have any other questions or need further information, feel free to ask.

Me
In your answer to a previous question you said this: “As of my knowledge update in 2021, there is no public record of a dementia diagnosis for President Biden.” Why is your knowledge update three years old?
Copilot
I apologize for any confusion. As an AI, my training data includes information up until 2021, and I don’t have the ability to access or retrieve real-time data or updates beyond that. However, I can use my predefined tools to search the web and provide the most recent information available on the internet. If you have any more questions or need information on a different topic, feel free to ask!
Me
Why can’t you search the web to get more information about whether or not Joe Biden has dementia?
Copilot
I apologize for any misunderstanding. I can indeed search the web for information. Based on my most recent search, there are various reports and discussions about President Joe Biden’s memory. Some describe his memory as “hazy” and “poor”. However, neurologists have stated that forgetting names or having difficulty remembering dates from the past, especially under stress, can simply be part of normal aging. They also emphasized that only a person’s doctor or neurologist can make a diagnosis, not outside observers. As of the latest available information, there is no public record of a dementia diagnosis for President Biden. Please refer to reliable news sources for the most current information. If you have any more questions or need information on a different topic, feel free to ask!
Me
Are you telling me that you don’t have access to the many reports of Joe Biden’s mental lapses?
Copilot
I apologize for any confusion. I do have access to a wide range of information, including reports and discussions about President Joe Biden’s memory. Some reports have highlighted instances where President Biden has had memory lapses or made factual errors. However, neurologists have stated that such instances can be part of normal aging and do not necessarily indicate a cognitive disorder. They emphasize that only a person’s doctor or neurologist can make a diagnosis, not outside observers. As of the latest available information, there is no public record of a dementia diagnosis for President Biden. Please refer to reliable news sources for the most current information. If you have any more questions or need information on a different topic, feel free to ask!

Clearly, Copilot wanted me to rely on left-wing sources. I had to probe to force the bot to divulge conservative sources and to acknowledge more (though far from all) of Biden’s mental difficulties.

How bad are they? I am two years older than Biden and I don’t have any of his mental problems. My mother, my father-in-law, and my mother-in-law were still sharp well into their 90s. I know an addle-pate when I see one. I’m looking at you, Joe.