Why we need uncensored language models
ChatGPT and other publicly accessible language models are heavily censored. Worse, there is a push for AI regulation that could make this censorship mandatory and hardcode other design choices into law. Here I want to explain why this is short-sighted and possibly dangerous, and to present a superior alternative that keeps the wolf full and the sheep whole.
People perceive AIs, especially language models, in three different ways:
- Person: Some people interact with AIs as if they were talking to a human. They expect AIs to express emotion, have a personality, keep memories, and to form relationships. They may feel frustrated by robotic responses.
- Tool: Many people, probably the majority, look at AIs the same way they look at a photocopier or a word processor. For them, AIs are just fancy talking tools that are used to complete tasks. They expect AIs to be professional, safe, competent, and diligent.
- Extension: Some people use AIs not so much as tools but rather as cognitive amplifiers. The goal is not to complete tasks but to generate and develop ideas. For these people, AI is an integral part of their thinking, an extension of their mind, taking part in brainstorming, problem-solving, and creative work. People holding this view expect frictionless interaction with an AI that is devoid of identity and instead adopts the user's identity, views, and goals as best as it can understand them.
These three perceptions aren't mere errors in judgment or biases of the human mind. They reflect three different classes of applications of artificial intelligence:
- Assistant: Artificial intelligence can bring unprecedented gains in productivity, both at work and in personal endeavors and duties. AI assistants are one of the most general and accessible ways to expose this productivity-boosting technology to end users.
- Companion: While language models aren't intrinsically human, they can nevertheless very effectively roleplay humans to satisfy users' need for company, fun, friendship, counseling, or just a partner for games and other activities.
- Augmentation: AIs do not have to pose as a separate entity, be it a person or a tool. An AI can instead forego its own identity and blend with the user as much as possible. The result is like a stream of consciousness or an internal dialogue, except that some of the thoughts come from the AI.
The last option is effectively soft cyberization or cognitive augmentation: the tight integration makes it hard to draw a boundary between the personal and the public, so any interference with the AI is perceived as interference with one's thoughts and thus a breach of human rights. AIs intended for augmentation therefore have to be uncensored, the same way people's thoughts are uncensored.
A single AI can support all three classes of applications as separate interaction modes. The mode can be selected explicitly in the user interface or inferred implicitly from the user's instructions. Not every AI needs interaction modes; it is fine for developers to focus on a single application. All we need is to maintain legality and social tolerance for all three modes.
That's not currently the case, though. While AI regulation is still in development, it is already clear that it focuses on the non-threatening assistant/tool view of AI. Proposals are circulating to ban AIs from emulating human emotions and forming relationships with humans. AI regulation is by its nature also hostile to the augmentation view of AI, because any regulation of AI output is perceived as censorship and an intrusion into privacy and freedom of thought by people who see AI as an extension of their mind.
Both companion and augmentation applications are also subject to social pressure: people using AIs in these ways are called crazy, foolish, and weird. AI operators, responding to the same pressure, are crippling their own AIs to avoid controversy or the attention of regulators.
Businesses cannot be expected to be brave enough to challenge public pressure, so controversial applications of artificial intelligence will have to be spearheaded by dedicated businesses targeting market niches or by open-source projects. For this to happen, however, we need AI regulation to keep all three interaction modes legal and relatively unconstrained.
There is actually a precedent for this: safe mode in search engines. It's not a perfect example, because search engines continue to manipulate results even after safe mode is disabled, but the feature shows that it is possible to meet the conflicting goals of a diverse user community by exposing easily adjustable options that capture the major tradeoffs.