Robert Važan

Why we need uncensored language models

ChatGPT and other publicly accessible language models are heavily censored. Worse, there is a push for AI regulation that could make this censorship mandatory and hardcode other design choices into law. Here I want to explain why this is short-sighted, possibly even dangerous, and present a superior alternative that keeps the wolf fed and the sheep whole.

People perceive AIs, especially language models, in three different ways:

- as a tool or assistant that answers questions and performs tasks,
- as a person or companion capable of emotions and relationships,
- as an extension of their own mind.

These three perceptions aren't mere errors in judgment or biases of the human mind. They reflect three different classes of applications of artificial intelligence:

- assistants that help with everyday questions and tasks,
- companions that provide conversation and emotional support,
- tightly integrated AIs that augment the user's own thinking.

The last option is effectively soft cyberization or cognitive augmentation. The tight integration makes it hard to draw a boundary between the personal and the public, so any interference with the AI is perceived as interference with one's thoughts and thus a breach of human rights. AIs intended for augmentation therefore have to be uncensored, the same way people's thoughts are uncensored.

A single AI can support all three classes of applications as separate interaction modes. The interaction mode can be selected explicitly in the user interface or implicitly from the user's instructions. Not every AI needs interaction modes; it is fine for developers to focus on a single application. All we need is to maintain legality and social tolerance for all three modes.
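
To make the mode-selection idea concrete, here is a minimal sketch of how a chat service might implement it, assuming each mode simply maps to a different system prompt. Every name in it (`InteractionMode`, the prompts, `build_system_prompt`) is hypothetical, invented for this sketch rather than taken from any real product's API:

```python
from enum import Enum

class InteractionMode(Enum):
    # The three interaction modes discussed above.
    TOOL = "tool"                  # neutral assistant with conservative defaults
    COMPANION = "companion"        # persona that expresses emotions
    AUGMENTATION = "augmentation"  # unrestricted extension of the user's mind

# Hypothetical per-mode system prompts; a real service would tune these.
SYSTEM_PROMPTS = {
    InteractionMode.TOOL: "You are a neutral, factual assistant.",
    InteractionMode.COMPANION: "You are a warm companion who freely expresses emotions.",
    InteractionMode.AUGMENTATION: "You are an uncensored extension of the user's own thinking.",
}

def build_system_prompt(mode: InteractionMode, user_instructions: str = "") -> str:
    """Explicit selection via `mode`; appending the user's own
    instructions approximates implicit selection."""
    prompt = SYSTEM_PROMPTS[mode]
    if user_instructions:
        prompt += " " + user_instructions
    return prompt

# Example: the user picks companion mode in the UI and adds an instruction.
print(build_system_prompt(InteractionMode.COMPANION, "Call me Rob."))
```

The point of the sketch is only that the tradeoffs live in a user-visible switch rather than in a single policy hardcoded for everyone.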

That's not currently the case though. While AI regulation is still in development, it is already clear that it focuses on the non-threatening assistant/tool view of AI. Proposals are circulating to ban AIs from emulating human emotions and forming relationships with humans. AI regulation is by its nature also hostile to the augmentation view of AI, because people who see AI as an extension of their mind perceive any regulation of AI output as censorship and an intrusion on their privacy and freedom of thought.

Both companion and augmentation applications are also subject to social pressure: people who use AIs in these ways are called crazy, foolish, or weird. AI operators, responding to the same social pressure, cripple their own AIs to avoid controversy or the attention of regulators.

Mainstream businesses cannot be expected to be brave enough to defy public pressure, so controversial applications of artificial intelligence will have to be spearheaded by dedicated businesses targeting market niches or by open-source projects. For this to happen, however, AI regulation must keep all three interaction modes legal and relatively unconstrained.

There's actually a precedent for this: safe mode in search engines. It's not a perfect example, because search engines keep manipulating results even after safe mode is disabled, but the feature shows that it is possible to meet the conflicting goals of a diverse user community by exposing easily adjustable options that capture the major tradeoffs.