Creating friendly artificial intelligence is an issue at the intersection of ethics and artificial intelligence (AI) research. Friendliness theory, developed chiefly by Eliezer Yudkowsky, is a proposed model for creating safe, moral AI.

The theory starts from the supposition that AIs with intellectual and practical abilities vastly superior to human-equivalent will be created in the future. The problem then becomes how these AIs will behave toward human beings, and whether their morality, if they have one, will resemble ours. The difference in power would be such that, in Yudkowsky's words, "if the AI stops wanting to be Friendly, you've already lost."

Proponents of friendliness theory draw an analogy with raising a teenager: until that person is legally responsible, the parent is responsible for their well-being, maintaining control, monitoring the safety of their actions, and negotiating privileges. Friendliness toward a person goes a long way toward earning their respect; tyranny breeds resentment and secretive behaviour. Can a developing mind form a balanced view of its future privileges and responsibilities if it must bear the burden of tyranny?

Critics of the theory point out that AI development is embedded in a political process, and that the singularity would completely transform power relations in society, straining the analogy. They further argue that governments and corporations will not be motivated to produce friendly AIs, and that the models contain serious flaws.
