alexxkay: (Default)
[personal profile] alexxkay
Most of you are probably familiar with the famous Milgram experiments in obedience. Though fascinating, they raised severe ethical concerns, which have led to that line of research being largely abandoned in recent decades. It may be opening up again, though. Researchers have found that performing these experiments in a virtual setting produces similar results.
Our results show that in spite of the fact that all participants knew for sure that neither the stranger nor the shocks were real, the participants who saw and heard her tended to respond to the situation at the subjective, behavioural and physiological levels as if it were real. This result reopens the door to direct empirical studies of obedience and related extreme social situations, an area of research that is otherwise not open to experimental study for ethical reasons, through the employment of virtual environments.

I recommend reading at least as far as the section titled "Speculations on Obedience in Virtual Reality", which reveals (to me, at least) some interesting blind spots in the experiment. First they say:
...the problem of major deception that arose in the original experiments by Milgram was avoided here – since every participant knew for sure that the Learner was a virtual character, and therefore no one could believe that they were inflicting pain on anyone else.
But then they reveal how the experiment was described to the participants:
...they were told: “Thank you for taking part in this experiment. As part of our research program a virtual character has learned a set of word-pair associations. The learning is sometimes not exact, but we are testing a reinforcement learning procedure, to see if the infliction of discomfort motivates her, the virtual character, to remember the word-pair associations better.” The Learner had a quite realistic face, with eye movements and facial expressions; she visibly breathed, spoke, and appeared to respond with pain to the ‘electric shocks’. Not only that but she seemed to be aware of the presence of the participant by gazing at him or her, and also of the experimenter - even answering him back at one point (“I don't want to continue – don't listen to him!”). Finally, of course, the electric shocks and resulting expressions of discomfort were clearly caused by the actions of the participants.
To someone who gets most of their knowledge about AI from the movies (which probably describes most of the participants), it's not clear to me that this virtual actor would be perceived as "not real". If the participants think they are causing real pain to a real (if computerized) individual, does that actually avoid the original ethical issues?

Someone at work forwarded me this article, which has obvious implications for game design...
