Zheng, Jun, et al. “Trust without touch: jumpstarting long-distance trust with initial social activities.” Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 2002.
Summary:
This paper examines trust in computer-mediated communication (CMC). Specifically, it asks whether social interaction, visual identification, and personal information sharing produce higher levels of trust than their absence. The results say yes: engaging in some form of interaction with a remote person increases trust, with face-to-face interaction establishing the highest levels. Remote interactions such as text chat combined with photos and/or shared personal information also produce levels of trust similar to face-to-face interaction. Without any of these factors, trust is not easily established.
Reflection:
This is interesting work because we are gradually moving toward a gig economy where outsourcing and remote work are becoming the norm. There are a couple of areas I think could benefit from this insight into CMC usage when focusing on trust.
Primary care physicians and remote doctor visits. Primary care is experiencing a shift toward visits conducted from the patient's home over the internet. Recently, I talked to a friend of mine, a former primary care doctor, who has noticed these changes driven by the convenience of staying at home rather than traveling while unwell. Typically, these visits happen over a Skype-like session where the patient and doctor have a video chat to discuss and determine the patient's illness. Something as critical as health care is now finding new ways to perform a diagnosis over the internet, and the level of trust in the doctor giving the diagnosis is high. I wonder if this paper could influence other industries, like health care, where face-to-face interaction is critical to the business. Obviously, if I had a doctor visit where I could neither see nor hear the doctor, I would be highly skeptical of any diagnosis given.
Additionally, I want to reflect on a potential danger of becoming too trusting of video-based face-to-face conversations. A dark opportunity lies where deepfakes are becoming the norm for political figures, famous entertainers, and other influential people whose trust has been established through various forms of media. Given this danger, we may need to teach a specific digital literacy for spotting such fake videos and live-stream sessions. Or better, how can we create a new verification method to detect fakes or trolls? Perhaps a technology or special ID to identify people on the internet, similar to the check marks on Twitter and Twitch (given to those who reach a threshold of popularity, to distinguish them from impersonators).
Lastly, I should mention a relevant workshop held by the Center for Human Computer Interaction in Spring 2018 on Systems of Truth, where we pursued the question of what truth is in computer systems and how we can design for it. If interested, you can take a look at the lightweight website created for the event.