This week’s readings touched on a couple of conversations I’ve been wrestling with for some time. Given the overarching theme of how technology and digital culture influence institutions and workplace dynamics, I found minor points of contention with both the Tunstall and Walsh pieces. Starting with Walsh, who wrote about Kellogg’s research on introducing new technologies and processes and how that impacts the workplace: I was immediately disengaged when he opened by describing junior employees as digital natives. Aside from my firm belief that the digital native does not exist, this was an immediate red flag because the concept already clashes with some of the notions raised in the Tunstall piece. Tunstall thoroughly addressed how tech is, has been, and will likely always be imbalanced because of the knowledge and truths imbued in the technology itself, which raises the question of who has access to these tools, or to proper training on how to use them most effectively. Then, the successful method Walsh presented for avoiding hurt feelings among higher executives was simply to take turns shouldering responsibility. Reduced to what it is, it feels almost silly that the explanation for reducing workplace conflict and power imbalances is to…work together. This reads more like a cultural issue surfaced by introducing technology than technology itself producing new forms of conflict.
As for the Tunstall piece, I think it was overall well presented. I agree with the majority of it but have one concern: the discourse seemed to divert at some point from the impact of technological bias broadly to a quick narrowing in on artificial intelligence, drawing on examples like Bina48. All of that information is relevant and important to consider, but I found it interesting that the presented solution was geared towards equal collaboration with other, less-biased AI. I understand the idea, and I’ve even begun to use AI in my own position at Lehman. Still, as a digital humanist, I’m naturally skeptical, and slightly alarmed, that the conversation re-paints historical human-human dynamics as human-computer ones: Tunstall frames the dominant technologies, created primarily by white men, against Bina48, an AI for an indigenous community that almost comes to be presented as an indigenous AI/computer itself, separate from the community it represents. While I love abolitionist design approaches, I’m hesitant about the level of sentience these machines might have, and about how new conflicts could arise the more we invest in growing AI resources (especially ones built on advanced machine learning models). On top of this indigenous metaphor, the article itself is framed around master-slave power dynamics as the underlying theme for our relationship with technology. This is a recurring metaphor in tech writing; I saw it a few years ago when someone wrote about Amazon’s Alexa and the way we speak to it. I try to steer away from using slavery as a metaphor, because it is simply not the same.
Thanks so much for this level of critical engagement with the texts, Anthony. I really agree with your assessment of Walsh: most tech-related issues truly are social/cultural issues, though sometimes they bring something to the fore that might otherwise have gone unexplored.
The sentience question is a big one. Isn’t that a large part of what was alarming/unnerving about the Bing encounters journalists were having? The possibility of using tech to manipulate human emotions is always present, but it is so incredibly heightened when it becomes harder to distinguish human from construct…