by Jen, in conversation with Magic Write
After working through this week’s readings I asked Magic Write, the AI inside Canva, to “discuss the hopes and harms of AI.” I’ve been curious about the existence of an AI inside Canva (a popular tool for generating social media content, marketing material, and even educational content); Canva isn’t the first place I’d look for an AI, but here we are. (The free version of Canva currently allows 25 Magic Write prompts; you need to pay to use the AI more.)
Magic Write’s response was:
AI has the potential to bring many benefits to our society, such as increased efficiency, improved accuracy, and the ability to tackle complex problems. AI can also save lives by improving medical diagnoses and drug discovery.
However, there are also concerns about the potential harms of AI, including job displacement, biases in decision-making, and the loss of privacy. AI could also be used for malicious purposes, such as cyber attacks or autonomous weapons.
Therefore, it is important to carefully consider the ethical and societal implications of AI and ensure that it is developed and used responsibly.
For a five-sentence response, I'm moderately impressed by the number of points the AI included. As a baseline, Magic Write is really good at answering a short-answer question. However, it was interesting to me that there was no explicit mention of images created by AI, so I followed up with, "What are the potential uses and abuses of image creation through AI?"
Magic Write’s response was:
The potential uses of AI-generated images include assisting in the creation of realistic and detailed graphics for various industries such as gaming, film, and advertising. However, the technology also raises concerns about the possibility of creating fake images for malicious purposes such as propaganda, phishing, and deepfakes. It is important to consider the ethical implications and potential misuse of AI-generated images.
One thing that was interesting to me in this week’s readings was a sense that the current wave of AI gives us something new. I don’t entirely disagree, but when Nitasha Tiku laments that “these systems produce images that can cause a range of harms, such as reinforcing racial and gender stereotypes or plagiarizing artists whose work was siphoned without their consent,” I couldn’t help but think about the ways that’s been happening for hundreds of years. AI certainly makes it faster, but anyone who has wanted to perpetuate harm in those ways has been doing it pretty successfully for a long time.
I was interested in thinking more about what, exactly, is new about AI; reading about surveillance technology in proctoring programs and in grading, I agreed that it's all really sinister, but I was curious about my inclination that this is just a new spin on an old trick. I asked Magic Write, "What are the uses of AI in surveillance of students through grading and proctoring?"
The response was:
The use of AI in surveillance of students through grading and proctoring can help to ensure academic integrity by detecting potential cheating and plagiarism. It can also provide a more objective and consistent evaluation of student work. However, it is important to consider privacy concerns and the potential for bias in the AI algorithms.
This resonated with me because of the statement that AI "can help to ensure academic integrity by detecting potential cheating and plagiarism." That has literally always been the goal of any kind of proctoring and grading; aside from the question of whether AI does it better than humans (which is probably debatable), I think we need to address the premise first: can any kind of grading and proctoring ensure academic integrity, or is academic integrity a biased and flawed notion in itself?
And, moving backwards, does AI increase the possibility of malicious image use, or have visual depictions always held the possibility of being used for malicious purposes?
Returning to where I started, with the hopes and harms of AI, Magic Write's response about the hopes of AI names a number of benefits that have value-laden assumptions built into them: efficiency, accuracy. I won't argue that medical innovation is great, but we've always seen bias in the reality of medical innovation…for WHOM?
While I’m open to being convinced otherwise, I don’t actually think that AI is creating more harms for us; AI is a mirror of the world we have already created (and fed into it), and I think it is just magnifying existing harms while reflecting them back onto us. Instead of being afraid of this, should we be grateful for the clear evidence of where things have gone wrong, and use this reflection that AI creates as a way to plan out where the work needs to be done?
These are great reflections, Jen, and I love that you grounded them in some experimentation with an AI chatbot. Super interesting. I think your questions about what is new about AI are really important, as is the question of whether AI is really the problem or rather the systems we're starting with (grading, proctoring, plagiarism detection, etc.). Nowviskie's piece really has me thinking not so much about what AI is or can do, but rather, **what are the datasets we are creating and how will they be interpreted by machine learners rather than human learners**?
Let’s talk more about it this afternoon.