AI's Double-Edged Sword: A Personal Experience and Ethical Concerns
Editorial opinion: the risks and benefits of artificial intelligence for creative workers
I recently encountered a compelling argument for using AI in specific aspects of writing online. It is particularly useful for navigating Substack's often gnarly interface and documentation. While I acknowledge the usefulness of some AI-generated information, I remain deeply concerned about the ethical implications, a concern rooted in my understanding of how Large Language Models (LLMs) are trained. Initially, I was adamantly against using such tools in my writing process.
I still refuse to use AI to generate my writing itself, but like others, I am finding specific situations where AI works more effectively than its human counterparts.
A desperate moment and AI's unexpected utility
A week-long struggle with Network Solutions and Substack's frustrating tech support, both human and AI, pushed me to my breaking point. Neither entity addressed my specific, known issue, a problem caused by Substack's own protocols. In desperation, I turned to ChatGPT with a precise prompt. Within ten seconds, I received a detailed, step-by-step solution, including instructions for Network Solutions, Substack, and a third-party tool. Following these directions, I resolved my issue in twenty minutes. This experience was eye-opening, and I now consider AI a valuable option for IT support.
Ethical quandary: Acknowledging AI's origins
Despite this success, I remain acutely aware of the ethical concerns surrounding AI's development. The initial surge of AI-generated content was built on the uncompensated work of artists, writers, and other creatives. The rollout was carried out with no thought for anyone or anything but the almighty dollar and the prestige of being first. Perhaps, in retrospect, OpenAI and a few other players could have chosen to be more prudent; perhaps not.
This is not a matter of opinion but a fact substantiated by my attendance at US Copyright Office seminars in 2023 and by extensive research since. We are all prone to situational amnesia, but it's crucial to remember the origins of this technology.
Broken system: Tech support and corporate neglect
My recent tech support ordeal highlighted a systemic issue: the lack of communication and collaboration within and between online platforms. Each company jealously guards its niche, leaving customers in the dark. Tech support personnel are often ill-equipped, relying on scripted responses rather than genuine problem-solving.
In my case, ChatGPT identified a known Substack glitch with a readily available solution, a solution Substack itself should have provided. Network Solutions, too, offered no viable alternatives, and domain tech is their only business. Sure, I could hire a guru, but partnering with IT experts is financially unfeasible for many creators. Tech companies, including Substack, prioritize deep-pocket clients, neglecting real-time support for average users. This leaves non-tech-savvy writers, who often pay to use these platforms, stranded when they need help.
The inevitability of AI and the need for adaptation
I understand the anger and frustration surrounding AI's rapid deployment in 2023, which caused significant harm to creative workers. Though I share those feelings, this technology is here to stay. Attempts to reverse its progress are futile. Instead, we have to find ways to adapt, a reality Darwin understood well. OpenAI and the other major players are unlikely to rectify the damage they've caused, so it's on us, moving forward, to keep ethics in the spotlight as we use these technologies.
A call for a balanced perspective
I cannot align with those who blindly condemn AI without understanding its nuances, nor with those who uncritically embrace it. AI is neither inherently good nor evil. Its impact depends on how it is used. I have written extensively on this topic, relying on strong primary sources. Every creative worker has a responsibility to understand the full picture before forming an opinion. Do some homework.
Mindfulness and pragmatism
While I acknowledge AI's potential benefits, particularly in practical problem-solving, I advocate for mindfulness. It's essential to remain aware of the ethical compromises made in developing this technology. If you're a conscientious, creative worker concerned about all creatives, proceed with caution and develop a clear understanding of the technology's implications. All that glitters, as they say.
Be advised that this publication and its publisher firmly reject the practice of generating stories, articles, or other writings with AI and publishing them as original human work.
Thank you, Maryan, for the thoughtful commentary. Trade-offs. Always trade-offs.
My default mentality is to be slow to support more regulation. In this complex case, however, it sure seems like we need some safeguards.
My selfish concern is AI-generated content being passed off as authentic, old-school real writing by real people. What if, as with food ingredients, there were labeling requirements for creative work? Some sort of disclosure requirement that made it clear whether (and maybe even how much) AI content was included.
I'm with you. I have found AI incredibly useful for tasks that would take me hours, like scouring the web to gather information or some kinds of copywriting for media. But I would never use it for my creative work or anything I consider my IP. I also would never upload my IP (such as a novel or story) for it to help me (as some writers do for generating cover copy or market research), even knowing anything I've published digitally has probably been used without my knowledge or consent. It is astounding how quickly everything is pirated. I hope more people will have these conversations and we'll see AI develop as a useful tool for us all without an ongoing threat to human IP.