Today was email clearing day, so I was going through the inboxes of my various accounts when I spotted it — ‘[Redacted Company] is hiring: Freelance ChatGPT Writer’.
Out of curiosity, I opened the email and clicked on the link. It took me straight to the job page, where all the technical details were listed clearly.
The company was looking for a Freelance ChatGPT Writer to take charge of their content generation by using ChatGPT to enhance and generate high-quality content independently.
This wasn’t my first time seeing a job posting related to artificial intelligence (AI). Still, the rest of this one surprised me.
More specifically, the job responsibilities entailed:
- Using ChatGPT to write articles.
- Ensuring the articles generated by ChatGPT are effective.
- Adding a human touch to meet requirements.
In other words, this business was hiring someone to use ChatGPT to create content for them; the human essentially becomes ChatGPT’s editor.
However, that’s not all. The cherry on top of this ice cream sundae is that the company provides content generation services, boasting a collection of expert writers and linguists.
This is what had me thinking, ‘Is the use of AI going too far?’
A brief history and rise of AI
The concept of ‘mechanical’ intelligence has existed since Greek mythology, more than 2,000 years ago, with the legend of Talos.
He was a giant bronze statue who guarded the island of Crete against pirates and invaders. He would patrol the shores three times a day and throw boulders at ships.
However, AI as a field was only born in the 1950s with the development of digital computers. Progress fluctuated over the years but never really died out.
The world at large had little interest in AI at first. It was only from the 2010s onwards that AI became a trend to watch.
‘Autonomous vehicles’, ‘virtual assistants’, and ‘visual recognition systems’ are just some applications of AI that you may have heard of in recent years.
AI even briefly took the Internet by storm with AI-generated art systems in 2021. All someone had to do was input a text prompt to generate an image, though the results were often horrifying.
But at the end of 2022, the world was introduced to ChatGPT, and the AI trend exploded.
The birth of ChatGPT and its limitations
OpenAI, the company behind ChatGPT, released GPT-3.5 as part of a free research preview on 30 November 2022. It quickly went viral, with millions of users accessing it within the first two weeks.
There was so much load on the servers that the service occasionally went down. But its popularity continued to skyrocket, spurring larger companies like Google and Microsoft to develop similar programs.
ChatGPT seems incredible – a system that can take your prompt, produce a lengthy and detailed written work, and further refine it based on your input.
Since it is free for the public, people were feeding it all sorts of prompts like “invent a new type of colour and describe what it looks like”. One guy even used it to reply to his Tinder messages.
That’s not to say ChatGPT is all pros and no cons. The system is highly prone to what has been dubbed ‘hallucination’, where it makes up information and insists it is correct.
This is particularly problematic because ChatGPT’s lies sound highly convincing, and many people use it as a glorified search engine.
Fast forward to today, and it seems that ChatGPT’s creators are facing a class action lawsuit.
A 160-page legal complaint about stolen personal data
On 29 June 2023, news broke that OpenAI had been hit with a lawsuit alleging that its AI was trained using stolen personal data.
This nearly 160-page complaint claims that the company secretly gathered massive amounts of personal data off the internet without consent or compensation.
Potentially, the data that ChatGPT was trained on could include the personal information of millions of internet users, including you, me, and vulnerable demographics like children.
But this lawsuit is only the tip of the iceberg when it comes to the ethical risks of AI and its potential threats.
Another big issue regarding ChatGPT is the potential for plagiarism.
“Thanks, this is mine now.”
Imagine this: I approach someone and get them to draw a picture or write an essay based on my provided description. I then publish this work and claim that I created it.
In this case, is the work really mine? Would I not be considered a fraud? Now picture the same scenario but replace the person with a machine. Is that different?
Yet plagiarism using AI is starting to become widespread.
People who are inputting prompts into AI art programs are calling themselves artists. An AI-generated art piece won first prize in a contest. Academic plagiarism has become easier than ever.
And now there are new jobs like ‘ChatGPT Consultant’, ‘AI Consultant’, and ‘GPT Expert’.
Judging by the ‘Freelance ChatGPT Writer’ job advertisement, the human is taking a backseat in the content generation process.
I wouldn’t find this problematic, except that the company advertises itself as a source of expert writers – not disclosing its use of ChatGPT to customers seems deceptive and unethical.
ChatGPT can be great, if used ethically
To clarify, I don’t have a negative stance on ChatGPT. I actually use it all the time for many different things.
(Though it’s not writing this article, I promise you. Here’s a ChatGPT-generated article with this exact title though.)
Sometimes I use ChatGPT to sort out my thoughts – like when I’m trying to think of a specific word but can’t recall it in the moment.
Sometimes I use it for recommendations, though they aren’t always on the mark. I once asked it to recommend potential mascots for Memang Asian, and it suggested Asian ghosts and demons.
I have even used it to write. For example, I give it paragraphs I have written and ask if it reads clearly and portrays the ideas I want to convey.
Basically, I use ChatGPT as a personal assistant of sorts rather than directly copying and presenting ChatGPT’s work as my own.
The ‘dystopian future’ ahead of us
AI has many potential benefits, like assisting in repetitive work and providing digital assistance. However, the technology may do more harm than good if we are not careful.
Fears of AI taking over the world have long been a popular entertainment topic, inspiring many AI-themed movies like Terminator, A.I. Artificial Intelligence, and I, Robot.
Though for now, it seems our AI overlords are starting small by taking over jobs. People and businesses are turning to AI-generated art and writing rather than paying for a human artist or writer.
“It’s cheap.” “It’s free.” “It’s much faster.”
These AI programs are trained on data that is often scraped from the Internet, which usually means it was taken without the consent of the original artist or author.
The AI can even be trained to mimic a specific person’s work. So why would anyone still need that person?
Perhaps in the future, anyone with the right technology can take everything about you – what makes you unique, what makes you, YOU – and recreate it.
Featured image: Lukas