It's amazing how, by typing a few words into a generative artificial intelligence tool, one can produce a business report, a market analysis, a blog post or a student essay. Technology has made working and studying so easy. Or has it?
While generative artificial intelligence (generative AI) has the potential to make work easier, it carries risks. Key among them is plagiarism: using somebody else's work and passing it off as your own. This could result in lawsuits for copyright violation. Generative AI has also been found to fabricate information.
Journalists are often under pressure to produce attractive content, but sources are not always easy to find, and the use of generative AI can be tempting. The Media Council of Kenya believes AI gives journalists opportunities to improve content creation and production, but says they must keep adhering to the media code of conduct.
"Journalists can leverage AI as a tool to enhance their research, but the ultimate responsibility lies with them to sift through the information and ensure accuracy," the council's CEO David Omwoyo says. He emphasises the need for journalists to uphold ethical standards while utilising technology.
FALSE EXAMPLES
As the Media Council boss says, anyone producing content is responsible for its accuracy. This point came out strongly during a court case in New York, USA, where a lawyer inadvertently presented fabricated cases to the judge. The lawyer, Steven Schwartz, had asked the generative AI platform ChatGPT to find cases similar to the one he was working on. His goal was to show the judge how such cases had been decided in the past.
When he presented his arguments, neither the opposing lawyers nor the judge could find records of the cases he cited. In an affidavit filed in his defence, Schwartz accepted responsibility for "not confirming the sources provided by ChatGPT". He vowed never again to use generative AI for legal research without verifying the truthfulness of its output.
The emergence of generative AI is a huge temptation for students eager to submit their class assignments on time. There is no clear picture of how much Kenyan students are using generative AI, but with Kenya ranking among the top five African countries in almost everything to do with ICT, it would not be surprising if AI has already made its way into our schools.
In Australia, the Group of Eight, a coalition of leading universities, is considering a return to traditional methods of testing students, the Guardian newspaper reports. The move comes after several students were caught using generative AI. "Our universities have revised how they will run assessments in 2023, including supervised exams, greater use of pen and paper exams and tests," a representative is quoted as saying.
The rising cases of employees and students using generative AI have spawned a growing industry of AI content detectors. These are online tools that detect whether a piece of writing was produced by generative AI. While some content detectors can be used for free, most require a paid subscription.
AI content detectors are already finding large volumes of fraudulent work. According to Science magazine, a scan of 5,000 neuroscience papers found that as many as 34 per cent of those published in 2020 were likely to be fake or plagiarised. In medicine, the figure was 24 per cent. One might wonder why neuroscientists and doctors get involved in such malpractice, but generative AI is very appealing to busy professionals facing multiple deadlines.
One way generative AI is proving useful is in summarising long documents. Imagine having to draft a one-page summary of a 100-page document. You would have to spend hours reading the entire text to come up with a summary. Generative AI can do it in minutes, but ethical problems arise if you claim the work entirely as your own.
The journal Nature reports that ChatGPT is producing fake summaries so good that experts have trouble distinguishing them from those written by humans. In a small experiment, ChatGPT was asked to create a series of summaries from the titles of articles. Human reviewers correctly labelled only 68 per cent of them as generated with artificial intelligence.
Interestingly, the performance of the human reviewers was almost the same as that of AI content detection software. A related experiment showed detection software can outperform humans, flagging as much as 99 per cent of AI-generated text.
In response to allegations that users are abusing generative AI, OpenAI, the developer of ChatGPT, has committed to fighting plagiarism and unethical practices. One of the proposed solutions is to create markers that would identify text as coming from generative AI. The company is also reportedly launching content detection tools that teachers can use to check whether class assignments were done with generative AI.
The European Parliament has already drafted a set of rules that, if adopted, would force vendors of AI systems to indicate that content is AI-generated. The rules would also help distinguish real pictures from those generated by artificial intelligence. In addition, generative AI platforms would have to disclose what data is used to "train" the software.
NOT ALL BAD
Generative AI is not all bad, though, as a joint team from the Ministry of Agriculture and the International Food Policy Research Institute found out.
In May, the team asked two generative AI platforms – ChatGPT and Google Bard – to draft three key recommendations for the Ministry of Agriculture and the Ministry of Finance. The recommendations were to touch on agriculture, climate resilience, environmental sustainability and gender equity.
The joint team was greatly impressed by the "superb language skills" displayed in the output from both ChatGPT and Bard. In their opinion, the recommendations resembled what they would have drafted themselves. AI was not perfect, though.
The biggest problem the team encountered was that generative AI does not show where its information comes from. Because generative AI picks up material from both public and private sources, questions arise about the legality of using others' content without permission, including copyright issues.
"Artificial intelligence can speed up the process to provide faster recommendations, but these still must be vetted with trusted sources and fitted to different contexts," the team observed. In plain language, anyone using generative AI must confirm whether the information coming out of it is accurate.