Narmin Nahidi

AI, Friend or Enemy?!

The Centre of Collaborative Learning (CCL) is an educational institution that gives students opportunities to work together and learn from one another in a supportive environment. The centre also serves as a virtual space where lecturers and academic scholars can share information, including educational content, modern learning systems, resources and more.

Drawing on AI, this report discusses a case study of the disruptive impact of ChatGPT (Chat Generative Pre-trained Transformer) on assessment and evaluation in higher education. Higher education institutions are constantly exploring ways to improve their assessment and evaluation processes to ensure that students receive accurate and fair evaluations of their knowledge and skills. The rise of AI and language-model tools like ChatGPT has sparked interest in using these technologies to support assessment and evaluation.

ChatGPT is a recently released, freely accessible AI language model trained on a wide range of topics, including educational ones, so that it can give students accurate and relevant answers to their questions. The chatbot is designed to understand students' questions and provide them with step-by-step instructions, links to relevant resources, and input from other students. To encourage collaboration, it can ask students whether they would like to share their answers with peers and give them an opportunity to connect with other students working on similar projects. The chatbot has significantly improved students' access to information and has helped them find answers to their questions faster.


The Alarming Consequences of AI in Higher Education

Before I explain the problem, I need to review and compare traditional and modern evaluation in higher education. Traditional assessment methods in higher education include multiple-choice exams, written essays and other forms of paper-based assessment. These methods are typically used to test students' knowledge of specific course material and are graded manually by instructors or teaching assistants. Their advantages are that they are simple to administer and grade, they can test a large number of students in a short period of time, and their results are easy to quantify and compare. Their disadvantages are that they are limited in their ability to evaluate critical thinking and problem-solving skills, prone to human error in grading, and not always an accurate reflection of a student's knowledge and understanding (Dochy et al., 1999; Struyven et al., 2005).

On the other side, we have modern evaluation in higher education, which involves using technology, including AI and language models, to support assessment and evaluation processes (Avdeeva et al., 2017; Fatkullina et al., 2015). This includes using chatbots powered by OpenAI's ChatGPT to provide real-time feedback and to grade written assignments. The benefits are real-time feedback and support for students, accurate evaluation of writing (grammar, vocabulary and content), fast and efficient grading that reduces the time and effort required of human evaluators, and impartial, consistent evaluations that reduce the risk of human bias. The drawbacks are that this approach requires significant investment in technology and training, may be perceived as less personal than human evaluation, and still needs human oversight and review to ensure accuracy and fairness.
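To make the grading scenario concrete, here is a minimal sketch of how such a chatbot-based marker might be wired up, assuming access to OpenAI's Python client. The model name, rubric and scoring scale are placeholders of my own, not a description of any real institution's system, and the output would still need human review.

```python
# Minimal sketch of AI-assisted essay feedback, assuming the OpenAI Python client.
# The model name, rubric and 0-10 scale are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "Score the answer from 0 to 10 for accuracy, structure and clarity. "
    "Return the score on the first line, then two sentences of feedback."
)

def grade_answer(question: str, student_answer: str) -> str:
    """Ask the language model for a provisional score and feedback.

    The result is a draft for a human marker to review, not a final grade.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {student_answer}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grade_answer(
        "Explain the difference between formative and summative assessment.",
        "Formative assessment happens during learning; summative assessment happens at the end.",
    ))
```

In practice, the draft score and feedback would be passed to a human marker, which is exactly the oversight the drawbacks above call for.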

The problem is that traditional assessment methods, such as multiple-choice exams, are limited in their ability to evaluate a student's knowledge and critical thinking skills, they focus on factual recall, and the manual grading of written assignments is time-consuming and prone to human error. Modern evaluation, with the use of AI and language models, has the potential to overcome these limitations and give students more accurate and fair evaluations of their knowledge and skills. However, it is important to balance technology and human interaction to ensure that evaluations remain fair and impartial. The problem becomes more critical when AI is misused: the same chatbot that gives students instant feedback on their writing, allowing them to make improvements in real time, can also be used to produce the work itself.

Here is how ChatGPT responds to the question “Can students misuse ChatGPT?”

“Students can misuse ChatGPT in the assessment process. Here are a few ways in which this could occur:

1. Plagiarism: Students could copy and paste content from the internet or other sources into their written assignments, and ChatGPT might not be able to distinguish this as plagiarized content.

2. Gaming the system: Students could try to manipulate the chatbot's grading algorithm by using specific keywords or phrases that they believe will increase their score.

3. Cheating: Students could try to use ChatGPT to cheat on exams or other assessments by asking it for answers or using its real-time feedback to improve their scores.

To prevent these types of abusage, it is important to implement safeguards and checks, such as regular human review of the chatbot's evaluations and random checks for plagiarism. In addition, it is important to educate students about the importance of academic integrity and the consequences of cheating or misusing technology in the assessment process.”


Exploring the Advantages of Human Oversight of AI Technology

I can’t say for sure whether this is the first time human judgement has been preferred to technology, but in my personal opinion, a human being's ability to detect unfamiliar problems and challenges is something that not all AI technologies can match. Because we want the evaluation of students to be fair, institutions can take the following steps to address these challenges and to stop students from misusing AI in higher education:

1. Use plagiarism detection software: By using software such as Turnitin or Copyscape, institutions can detect instances of plagiarism and prevent students from using AI to generate answers or written assignments (a simple text-similarity sketch follows this list).

2. Monitor usage of AI tools: Institutions can monitor the use of AI tools such as language models by tracking IP addresses and logging students' interactions with chatbots during exams and assessments.

3. Educate students about academic integrity: By educating students about the importance of academic integrity and the consequences of cheating or misusing AI tools, institutions can discourage students from attempting to use AI in the assessment process.

4. Implement secure assessment methods: To prevent students from using AI during exams, institutions can implement secure assessment methods such as proctored online exams or in-person exams in a secure location.

5. Conduct regular human review: Institutions can conduct a regular human review of the results of AI-powered evaluations to ensure accuracy and fairness. This can also help to detect instances of cheating or misuse of AI tools.
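As a rough illustration of the text-similarity idea behind step 1, the sketch below compares two submissions using word-trigram overlap. This is a toy example with an arbitrary flagging threshold, not how Turnitin or Copyscape actually work; commercial services compare submissions against large databases of documents and prior student work.

```python
# Toy text-similarity check: Jaccard overlap of word trigrams.
# Illustrative only; commercial plagiarism detectors are far more sophisticated.
import re

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Lower-case the text, strip punctuation and return its set of word n-grams."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity between the n-gram sets of two texts (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

if __name__ == "__main__":
    submission = "Formative assessment supports learning, while summative assessment measures it."
    source = "Formative assessment supports learning; summative assessment measures it at the end."
    score = similarity(submission, source)
    # 0.5 is an arbitrary placeholder threshold for flagging a pair for human review.
    print(f"similarity = {score:.2f}", "-> flag for review" if score > 0.5 else "-> no flag")
```

Anything flagged by a check like this would then go to the regular human review described in step 5.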

Even though the above solutions can help to prevent students from using AI, I believe that assessing students with more written exams rather than online assignments is the better solution for this problem.

Conclusion

In conclusion, preventing students from misusing AI in higher education assessments requires a multi-faceted approach that combines technology and educational initiatives. Exams should be reviewed regularly, assignments should be redesigned, and mechanisms that rigorously control the quality of teaching materials should be put in place. Eventually, a position called “Education Material Quality Control (EMQC)” should be created to help higher education institutions improve the quality of student learning and avoid the misuse of AI. By implementing these steps, institutions can ensure that assessments are fair, accurate, and reliable.


References

Avdeeva, T. I., Kulik, A. D., Kosareva, L. A., Zhilkina, T. A., & Belogurov, A. Y. (2017). Problems and Prospects of Higher Education System Development in Modern Society. European Research Studies Journal, XX.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The Use of Self-, Peer and Co-assessment in Higher Education: A review. Studies in Higher Education, 24(3), 331–350. https://doi.org/10.1080/03075079912331379935

Fatkullina, F., Morozkina, E., & Suleimanova, A. (2015). Modern Higher Education: Problems and Perspectives. Procedia - Social and Behavioral Sciences, 214, 571–577. https://doi.org/10.1016/j.sbspro.2015.11.762

Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. In Assessment and Evaluation in Higher Education (Vol. 30, Issue 4, pp. 325–341). https://doi.org/10.1080/02602930500099102

