
AI in Education Bi-weekly Digest - Issue #11



Welcome to the 11th issue! In this edition, we bring you a diverse range of topics, including recent news and interviews covering the legal ruling on AI-generated art and strategies for schools to effectively integrate AI tools. Additionally, we explore thought-provoking articles on AI cheating in education, the trustworthiness of large language models, and downsized language models paving the way for wider adoption. Stay tuned for exciting insights, tools, and resources that continue to shape the landscape of AI in education.


Upcoming Events:

Online Seminar: Policies and Practices for GenAI in Education (Exclusive to CUHK Community) - This seminar aims to explore best practices for integrating generative AI into classroom activities. Our esteemed guest speakers will also share their valuable insights on mitigating the potential drawbacks of using such AI tools in teaching and learning, with a primary focus on CUHK's generative AI policies and the latest developments in VeriGuide in response to AI plagiarism.


Generative AI Workshops - A two-part workshop, hosted and facilitated by Dr. Alice Chui from Lingnan University, guides participants from beginner to advanced levels in utilizing ChatGPT.


News and Interviews:

AI-generated Art Cannot Receive Copyrights, US Court Says - A U.S. court in Washington, D.C. has ruled that artworks created by artificial intelligence (AI) without any human input cannot be copyrighted under U.S. law. The court affirmed the Copyright Office's rejection of an application filed by computer scientist Stephen Thaler on behalf of his AI system, the Creativity Machine. Thaler's attorney plans to appeal the decision. The Copyright Office believes the court reached the correct result. The case highlights the emerging intellectual property issues surrounding generative AI. The court stated that human authorship is a fundamental requirement for copyright protection.

How Schools Can Survive (And Maybe Even Thrive) with AI This Fall - As schools grapple with the integration of artificial intelligence (AI) tools like ChatGPT into education, some are banning the technology while others are reconsidering their restrictions. Educators are now seeking guidance on how to effectively utilize AI to support student learning rather than combat cheating. Suggestions include assuming that all students are using AI tools, shifting assessment methods to in-person or proctored exams, and ceasing reliance on AI detection programs that often produce inaccurate results.


While Some Schools Regret How ChatGPT and AI Infiltrated Education, This Top University Is Putting Out the Welcoming Mat with a Major Investment - Johns Hopkins University is launching an institute dedicated to the study of data science, machine learning, and AI systems, in response to the growing influence of AI on everyday life. The university is committed to providing education in AI and data science to all students, ensuring they understand the evolution of their respective fields. The focus is on responsible AI development, transparency, and equity, with an emphasis on the collaboration between humans and AI.


New CityU GPT Chatbot Will Provide CityU Students, Staff and Start-ups with the Most Advanced Integrated Learning Platform - City University of Hong Kong (CityU) is set to launch the CityU GPT Chatbot in the 2023/24 academic year. The chatbot, powered by generative AI (GenAI), will support teaching, learning, and administrative activities at the university. It will also assist eight HK Tech 300 start-ups in expanding outside Hong Kong. The chatbot aims to provide a technology-integrated learning platform and foster a digitally competent generation of students. The initiative is part of CityU's efforts to promote innovative and interactive learning experiences.


Articles & Blogs:

Hoping to Get More of Their Teachers to Try AI, Students Organize a National Conference - Students in the US organized an online conference called AI x Education to familiarize teachers with AI tools such as ChatGPT and encourage their use in classrooms. Over 2,000 educators attended the event, which aimed to address concerns about academic integrity while highlighting the potential benefits of AI in education. The students have also published a summary report.


AI Cheating Is Hopelessly, Irreparably Corrupting US Higher Education – The growing use of generative AI technology in academia is raising concerns about the decline of critical thinking skills among students. AI-generated essays lack substance and hinder personal development. Educators are struggling to combat AI cheating, even as students turn to rephrasing generators and some instructors try incorporating AI tools into their courses. However, AI detection tools often produce inconsistent results, leading to potential false accusations or allowing cheaters to slip through. The reliance on AI in education threatens students' journey of personal growth and society's ability to discern truth from misinformation.


You Can Now Fine-tune OpenAI's GPT-3.5 for Specific Tasks - It May Even Beat GPT-4 – OpenAI has announced that developers can now fine-tune its GPT-3.5 Turbo language model to enhance its performance on specific tasks. Fine-tuning allows users to customize the model's behavior and capabilities by training it on specific data. In some cases, a fine-tuned GPT-3.5 Turbo model can match or even outperform the capabilities of the more advanced GPT-4. Fine-tuning can also help reduce costs by using shorter input prompts. OpenAI plans to introduce fine-tuning capabilities for GPT-4 in the future.
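For readers who want to try fine-tuning themselves, here is a minimal sketch using OpenAI's Python SDK. The file name and training example are placeholders, and the calls shown assume the v1-style SDK, so details may differ with other library versions.

```python
# Minimal sketch: fine-tuning GPT-3.5 Turbo with OpenAI's Python SDK (v1.x style).
# "training_data.jsonl" is a placeholder file; each line is a chat-formatted record, e.g.
# {"messages": [{"role": "system", "content": "You are a concise tutor."},
#               {"role": "user", "content": "Explain photosynthesis in one sentence."},
#               {"role": "assistant", "content": "Plants turn light, water, and CO2 into sugar and oxygen."}]}
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# 1. Upload the training file.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on gpt-3.5-turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3. When the job completes, the resulting model is called like any other chat model:
# client.chat.completions.create(model=job.fine_tuned_model, messages=[...])
```

The cost savings mentioned above come from shorter prompts: once the model has learned the desired style and instructions from the training data, they no longer need to be repeated in every request.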


How Trustworthy are Large Language Models Like GPT? – A study by researchers from Stanford, University of Illinois Urbana-Champaign, UC Berkeley, and Microsoft Research highlights that large language models (LLMs) like GPT-3.5 and GPT-4 are not yet trustworthy enough for critical applications. The study evaluated the models on various trust perspectives including toxicity, stereotype bias, privacy, and fairness. While the models have reduced toxicity, they can still generate toxic and biased outputs and leak private information.


The Long and Mostly Short of China's Newest GPT - China's Beijing Academy of Artificial Intelligence (BAAI) has downsized its language models, launching the Wu Dao 3.0 series of open-source models. While the previous Wu Dao 2.0 had 1.75 trillion parameters, the new models are smaller and more efficient. The Wu Dao 3.0 Aquila models include a chat dialogue model, a text-to-code model, and vision models for computer vision tasks. The downsizing trend is aimed at enabling Chinese startups and smaller entities to adopt generative AI applications and overcome challenges such as high costs and chip sanctions.


Tools/Resources:

Disclaimer: AI in Education has no affiliation with any highlighted free or commercial products in this section.

Hugging Face Introduces IDEFICS, Open GPT-4 Styled MultiModal - Hugging Face has introduced IDEFICS, an open-access visual language model based on DeepMind's Flamingo. IDEFICS can process combinations of images and texts to generate coherent textual responses. It has been trained on publicly available datasets and performs on par with the proprietary Flamingo model. IDEFICS is available in two variants, with 9 billion and 80 billion parameters respectively. This release highlights the current limitations of multimodal capabilities in other models such as OpenAI's ChatGPT.
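For those curious to experiment, the sketch below shows roughly how the 9-billion-parameter instruct variant of IDEFICS can be loaded through the Hugging Face transformers library. The checkpoint name, example image URL, and processor arguments are illustrative, and the snippet assumes a recent transformers release with IDEFICS support plus a GPU with sufficient memory.

```python
# Minimal sketch: image + text in, text out with IDEFICS via transformers.
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"  # illustrative checkpoint name
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

# A prompt interleaves text and images (here, an example image URL) in a single list.
prompts = [
    [
        "User: What is shown in this image?",
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "<end_of_utterance>",
        "\nAssistant:",
    ],
]

inputs = processor(prompts, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```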


Eightify – Get the key ideas of YouTube videos instantly with this Chrome extension.

---

Disclaimer: The views and opinions expressed in the linked posts are those of the speakers and/or their entities and do not necessarily reflect the views or positions of the project AI in Education, the Centre for Learning Enhancement And Research, and The Chinese University of Hong Kong.

