Welcome to the 9th issue! The latest debate in AI concerns law and copyright, and this issue covers a range of related topics, including copyright lawsuits against AI companies, the ethics of AI, and how AI is changing education. The newsletter aims to provide a comprehensive overview of the latest news, articles, and tools related to AI in education.
Events & Activities:
Practical Integration of ChatGPT in Education - Final call to join the practical workshop co-organized by KEEP, ELITE, ITSC, and AI in Education, and learn from practitioners how to incorporate ChatGPT into teaching methodologies.
News and Interviews:
AI Learned from Their Work. Now They Want Compensation. - Comedians, novelists and filmmakers are suing tech companies over their use of copyrighted works to train artificial intelligence (AI) tools, such as OpenAI’s ChatGPT and Dall-E, Google’s Bard and Stability AI’s Stable Diffusion. AI companies have argued that the use of copyrighted works falls under the fair use provision of copyright law. The wave of lawsuits and proposed regulations could pose the biggest barrier yet to the adoption of generative AI tools.
Stability AI Co-Founder Accuses Company of Tricking Him into Selling Stake for $100 in Lawsuit - Cyrus Hodes, a co-founder of Stability AI, has filed a lawsuit in the US District Court against the company and its CEO, Mohammad Emad Mostaque, alleging that Mostaque willfully deceived him about the value of the company. Hodes sold his 15% stake for $100; the company has since gone on to achieve a valuation of $4bn.
Google Hit with Lawsuit Alleging It Stole Data from Millions of Users to Train Its AI Tools - Google, along with its parent company Alphabet and AI subsidiary DeepMind, has been sued for allegedly violating copyright laws and scraping data from millions of users without their consent to train its AI products. The complaint accuses Google of stealing "virtually the entirety of our digital footprint," and using it to train products such as its chatbot Bard. Google called the claims "baseless."
What Sarah Silverman's Lawsuit against OpenAI and Meta Really Means - Comedian Sarah Silverman is suing OpenAI and Meta for copyright infringement, claiming that her book, as well as two other plaintiffs' works, were used to train the AI models ChatGPT and Llama without their consent. The lawsuit claims that the AI models generate summaries of copyrighted works, which can only be done if they were trained on them. Data scraping practices have become a contentious issue for the development of large language models, with OpenAI already facing two other lawsuits claiming unlawful copying of book text and privacy violations. Other legal actions are expected as the debate continues.
Articles & Blogs:
Generative AI Meets Copyright – Copyright lawsuits are underway in the US, and if the plaintiffs prevail, it could limit the use of generative AI systems to public domain works or licensed content. The lawsuits focus on Stable Diffusion and Codex, among other generative AI technologies, and rulings in favor of the plaintiffs could trigger a shift in developers' bases of operation to countries with more favorable laws. US Congress has held its first hearing on generative AI and copyright issues, and the US Copyright Office is seeking input from stakeholders on key questions related to the use of copyrighted works in generative AI systems.
Ethics of Artificial Intelligence - UNESCO has produced the first global standard on AI ethics, the "Recommendation on the Ethics of Artificial Intelligence," adopted by all 193 Member States. Its core values include respect for human rights and dignity, living in peaceful societies, ensuring diversity and inclusiveness, and flourishing environments and ecosystems. Ten core principles lay out a human-rights-centered approach to AI, and the Recommendation also sets out eleven key policy areas for actionable policies.
How Universities Can Foster AI Literacy in Higher Education While Solving Challenges – AI literacy is essential in higher education to navigate ethical complexities and harness the potential of generative AI tools responsibly. Institutions should establish clear policies, provide dedicated oversight, and promote equity and access to AI resources in order to mitigate risks and ensure responsible AI tool usage. Integrating AI-related topics into curricula, fostering collaboration, and conducting research are key strategies for promoting AI literacy and adoption in higher education.
Artificial Intelligence Is Already Changing How Teachers Teach - Artificial intelligence (AI) is changing how educators teach as it is used to develop tests, generate case studies, write emails, and rethink teaching strategies. Teachers are increasingly using AI to enrich learning, spur creativity, and save time on routine tasks, but some schools have blocked access to it due to concerns about student learning and cheating.
In Education, 'AI Is Inevitable,' And Students Who Don't Use It Will 'Be at A Disadvantage': AI Founder - According to Julia Dixon, founder of ES.AI, students who do not use artificial intelligence (AI) in their college application process and education will be at a disadvantage. The use of AI to write essays and do homework has been criticized as cheating, but Dixon argues that it is not cheating as long as ethical practices are used, and AI-generated work is not submitted as a final product. She believes that using AI is similar to using a human tutor and can help improve students' access to tutors and educational resources. Dixon hopes that products like ES.AI will help students make AI work for them and not become a replacement for them.
The Risks of AI are Real But Manageable – The risks created by artificial intelligence can seem overwhelming but are manageable: history shows that new technologies introduce threats that eventually get controlled. AI will disrupt jobs, but policies can reduce the impact. AI's hallucinations and biases reflect the data it learns from. Educators can use AI to improve student writing and critical thinking. Governments need expertise to regulate AI, companies must develop it responsibly, and citizens should follow AI developments to enable informed debate. The benefits could be massive if the risks are managed well, as was done with past innovations.
The World's Most Powerful AI Model Suddenly Got 'Lazier' and 'Dumber.' A Radical Redesign of OpenAI's GPT-4 Could Be Behind the Decline in Performance. – Users of OpenAI's GPT-4 have reported degraded performance, with some calling the model "lazier" and "dumber" compared with its previous reasoning capabilities. Industry insiders are speculating that OpenAI may be creating several smaller GPT-4 models, known as "Mixture of Experts," that would act similarly to the large model but be less expensive to run. OpenAI has not responded to requests for comment.
Tools/Resources:
Disclaimer: AI in Education has no affiliation with any highlighted free or commercial products in this section.
Claude 2 – Anthropic has announced the release of Claude 2, its new AI model, which boasts improved performance, longer responses, and an API for businesses. The model can be accessed via a public-facing beta website, claude.ai, and has been enhanced for coding, math, and reasoning, achieving a score of 76.5% on the multiple-choice section of the Bar exam. The improvements have been made while also increasing the safety of the model, which now has an enhanced ability to produce harmless responses. The model is available for use in the US and UK, and the company is working to make it more globally available.
Research Rabbit – ResearchRabbit is a platform that aims to empower researchers by building technology supporting every step of their research. It provides a novel way of searching for papers and authors, monitoring new literature, visualizing research landscapes, and collaborating with colleagues, and pursues this vision of reimagining research with the support of the research community.
---
Disclaimer: The views and opinions expressed in the linked posts are those of the speakers and/or their entities and do not necessarily reflect the views or positions of the project AI in Education, the Centre for Learning Enhancement And Research, or The Chinese University of Hong Kong.