Student Ethics Policy
Purpose
This policy ensures that students use AI responsibly, remain aware of its limitations, and maintain academic integrity.
Policy
1. Responsible Use of Generative AI:
Students are encouraged to use generative AI tools such as ChatGPT and DALL-E for learning and research, but must do so responsibly and with caution. AI tools supplement personal work; they are not a replacement for critical thinking, problem-solving, or original content creation. Misusing AI to obtain unearned academic results may lead to disciplinary action.
2. General Principles for AI Use:
- Human Oversight: AI is not infallible and should not be relied on without review. Students must review all AI-generated output to ensure it is accurate, relevant, and ethical.
- Academic Integrity: Students must use AI tools in compliance with the institution’s academic integrity policies. Submitting AI-generated work without proper acknowledgment is considered a breach of conduct.
- Instructor Rules: Students must follow the guidelines set by their instructor or program; those guidelines supersede the general rules of use in this policy.
3. Transparency and Acknowledgment:
- Students must always disclose the use of AI in their academic submissions. Any assistance provided by generative AI should be acknowledged clearly (e.g., “This report was assisted by GPT-4 for sentence rephrasing”).
- Proper Citation: Just as students must cite books or journal articles, AI tools must also be appropriately referenced following guidelines set by the institution. Improper or misleading attribution can be considered plagiarism.
4. Ethical Considerations:
- Bias and Fairness: Students should be aware that AI tools may produce biased, outdated, or incomplete information. It is the student’s responsibility to check for any biased content, especially regarding sensitive topics. For instance, AI-generated content could inadvertently reflect gender, racial, or cultural biases.
- Data Privacy: Ensure that sensitive or private data (yours or others’) is not shared with AI tools, as these tools may not comply with privacy regulations like GDPR or institutional data protection guidelines.
- Misinformation: AI may generate factually incorrect information. Students should fact-check AI content against credible academic sources before submission.
- Intellectual Property: AI-generated art, music, and other creative works often draw on existing creators' contributions, so it is essential to respect intellectual property. Students should credit both the AI model and, where possible, the original creators whose work influenced the training data. When using AI-generated content in assignments, acknowledge its origins to avoid misappropriating the creators' contributions.
- Moral and Legal Responsibility: Avoid using AI in ways that perpetuate harmful stereotypes or violate intellectual property laws.
5. Understanding Limitations of Generative AI:
- Limited Context: AI tools operate within limited context and often generate content without genuine understanding of nuanced or complex information. Outputs may appear correct yet contain factual errors, irrelevant details, or misleading suggestions. Students should not assume that AI-generated content is accurate.
- Lack of Critical Thinking: AI lacks reasoning, creativity, and judgment. Tasks requiring innovation, critical analysis, and original thought should primarily reflect the student’s independent work; AI should support only routine tasks such as summarization or formatting.
6. Academic Guidelines on AI Usage:
- Assignments and Exams: Students are prohibited from using AI tools in assessments or examinations where independent work is expected unless explicitly allowed by the instructor. Collaboration with AI on problem-solving tasks (e.g., math, coding) may be permitted for learning purposes but must be properly documented.
- Collaborative Projects: Unless explicitly prohibited by the instructor, AI may be used to enhance productivity in group work or research, but all AI contributions must be disclosed and agreed upon by the group. Failing to disclose AI-generated work within a collaborative project breaches academic integrity.
7. Acknowledging AI's Contribution in Research:
- Any research paper, project, or publication where AI tools played a role must acknowledge the tools and their contributions. This includes reports, essays, presentations, and any formal academic submission. It’s not enough to note that AI was used; students must specify how AI influenced the final product.
- Examples of Acknowledgment: “GPT-4 was used to generate initial research summaries,” or “DALL-E was employed to visualize concept prototypes.”
8. Ethical Use in Public Domains:
- Public Sharing: Students should avoid sharing AI-generated content that could perpetuate misinformation or harm others. AI-generated works should not violate copyright laws or other intellectual property rights.
9. Ongoing Review and Policy Updates:
- This policy will be revisited as necessary to adapt to emerging AI technologies and evolving ethical concerns. Students are responsible for staying informed about any changes.
Written by Zoe Carrell, head of the RTC AI Student Group