
Operational AI Guidelines

The University of Montevallo Operational AI Subcommittee has been exploring the potential applications of generative artificial intelligence (AI) within various departments or divisions across campus. Through committee meetings and a comprehensive campus-wide survey, we are identifying potential use cases and developing best practices for responsible AI implementation. 

This document aims to serve as a preliminary guide for University of Montevallo faculty and staff as they begin to explore the integration of AI tools into their operational workflows.

I. Unleashing AI’s Potential: Identified Use Cases

The following are some initial use cases identified through preliminary research and committee discussions: 

Streamlining Administrative Tasks: 

        • Automating routine tasks such as data entry, scheduling, and report generation. 
        • Generating draft emails and communications. 
        • Assisting with student onboarding processes. 
        • Simplifying travel and expense reimbursement procedures. 

Enhancing Communication: 

        • Generating personalized communication templates for students, faculty, and staff. 
        • Assisting in the creation of marketing and recruitment materials. 
        • Improving the efficiency of customer service interactions. 

Improving Decision-Making: 

        • Analyzing large datasets to identify trends and patterns. 
        • Providing insights for resource allocation and strategic planning. 
        • Supporting predictive modeling for student success and retention. 

Streamlining Research and Development: 

        • Assisting faculty and staff in conducting research and developing proposals. 
        • Automating literature reviews and data analysis. 
        • Facilitating the dissemination of research findings. 

Idea Generation and Concept Designing: 

        • Brainstorming new initiatives and programs. 
        • Developing innovative solutions to administrative challenges. 
        • Designing new processes and workflows. 
        • Creating new marketing and communication strategies. 

Example Use Cases: 

        • Idea Generation: The Office of Student Affairs could use AI to generate ideas for new student engagement programs, such as themed events, workshops, or clubs. 
        • Concept Designing: The Facilities Management department could use AI to generate concepts for new campus buildings or renovations, such as floor plans, energy-efficient designs, or sustainable building materials. 

II. The AI Landscape: Shaping the Future of Administration

Generative AI presents a transformative opportunity for operational functions at the University of Montevallo. By leveraging AI, the University can pursue the opportunities described below.

Increase Efficiency and Productivity: Automating routine tasks frees up staff time for more strategic and impactful work. 

Enhance Decision-Making: AI-powered analytics can provide valuable insights to inform strategic planning and resource allocation. 

Improve Communication: AI can help us personalize communication, reach wider audiences, and build stronger relationships with students, faculty, and the community. 

Foster Innovation: AI can be used to generate new ideas, explore innovative solutions, and improve the overall efficiency and effectiveness of operational processes. 

III. Navigating the AI Frontier: Responsible Implementation Guidelines

The University of Montevallo is committed to the responsible use of AI tools within the operational domain. To ensure alignment with institutional policies and best practices, please adhere to the following guidelines: 

University of Montevallo Operational AI Guidelines: The University of Montevallo has developed interim guidelines specifically for the use of AI tools within operational functions across campus. These guidelines provide guidance on data privacy, security, ethical considerations, and best practices for AI implementation. All staff are encouraged to review and adhere to these guidelines before utilizing AI tools in their operational work. 

Alignment with University IT Guidance: The University’s Technology Advisory Council (TAC) has published a guiding resource titled “Interim Guidance on Data Uses and Risks of Generative AI,” located later on this webpage. This document outlines University expectations regarding the use of AI platforms from a data security and privacy perspective. 

        • Data Stewardship: Pay particular attention to the guidelines section within this document that specifically addresses data stewardship. This section details crucial institutional regulations and requirements pertaining to the use of data with AI tools. 

AI Prompt Engineering: 

        • Mastering the Art of Prompting: Crafting effective prompts is crucial for successful AI interactions. 
        • Continuous Learning: Experiment with different prompts and refine your approach to achieve the desired outcomes. 
        • Resource Utilization: Utilize online resources and AI tool documentation to learn best practices for prompt engineering. 
        • Tool Collaboration: Leverage AI tools themselves to assist in prompt refinement. For example, you can ask ChatGPT for tips on improving your prompts for other AI tools.
        • Develop a Prompt Library: Over time, you may develop a library of effective prompts for specific tasks and tools, increasing efficiency and consistency. A brief illustrative sketch follows this list. 
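As a minimal illustration of the prompt-library idea above, the following Python sketch stores reusable prompt templates and fills in task-specific details. The template names, placeholder fields, and example content are hypothetical, not University-endorsed tooling; adapt them to your own tasks and approved tools.

    # prompt_library.py - a minimal sketch of a reusable prompt library.
    # All template names and placeholder fields below are illustrative examples.
    PROMPT_LIBRARY = {
        "draft_email": (
            "Draft a professional email to {audience} announcing {topic}. "
            "Keep it under 150 words and use a clear, friendly tone."
        ),
        "summarize_notes": (
            "Summarize the following meeting notes into 3-5 bullet points, "
            "listing action items last:\n\n{notes}"
        ),
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Look up a stored template and fill in the task-specific details."""
        return PROMPT_LIBRARY[name].format(**fields)

    if __name__ == "__main__":
        prompt = build_prompt(
            "draft_email",
            audience="new student employees",
            topic="the fall onboarding schedule",
        )
        print(prompt)  # paste into an approved AI tool, then review the output

A prompt library like this also makes it easier for a department to share phrasing that has worked well, rather than each person starting from a blank page.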

Copyright Considerations: 

        • Terms of Service: Carefully review the terms of service for any AI tool you utilize. Understand the limitations and restrictions on the use and distribution of AI-generated content.
        • Copyright Ownership: Be mindful of copyright issues. By default, treat any AI-generated content as not eligible for copyright. 

Meeting and Operational Considerations: 

        • Meeting Notes and Summaries: AI tools can assist with meeting notes and summaries. However: 
          • Ensure accuracy by reviewing and editing AI-generated summaries. 
          • Be mindful that meeting notes and summaries may be subject to public records requests. 
          • Avoid using AI tools for meetings that discuss confidential or sensitive information. 
        • Transparency and Consent: If using AI tools in meetings, inform attendees beforehand and obtain their consent. 
        • Data Privacy: Be mindful of the potential for AI meeting tools to collect data on attendee behavior. 

Writing and Editing: 

        • Initial Editing: Use AI tools as a first pass at editing, followed by thorough human review and refinement to ensure alignment with University of Montevallo’s brand voice and style guidelines. 
        • Drafting: AI tools can assist in generating early drafts of written content, such as emails, reports, and presentations. However, always review AI-generated content for accuracy, clarity, and appropriateness. 

Image and Media Generation: 

        • Quality Control: AI image generators can create visually appealing images, but they may require significant post-processing and refinement. 
        • Ethical Considerations: Avoid using AI to generate images that are misleading, deceptive, or that could be considered harmful or offensive. 
        • Mascot Usage: The use of generative AI to create depictions of Freddie the Falcon is not permitted. Promotional appearances of the Freddie the Falcon mascot not related to Athletics are managed by University Marketing & Communications. 

By adhering to these guidelines, we can ensure the responsible and ethical use of AI tools within University of Montevallo’s operational functions. 

IV. Key AI Concepts

Large Language Models (LLMs): LLMs are sophisticated algorithms trained on massive datasets of text and code. They can generate human-like text, translate languages, write different kinds of creative content, and answer questions in an informative way. Examples include GPT-3.5, GPT-4, and Bard.

Limitations of LLMs: LLMs can sometimes generate inaccurate or misleading information. They may also reflect biases present in their training data. It’s crucial to critically evaluate the output of LLMs and cross-reference information from reliable sources.
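To make the review step concrete, the sketch below shows an LLM producing a draft that is routed to a human reviewer rather than used directly. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable purely for illustration; substitute whichever University-approved tool you actually use.

    # draft_review.py - a sketch of routing LLM output through human review.
    # Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
    # environment variable; these are illustrative assumptions, not an
    # endorsement of a specific vendor or tool.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_announcement(topic: str) -> str:
        """Ask the model for a first draft; the result is NOT final copy."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": "You draft concise campus announcements."},
                {"role": "user", "content": f"Draft a short announcement about {topic}."},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        draft = draft_announcement("the new travel reimbursement process")
        # A person must review the draft for accuracy, tone, and policy
        # compliance before anything is sent or published.
        print("DRAFT FOR HUMAN REVIEW ONLY:\n", draft)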

V. Disclaimer

AI is a rapidly evolving field. The use of AI tools requires careful consideration and ongoing evaluation. This document provides a starting point for exploring the potential and limitations of AI in operational functions at the University of Montevallo. 


Acknowledgement 

This document was developed by the University of Montevallo, drawing upon insights from existing guidelines at: Michigan State University, University of Alabama Birmingham, Arizona State University, Carnegie Mellon University, and the University of California, Berkeley. This list is not exhaustive, and the University of Montevallo acknowledges the valuable work being done by numerous other institutions in navigating the ethical and responsible use of generative AI. 

Interim Guidance on Data Uses and Risks of Generative AI

Generative Artificial Intelligence (AI) language models, including products such as ChatGPT, Bard, and Microsoft Copilot, offer the potential to enhance efficiency and productivity within the University of Montevallo. These tools can assist with various administrative tasks such as drafting communications, summarizing meeting notes, and analyzing data. However, the use of generative AI tools requires careful consideration to ensure the security, privacy, and ethical use of University data. This interim guidance outlines key principles and best practices for the responsible use of generative AI within the operational domain at the University of Montevallo. 

I. Information Privacy

Personally Identifiable Information (PII): Do not enter any Personally Identifiable Information (PII) into generative AI tools. This includes, but is not limited to, information covered by the Family Educational Rights and Privacy Act (FERPA), such as the examples below (a simple pre-submission screening sketch follows the list):

        • Student IDs
        • Social Security numbers
        • Addresses
        • Phone numbers
        • Email addresses
        • Dates of birth
        • Medical records
        • Academic records
        • Financial aid information
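As an illustrative, non-exhaustive aid (not a substitute for judgment or for University review processes), the sketch below checks text for a few obvious PII patterns such as Social Security numbers, phone numbers, email addresses, and dates before it is pasted into an AI tool. The patterns shown are simplified assumptions and will not catch every form of PII.

    # pii_screen.py - a simplified sketch that flags obvious PII patterns
    # before text is pasted into a generative AI tool. The regexes below are
    # illustrative and intentionally incomplete; they do not replace human review.
    import re

    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    }

    def find_possible_pii(text: str) -> dict[str, list[str]]:
        """Return any matches for the simple PII patterns above."""
        results: dict[str, list[str]] = {}
        for name, pattern in PII_PATTERNS.items():
            matches = pattern.findall(text)
            if matches:
                results[name] = matches
        return results

    if __name__ == "__main__":
        sample = "Student reachable at jdoe@example.edu or 205-555-0142."
        hits = find_possible_pii(sample)
        if hits:
            print("Possible PII found; do not submit:", hits)
        else:
            print("No obvious PII found; still review manually before submitting.")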

Employee Information: Avoid entering sensitive employee information, such as Social Security numbers, salaries, performance reviews, and medical records (protected by HIPAA), into generative AI tools. Limited use of basic employee information (e.g., job titles, department names) may be permissible for specific administrative tasks, such as generating draft job descriptions. However, this must be carefully considered and may require prior approval from the appropriate University department (e.g., Human Resources).

Confidential University Information: Do not enter any confidential University information into generative AI tools, including:

        • Strategic plans
        • Financial reports
        • Legal documents 
        • Internal communications
        • Proprietary research data
        • Information covered by non-disclosure agreements 

II. Data Security

Data Minimization: Only enter the absolute minimum amount of data necessary to achieve the desired outcome. 

Data Anonymization: Whenever possible, anonymize data before entering it into generative AI tools. For example, remove names and replace them with identifiers. 
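As a minimal sketch of the anonymization step above (the names and identifier format are illustrative assumptions only), the following replaces known names with neutral identifiers and keeps the mapping locally so results can be re-identified after the AI step, where appropriate.

    # anonymize.py - a minimal sketch of replacing names with neutral identifiers
    # before text is shared with a generative AI tool. The names and identifier
    # format here are illustrative assumptions only.

    def anonymize(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
        """Replace each known name with an identifier; return text and mapping."""
        mapping: dict[str, str] = {}
        for i, name in enumerate(names, start=1):
            identifier = f"PERSON_{i}"
            mapping[name] = identifier
            text = text.replace(name, identifier)
        return text, mapping

    if __name__ == "__main__":
        original = "Jane Smith emailed John Doe about the advising schedule."
        cleaned, key = anonymize(original, ["Jane Smith", "John Doe"])
        print(cleaned)  # safer text to paste into an AI tool
        print(key)      # keep this mapping on secure University systems only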

Prioritize University-Approved Tools: Prioritize the use of University-approved AI tools and services that have undergone security and privacy assessments. 

Adhere to Data Classification: Ensure that data entered into generative AI tools aligns with the University’s data classification system (Restricted, Confidential, Public) as outlined in Policy 01:009 – Data Governance. 

Comply with Data Storage Policies: Adhere to the data storage guidelines outlined in Policy 01:004 – Data Storage, ensuring that data is stored on secure University systems and not on personal devices unless explicitly permitted and securely encrypted. 

Understand Data Handling Practices: Be aware that data entered into generative AI tools may be used to train the AI model, potentially making that data publicly available.

Legal and Regulatory Considerations:

        • FERPA Compliance: Ensure strict adherence to the Family Educational Rights and Privacy Act (FERPA) when handling student data. 
        • HIPAA Compliance: Ensure compliance with the Health Insurance Portability and Accountability Act (HIPAA) when handling any protected health information. 
        • Freedom of Information Act (FOIA): Be mindful of the Freedom of Information Act (FOIA) when generating or using AI-generated content that may be subject to public disclosure requests. 
        • Solomon Amendment: Be aware of the requirements of the Solomon Amendment regarding the release of student recruiting information to military recruiters. 
        • Data Protection Policies: Ensure compliance with all applicable University data protection policies, including those related to data security, privacy, and access. 

III. Ethical Considerations

Bias and Fairness:

        • Be mindful of potential biases that may be present in AI-generated outputs. 
        • Carefully review AI-generated content for any unintended biases, discriminatory language, or inaccuracies. 
        • Ensure that AI-generated communications are inclusive and respectful of all individuals and groups. 

Transparency:

        • Be transparent with colleagues and stakeholders regarding the use of AI tools. 
        • Clearly communicate the limitations and potential risks associated with AI-generated outputs. 

Human Oversight:

        • Always maintain human oversight and review AI-generated outputs to ensure accuracy, quality, and alignment with University values.
        • Do not solely rely on AI-generated outputs for critical decisions or sensitive tasks.

IV. Recommended Practices

Consult University Policies: Review and adhere to all applicable University policies, including those referenced in this guidance (e.g., Policy 01:009 – Data Governance, Policy 01:004 – Data Storage, and the University’s Website Privacy Statement). 

AI Prompt Engineering:

        • Mastering the Art of Prompting: Crafting effective prompts is crucial for successful AI interactions. 
        • Continuous Learning: Experiment with different prompts and refine your approach to achieve the desired outcomes. 
        • Resource Utilization: Utilize online resources and AI tool documentation to learn best practices for prompt engineering. 
        • Tool Collaboration: Leverage AI tools themselves to assist in prompt refinement. For example, you can ask ChatGPT for tips on improving your prompts for other AI tools. 
        • Develop a Prompt Library: Over time, you may develop a library of effective prompts for specific tasks and tools, increasing efficiency and consistency. 

Copyright Considerations:

        • Terms of Service: Carefully review the terms of service for any AI tool you utilize. Understand the limitations and restrictions on the use and distribution of AI-generated content. 
        • Copyright Ownership: Be mindful of copyright issues. By default, treat any AI-generated content as not eligible for copyright. 

Meeting and Operational Considerations: 

        • Meeting Notes and Summaries: AI tools can assist with meeting notes and summaries. However: 
          • Ensure accuracy by reviewing and editing AI-generated summaries. 
          • Be mindful that meeting notes and summaries may be subject to public records requests. 
          • Avoid using AI tools for meetings that discuss confidential or sensitive information. 
        • Transparency and Consent: If using AI tools in meetings, inform attendees beforehand and obtain their consent. 
        • Data Privacy: Be mindful of the potential for AI meeting tools to collect data on attendee behavior. 

Writing and Editing: 

        • Initial Editing: Use AI tools as a first pass at editing, followed by thorough human review and refinement to ensure alignment with University of Montevallo’s brand voice and style guidelines. 
        • Drafting: AI tools can assist in generating early drafts of written content, such as emails, reports, and presentations. However, always review AI-generated content for accuracy, clarity, and appropriateness. 

Idea Generation: When using AI tools for idea generation, acknowledge the role of AI in developing any ideas you put forward; do not present AI-assisted ideas as solely your own intellectual property. 

Research: 

        • Data Use in Research: Researchers must comply with all relevant research ethics guidelines and regulations, including those related to the use of human subjects data. 
        • Data Anonymization: Ensure that any research data entered into generative AI tools is properly anonymized to protect participant privacy and comply with regulations such as FERPA and HIPAA. 
        • Intellectual Property: 
          • Be mindful of intellectual property rights when using AI-generated content in research publications. 
          • Clearly disclose the use of AI tools in any research publications that utilize AI-generated content. 

Website Privacy: Be mindful of the University of Montevallo’s Website Privacy Statement when using generative AI tools to interact with website visitors or process information collected through the website. 

V. Disclaimer

This interim guidance is subject to change as the University of Montevallo continues to evaluate and refine its approach to AI within the operational domain. This guidance focuses on the use of AI within operational functions. For guidance on the use of AI in academic settings, please refer to relevant faculty policies and guidelines. 


Acknowledgement 

This document was developed by the University of Montevallo, drawing upon insights from existing guidelines at: Michigan State University, University of Alabama Birmingham, Arizona State University, Carnegie Mellon University, and the University of California, Berkeley. This list is not exhaustive, and the University of Montevallo acknowledges the valuable work being done by numerous other institutions in navigating the ethical and responsible use of generative AI. 
