Class Scheduling AI Assistant

The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology; therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center here: https://www.anthology.com/trust-center/trustworthy-ai-approach.

As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explainability to help our clients with their implementation of the Class Scheduling AI Assistant. We recommend that administrators carefully review this page and ensure that instructors are aware of the considerations and recommendations below before you activate the Class Scheduling AI Assistant functionalities for your institution.

How to contact us:

  • For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at trustworthy-ai@anthology.com.

  • For questions or feedback about the functionality or output of the Class Scheduling AI Assistant, please submit a client support ticket at https://support.anthology.com.

AI-facilitated functionalities

The Class Scheduling AI Assistant provides instructors with information on class sections, additional section needs, and faculty hiring needs. It is intended to help instructors streamline the registration process by forecasting section needs. Anthology has partnered with Microsoft to provide this functionality, not least because of Microsoft's long-standing commitment to the ethical use of AI.

The Class Scheduling AI Assistant provides the following generative AI-facilitated pre-defined prompts in Anthology Student:

Question: Which courses have total seats needed less than 10 for the term?
Answer: Provides a list of courses that have fewer than 10 seats needed in the term.

Question: Which courses have total seats needed exceeding seats available by more than 20 seats for the term?
Answer: Provides a list of courses where seats needed exceed the available capacity by more than 20 in the term.

Question: Where is it recommended that I add sections for the term?
Answer: Provides a list of courses that need more sections to fulfill the demand in the term.

Users can select one of these prompts or write their own query, which the assistant will use to analyze and retrieve data from the class seat projection results.

Note: The Class Scheduling AI Assistant will answer only queries that are relevant to the class scheduling projection data. It will not respond to out-of-scope or jailbreak-style prompts, for example, "Who is the president of the United States?"
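The pre-defined prompts and the scope restriction described above can be thought of as a fixed prompt list combined with a guardrail instruction that keeps the model on the projection data. The Python sketch below is purely illustrative and uses hypothetical names and wording; it is not the actual Anthology Student implementation.

    # Illustrative sketch only: how pre-defined prompts and a scope guardrail
    # might be modeled on the client side. All names and wording here are
    # hypothetical, not the actual Anthology Student implementation.

    PREDEFINED_PROMPTS = [
        "Which courses have total seats needed less than 10 for the term?",
        "Which courses have total seats needed exceeding seats available by more than 20 seats for the term?",
        "Where is it recommended that I add sections for the term?",
    ]

    # A system message of this kind is one common way to keep a model's answers
    # restricted to the scheduling projection data (assumed wording).
    SYSTEM_MESSAGE = (
        "You answer questions only about the class seat projection data provided. "
        "If a question is unrelated to class scheduling, refuse to answer."
    )

    def build_messages(user_query: str, projection_rows: list[dict]) -> list[dict]:
        """Combine the guardrail system message, the projection data, and the query."""
        context = "\n".join(str(row) for row in projection_rows)
        return [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": f"Projection data:\n{context}\n\nQuestion: {user_query}"},
        ]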

Key facts

Question: What functionalities use AI systems?
Answer: The Class Scheduling AI Assistant functionalities described above (class section needs, faculty hires, etc. for class scheduling in Anthology Student).

Question: Is this a third-party-supported AI system?
Answer: Yes. The Class Scheduling AI Assistant is powered by Microsoft's Azure OpenAI Service.

Question: How does the AI system work?
Answer: The Class Scheduling AI Assistant leverages Microsoft's Azure OpenAI Service to generate outputs. It uses class section and registration information for current and future terms, analyzes the history of the past two years (e.g., course code, course name, total sections scheduled, total capacity), and prompts the Azure OpenAI Service accordingly via the Azure OpenAI Service API. Instructors can include additional prompt context for more tailored output. The Azure OpenAI Service generates the output based on the prompt, and the content is surfaced in the user interface. (An illustrative sketch of this flow is provided at the end of this section.)

Question: Where is the AI system hosted?
Answer: Anthology currently uses multiple global Azure OpenAI Service instances. The primary instance is hosted in the United States, but at times we may utilize resources in other locations such as Canada, the United Kingdom, or France to provide the best availability of the Azure OpenAI Service for our clients.

Question: Is this an opt-in functionality?
Answer: Yes. Administrators need to activate the Class Scheduling AI Assistant in the Anthology Student settings for the Class Scheduling AI Assistant, where they can activate or deactivate the AI functionality. Administrators also need to assign Class Scheduling AI Assistant access privileges to user roles as necessary, such as the Instructor role.

Question: How is the AI system trained?
Answer: Anthology is not involved in the training of the large language models that power the Class Scheduling AI Assistant functionalities. These models are trained by Microsoft as part of the Azure OpenAI Service. Microsoft provides information about how the large language models are trained in the Introduction section of Microsoft's Transparency Note and the links provided within it. Anthology does not further fine-tune the Azure OpenAI Service using our own or our clients' data.

Question: Is client data used for (re)training the AI system?
Answer: No. Microsoft contractually commits in its Azure OpenAI terms with Anthology not to use any input into, or output of, the Azure OpenAI Service for the (re)training of the large language models. The same commitment is made in Microsoft's documentation on Data, privacy, and security for Azure OpenAI Service.

Question: How does Anthology use personal information in the provision of the AI system?
Answer: Anthology only uses the information collected in connection with the Class Scheduling AI Assistant to provide, maintain, and support the Class Scheduling AI Assistant, and only where we have the contractual permission to do so and in accordance with applicable law. You can find more information about Anthology's approach to data privacy in our Trust Center.

Question: In the case of a third-party-supported AI system, how will the third party use personal information?

Answer: Only limited course and class section information is provided to Microsoft for the Azure OpenAI Service. This should generally not include personal information (except in cases where personal information, such as an instructor's name, is included in that information).

Microsoft does not use any Anthology data or Anthology client data it has access to (as part of the Azure OpenAI Service) to improve the OpenAI models, to improve its own or third-party products or services, or to automatically improve the Azure OpenAI models for use in Anthology's resource (the models are stateless). Microsoft reviews prompts and output as part of its content filtering to prevent abuse and harmful content generation.
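For readers who want a concrete picture of the flow described under "How does the AI system work?", the sketch below shows how projection data might be combined with a prompt and sent to the Azure OpenAI Service using the public openai Python package. The endpoint, key, deployment name, and data fields are placeholders and assumptions; the actual integration inside Anthology Student may differ.

    # Minimal sketch of the flow described above, using the public Azure OpenAI
    # API via the "openai" Python package. Endpoint, key, deployment name, and
    # data shapes are placeholders; the real Anthology Student integration may differ.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        api_key="<api-key>",                                        # placeholder
        api_version="2024-02-01",
    )

    def summarize_section_needs(projection_rows: list[dict], extra_context: str = "") -> str:
        """Prompt the model with projected seat data and optional instructor-supplied context."""
        data_block = "\n".join(
            f"{r['course_code']} | {r['course_name']} | sections: {r['sections_scheduled']} "
            f"| capacity: {r['total_capacity']} | seats needed: {r['seats_needed']}"
            for r in projection_rows
        )
        messages = [
            {"role": "system", "content": "Answer only from the class seat projection data provided."},
            {"role": "user", "content": f"{data_block}\n\n{extra_context}\nWhere should sections be added?"},
        ]
        response = client.chat.completions.create(
            model="<deployment-name>",  # Azure OpenAI deployment name, placeholder
            messages=messages,
        )
        return response.choices[0].message.content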

Considerations and recommendations for institutions

Intended use cases

The Scheduling Assistant is intended only to support and respond to queries that are within the scope of the projected data. The functionality helps registrars and instructors with information that is critical to institutional analysis and student success.

Out of scope use cases

The Scheduling Assistant is powered by Microsoft's Azure OpenAI Service, which has a very broad range of use cases. The Scheduling Assistant is configured to answer queries within the scope of the projected data and will not respond to queries that are out of scope.

In particular, the points below should be followed when prompting:

  • Only use prompts that are intended to solicit more relevant output from the Class Scheduling AI Assistant (e.g., queries relevant to projected data).

  • Do not use prompts to solicit output beyond the intended functionality. For instance, you should not use the prompt to request sources or references for the output.

  • Suggested output for sensitive topics may be limited. Azure OpenAI Service has been trained and implemented in a manner to minimize illegal and harmful content. This includes a content filtering functionality. This could result in limited output or error messages when the Scheduling Assistant is queried for sensitive topics (e.g., self-harm, violence, hate, sex).
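As an illustration of the last point, a caller of the Azure OpenAI Service can detect content-filter outcomes and show a neutral message instead of a raw error. This is a hedged sketch based on the publicly documented behaviour of the openai Python package with Azure OpenAI (blocked prompts surface as HTTP 400 errors, and filtered completions report a "content_filter" finish reason); it is not the Scheduling Assistant's actual error handling.

    # Hedged sketch: one way a caller might surface Azure OpenAI content-filter
    # outcomes as a friendly message rather than a raw error.
    import openai

    def ask_scheduling_assistant(client, deployment: str, messages: list[dict]) -> str:
        try:
            response = client.chat.completions.create(model=deployment, messages=messages)
        except openai.BadRequestError:
            # Prompts blocked by the content filter typically come back as HTTP 400.
            return "This request could not be processed. Please rephrase your question."
        choice = response.choices[0]
        if choice.finish_reason == "content_filter":
            # The completion itself was truncated or withheld by the filter.
            return "Part of the answer was withheld by the content filter."
        return choice.message.content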

Trustworthy AI principles in practice

Anthology and Microsoft believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Microsoft have worked to address the risks applicable to the legal, ethical, and responsible use of AI and to implement Anthology's Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own legal and ethical AI reviews of their implementation.

Transparency and Explainability

Administrators can enable AI functionality in Anthology Student.

In the user interface for instructors, the Class Scheduling AI Assistant analyzes and provides results from the projected data. Instructors are also requested to review the results before use.

In addition to the information provided on how the Class Scheduling AI Assistant and the Azure OpenAI Service models work, Microsoft provides additional information about the Azure OpenAI Service in its Transparency Note.

We encourage clients to be transparent about the use of AI within the Scheduling Assistant and provide their instructors and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.

Reliability and accuracy

We make it clear in Anthology Student that this is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed.

In the user interface, instructors are requested to review the output for accuracy.

As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output. While the specific nature of the Scheduling Assistant and our implementation is intended to minimize inaccuracy, it is our client’s responsibility to review the output for accuracy, bias, and other potential issues.

As mentioned above, clients should not use the prompt to solicit output beyond the intended use cases, particularly as this could result in inaccurate output (e.g., where references or sources are requested).

As part of their communication regarding the Scheduling Assistant, institutions should make their instructors aware of this potential limitation.

Institutions can report any inaccurate output to us using the channels listed in the introduction.

Fairness

Large language models inherently present risks relating to stereotyping, over/under-representation and other forms of harmful bias. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.

Given these risks, we have carefully chosen the Scheduling Assistant functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.

Nonetheless, it cannot be excluded that some of the output may be impacted by harmful bias. As mentioned above under 'Reliability and accuracy', instructors are requested to review output, which can help to reduce any harmful bias.

As part of their communication regarding the Scheduling Assistant, institutions should make their instructors aware of this potential limitation.

Clients can report any potentially harmful bias to us using the contact channels listed in the introduction.

Privacy and Security

As described in the ‘Key facts’ section above, only limited personal information is used for the Scheduling Assistant and accessible to Microsoft. The section also describes our and Microsoft’s commitment regarding the use of any personal information. Given the nature of the Scheduling Assistant, personal information in the generated output is also expected to be limited.

Our Anthology Student product is ISO 27001/27017/27018 certified, and we are currently working towards certification against ISO 27701. These certifications will cover the Scheduling Assistant-related personal information managed by Anthology. You can find more information about Anthology's approach to data privacy and security in our Trust Center.

Microsoft describes its data privacy and security practices and commitments in the documentation on Data, privacy, and security for Azure OpenAI Service.

Notwithstanding Anthology's and Microsoft's commitments regarding data privacy and not using input to (re)train the models, institutions may want to advise their instructors not to include any personal information or other confidential information in their prompts.
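One practical way to act on this advice, should an institution want an extra safeguard, is a lightweight client-side check that warns when a prompt appears to contain personal information. The sketch below is an assumption-laden illustration only; the patterns are deliberately simplistic, and this is not a feature of the Scheduling Assistant.

    # Illustrative only: a very simple client-side check that warns before a
    # prompt containing likely personal information (e.g., an email address or
    # phone number) is sent. Patterns and behaviour are assumptions, not a
    # feature of the Scheduling Assistant.
    import re

    PII_PATTERNS = [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),            # email addresses
        re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
    ]

    def looks_like_pii(prompt: str) -> bool:
        """Return True if the prompt appears to contain personal information."""
        return any(p.search(prompt) for p in PII_PATTERNS)

    if looks_like_pii("Contact jane.doe@example.edu about MATH 101"):
        print("Warning: remove personal information before submitting this prompt.")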

Safety

Large language models inherently present a risk of outputs that may be inappropriate, offensive, or otherwise unsafe. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.

Given these risks, we have carefully chosen the Class Scheduling AI Assistant functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.

Nonetheless, it cannot be excluded that some of the output may be unsafe. As mentioned above under 'Reliability and accuracy', instructors are requested to review output, which can further help reduce the risk of unsafe output.

As part of their communication regarding the Scheduling Assistant, institutions should make their instructors aware of this potential limitation.

Institutions should report any potentially unsafe output to us using the channels listed in the introduction.

Humans in control

To minimize the risks related to the use of generative AI for our clients and their users, we intentionally put institutions in control of the Scheduling Assistant's functionalities. The Scheduling Assistant is therefore an opt-in feature. Administrators must activate the Scheduling Assistant and can then activate each functionality separately. They can also deactivate the Scheduling Assistant.

Additionally, instructors are in control of the output. They are requested to review text output.

The Scheduling Assistant does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.

We encourage institutions to carefully review this document, including the linked information provided herein, to ensure they understand the capabilities and limitations of the Scheduling Assistant and the underlying Azure OpenAI Service before they activate the Scheduling Assistant in the production environment.

Value alignment

Large language models inherently have risks regarding output that is biased, inappropriate, or otherwise not aligned with Anthology’s values or the values of our clients. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.

Additionally, large language models (like every technology that serves broad purposes) present the risk that they can be misused for use cases that do not align with the values of Anthology, our clients or their end users, or those of society more broadly (e.g., for criminal activities, or to create harmful or otherwise inappropriate output).

Given these risks, we have carefully designed and implemented our Scheduling Assistant functionalities in a manner to minimize the risk of misaligned output. We have also intentionally omitted potentially high-stakes functionalities.

Microsoft also reviews prompts and output as part of its content filtering functionality to prevent abuse and harmful content generation.

Intellectual property

Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from the use of these models.

Ultimately, it is our client’s responsibility to review the output generated by the Scheduling Assistant for any potential intellectual property right infringement.

Accountability

Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical AI review of functionalities such as those provided by the Scheduling Assistant are key pillars of the program.

To deliver the Scheduling Assistant, we partnered with Microsoft to leverage the Azure OpenAI Service, which powers the Scheduling Assistant. Microsoft has a long-standing commitment to the ethical use of AI.

Clients should consider implementing internal policies, procedures, and reviews of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients' review of the Scheduling Assistant.

Further information

Anthology’s Trustworthy AI approach (https://www.anthology.com/trust-center/trustworthy-ai-approach)

Microsoft’s Responsible AI page (https://www.microsoft.com/en-us/ai/responsible-ai)

Microsoft’s Transparency Note for Azure OpenAI Service (https://learn.microsoft.com/en-us/legal/cognitive-services/openai/transparency-note?tabs=text)

Microsoft’s page on Data, privacy, and security for Azure OpenAI Service (https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy?tabs=azure-portal)