Setup of Extension Services
This section describes the required setup for various extension services that can be used to activate additional functionality in your Artemis instance.
EduTelligence Suite
EduTelligence is a comprehensive suite of AI-powered microservices designed to enhance Artemis with intelligent features. Some of the AI-powered services that integrate with Artemis are now part of this unified suite.
Compatibility: EduTelligence maintains compatibility with different versions of Artemis. Please refer to the compatibility matrix to ensure you're using compatible versions for optimal integration and functionality.
Repository: https://github.com/ls1intum/edutelligence
Iris & Pyris Setup Guide
Pyris is now part of the EduTelligence suite. Please check the compatibility matrix to ensure you're using compatible versions of Artemis and EduTelligence.
Overview
Iris is an intelligent virtual tutor integrated into Artemis, providing one-on-one programming assistance, course content support, and competency generation for students. Iris relies on Pyris, an intermediary service from the EduTelligence suite that brokers requests to Large Language Models (LLMs) using FastAPI.
This guide consolidates everything you need to configure both Artemis and Pyris so they communicate securely and reliably.
Artemis Configuration
Prerequisites
- Ensure you have a running instance of Artemis.
- Have access to the Artemis deployment configuration (e.g., application-artemis.yml).
- Decide on a shared secret that will also be configured in Pyris.
Enable the iris Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,iris
Configure Pyris API endpoints
The Pyris service is addressed by Artemis via HTTP(s). Extend src/main/resources/config/application-artemis.yml like so:
artemis:
  # ...
  iris:
    url: http://localhost:8000
    secret: abcdef12345
The value of secret must match one of the tokens configured under api_keys in your Pyris application.local.yml.
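Conceptually, Pyris validates the incoming secret against its configured api_keys. The following is a minimal sketch of such a check (not Pyris's actual code), using a constant-time comparison to avoid leaking timing information:

```python
import hmac

def is_authorized(presented_token: str, configured_tokens: list[str]) -> bool:
    """Return True if the presented token matches any configured api_keys entry.

    hmac.compare_digest performs a constant-time comparison, which is the
    idiomatic way to compare secrets.
    """
    return any(hmac.compare_digest(presented_token, t) for t in configured_tokens)

# The secret in application-artemis.yml must match a token in Pyris's api_keys:
print(is_authorized("abcdef12345", ["abcdef12345"]))  # True
print(is_authorized("wrong-token", ["abcdef12345"]))  # False
```

If the tokens do not match, Artemis requests to Pyris will be rejected as unauthorized.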
For detailed information on deploying and configuring Pyris itself, continue with the next section.
Pyris Service Setup
Prerequisites
- A server/VM or local machine.
- Python 3.12: Ensure that Python 3.12 is installed.
python --version
(Should be 3.12)
- Poetry: Used to manage Python dependencies and the virtual environment.
- Docker and Docker Compose: Required if you want to run Pyris via containers.
Local Development Setup
1. Clone the EduTelligence Repository
Clone the EduTelligence repository (https://github.com/ls1intum/edutelligence) onto your machine and switch into the iris subdirectory.
git clone https://github.com/ls1intum/edutelligence.git
cd edutelligence/iris
2. Install Dependencies
Pyris uses Poetry for dependency management. Install all required packages (this also creates the virtual environment):
poetry install
Install the repository-wide pre-commit hooks from the EduTelligence root directory with pre-commit install if you plan to contribute changes.
3. Create Configuration Files
- Create an Application Configuration File
Create an application.local.yml file in the iris directory, based on the provided example.
cp application.example.yml application.local.yml
Example application.local.yml:
# Token that Artemis will use to access Pyris
api_keys:
  - token: "your-secret-token"

# Weaviate connection
weaviate:
  host: "localhost"
  port: "8001"
  grpc_port: "50051"

env_vars:
Make sure the token you define here matches the secret configured in Artemis.
- Create an LLM Config File
Create an llm_config.local.yml file in the iris directory.
cp llm_config.example.yml llm_config.local.yml
The OpenAI configuration examples are intended solely for development and testing purposes and should not be used in production environments. For production use, we recommend configuring a GDPR-compliant solution.
Example OpenAI Configuration
- id: "oai-gpt-41-mini"
  name: "GPT 4.1 Mini"
  description: "GPT 4.1 Mini on OpenAI"
  type: "openai_chat"
  model: "gpt-4.1-mini"
  api_key: "<your_openai_api_key>"
  tools: []
  cost_per_million_input_token: 0.4
  cost_per_million_output_token: 1.6
Example Azure OpenAI Configuration
- id: "azure-gpt-4-omni"
  name: "GPT 4 Omni"
  description: "GPT 4 Omni on Azure"
  type: "azure_chat"
  endpoint: "<your_azure_model_endpoint>"
  api_version: "2024-02-15-preview"
  azure_deployment: "gpt4o"
  model: "gpt4o"
  api_key: "<your_azure_api_key>"
  tools: []
  cost_per_million_input_token: 0.4
  cost_per_million_output_token: 1.6
Explanation of Configuration Parameters
The configuration parameters are used by pipelines in Pyris to select the appropriate model for a given task.
- id: Unique identifier for the model across all models.
- name: A human-readable name for the model.
- description: Additional information about the model.
- type: The model type used to select the appropriate client (e.g., openai_chat, azure_chat, ollama).
- model: The official name of the model as used by the vendor. This value is also used for model selection inside Pyris (e.g., gpt-4.1 or gpt-4.1-mini).
- api_key: The API key for the model.
- endpoint: The URL used to connect to the model (if required by the provider).
- api_version: The API version to use with the model (provider specific).
- azure_deployment: The deployment name of the model on Azure.
- tools: Tools supported by the model.
- cost_per_million_input_token / cost_per_million_output_token: Pricing information used for routing when multiple models satisfy the same requirements.
Most existing pipelines currently require the full GPT-4.1 model family to be configured. Monitor Pyris logs for warnings about missing models so you can update your llm_config.local.yml accordingly.
4. Run the Server
Start the Pyris server:
APPLICATION_YML_PATH=./application.local.yml \
LLM_CONFIG_PATH=./llm_config.local.yml \
uvicorn app.main:app --reload
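The two environment variables tell the server where to find its configuration files. A small sketch of how such env-var lookups with fallbacks typically work (the default file names below are the local ones from this guide, used here as an assumption for illustration, not necessarily Pyris's built-in defaults):

```python
def resolve_config_paths(env: dict[str, str]) -> tuple[str, str]:
    """Resolve configuration file paths from environment variables,
    falling back to conventional local file names (assumed defaults)."""
    app_yml = env.get("APPLICATION_YML_PATH", "./application.local.yml")
    llm_yml = env.get("LLM_CONFIG_PATH", "./llm_config.local.yml")
    return app_yml, llm_yml

print(resolve_config_paths({}))  # falls back to the local file names
print(resolve_config_paths({"LLM_CONFIG_PATH": "/etc/pyris/llm.yml"}))
```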
5. Access API Documentation
Open your browser and navigate to http://localhost:8000/docs to access the interactive API documentation.
Using Docker
Prerequisites
- Ensure Docker and Docker Compose are installed on your machine.
- Clone the EduTelligence repository to your local machine.
- Create the necessary configuration files as described in the previous section.
Docker Compose Files
- Development: docker/pyris-dev.yml
- Production with Nginx: docker/pyris-production.yml
- Production without Nginx: docker/pyris-production-internal.yml
Setup Instructions
1. Running the Containers
You can run Pyris in different environments: development or production.
Development Environment
- Start the Containers

docker-compose -f docker/pyris-dev.yml up --build

This command builds the Pyris application, starts Pyris and Weaviate in development mode, and mounts local configuration files for easy modification.

- Access the Application
  - Application URL: http://localhost:8000
  - API Docs: http://localhost:8000/docs
Production Environment
Option 1: With Nginx
- Prepare SSL Certificates
- Place your SSL certificate (fullchain.pem) and private key (priv_key.pem) in the specified paths, or update the paths in the Docker Compose file.
- Start the Containers
docker-compose -f docker/pyris-production.yml up -d
- Pulls the latest Pyris image.
- Starts Pyris, Weaviate, and Nginx.
- Nginx handles SSL termination and reverse proxying.
- Access the Application
  - Application URL: https://your-domain.com
Option 2: Without Nginx
- Start the Containers
docker-compose -f docker/pyris-production-internal.yml up -d
- Pulls the latest Pyris image.
- Starts Pyris and Weaviate.
- Access the Application
  - Application URL: http://localhost:8000
2. Managing the Containers
- Stop the Containers
docker-compose -f <compose-file> down
Replace <compose-file> with the appropriate Docker Compose file.
- View Logs
docker-compose -f <compose-file> logs -f <service-name>
Example:
docker-compose -f docker/pyris-dev.yml logs -f pyris-app
- Rebuild Containers
If you've made changes to the code or configurations:
docker-compose -f <compose-file> up --build
3. Customizing Configuration
- Environment Variables
You can customize settings using environment variables:
- PYRIS_DOCKER_TAG: Specifies the Pyris Docker image tag.
- PYRIS_APPLICATION_YML_FILE: Path to your application.yml file.
- PYRIS_LLM_CONFIG_YML_FILE: Path to your llm_config.yml file.
- PYRIS_PORT: Host port for the Pyris application (default is 8000).
- WEAVIATE_PORT: Host port for the Weaviate REST API (default is 8001).
- WEAVIATE_GRPC_PORT: Host port for the Weaviate gRPC interface (default is 50051).
- Configuration Files
Modify configuration files as needed:
- Pyris Configuration: Update application.yml and llm_config.yml.
- Weaviate Configuration: Adjust settings in weaviate.yml.
- Nginx Configuration: Modify Nginx settings in nginx.yml and related config files.
Troubleshooting
- Port Conflicts
If you encounter port conflicts, change the host ports using environment variables:
export PYRIS_PORT=8080
- Permission Issues
Ensure you have the necessary permissions for files and directories, especially for SSL certificates.
- Docker Resources
If services fail to start, ensure Docker has sufficient resources allocated.
Conclusion
With Artemis configured to communicate with Pyris and Pyris deployed locally or via Docker, Iris is ready to support your courses.
Athena Service
The semi-automatic text assessment relies on the Athena service, which is part of the EduTelligence suite. To enable automatic text assessments, special configuration is required:
Athena is now part of the EduTelligence suite. Please check the compatibility matrix to ensure you're using compatible versions of Artemis and EduTelligence.
Enable the athena Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,athena
Configure API Endpoints
The Athena service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:
artemis:
  # ...
  athena:
    url: http://localhost:5100
    secret: abcdef12345
    modules:
      # See https://github.com/ls1intum/edutelligence/tree/main/athena for a list of available modules
      text: module_text_cofee
      programming: module_programming_themisml
The secret can be any string. For more detailed instructions on how to set it up in Athena, refer to the Athena documentation.
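Artemis resolves which Athena module to call from the exercise type, using the modules block above. A rough illustration of that lookup (a hypothetical helper, not actual Artemis code, reusing the module names from the configuration):

```python
def athena_module_for(exercise_type: str, modules: dict[str, str]) -> str:
    """Return the configured Athena module for an exercise type;
    raise a clear error if none is configured for that type."""
    try:
        return modules[exercise_type]
    except KeyError:
        raise ValueError(
            f"No Athena module configured for {exercise_type!r} exercises"
        ) from None

# Mirrors the `modules:` section of application-artemis.yml above:
modules = {"text": "module_text_cofee", "programming": "module_programming_themisml"}
print(athena_module_for("text", modules))  # module_text_cofee
```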
Nebula Setup Guide
Nebula is part of the EduTelligence suite. For a successful integration, ensure that the versions you deploy remain compatible with your Artemis release by consulting the EduTelligence compatibility matrix.
Overview
Nebula provides AI-powered processing pipelines for lecture videos. Artemis currently integrates with the Transcriber service to automatically generate lecture transcripts (including slide-number alignment) for attachment video units.
Artemis Configuration
Prerequisites
- A running Artemis instance with the scheduling Spring profile enabled. The scheduler polls Nebula every 30 seconds for finished jobs.
- A shared secret that will be used both in Artemis and in the Nebula gateway.
- A running instance of GoCast (TUM-Live) where the lecture recordings are hosted in a public course.
Enable the Nebula module
Set the Nebula toggle, URL, and shared secret in your Artemis configuration. The secret you specify here must match the API key that Nebula expects in the Authorization header.
artemis:
  nebula:
    enabled: true
    url: https://nebula.example.com
    secret-token: your-shared-secret
  tum-live:
    # Ensure this URL points to your GoCast (TUM-Live) instance. Add the /api/v2 suffix.
    api-base-url: https://api.tum.live.example.com/api/v2
Artemis uses server.url internally when contacting Nebula. Make sure the property reflects the external URL of your Artemis instance.
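Artemis authenticates against the Nebula gateway by sending the shared secret as a plain Authorization header (no Bearer prefix, as the curl examples later in this guide show). The request can be sketched like this (built with urllib for illustration only; Artemis actually uses its own Java HTTP client, and the payload fields here are assumptions):

```python
import json
import urllib.request

def build_start_request(nebula_url: str, secret: str,
                        payload: dict) -> urllib.request.Request:
    """Build (but do not send) the POST /transcribe/start request that
    Artemis issues. The payload field names are illustrative."""
    return urllib.request.Request(
        url=f"{nebula_url.rstrip('/')}/transcribe/start",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": secret,  # plain shared secret, no "Bearer" prefix
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_start_request(
    "https://nebula.example.com",
    "your-shared-secret",
    {"videoUrl": "https://api.tum.live.example.com/..."},  # illustrative payload
)
print(req.full_url)                     # https://nebula.example.com/transcribe/start
print(req.get_header("Authorization"))  # your-shared-secret
```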
Starting transcriptions inside Artemis
After Nebula is configured, instructors can launch transcriptions from the lecture unit editor. See the lecture management guide for the detailed, instructor-facing workflow.
Nebula Service Deployment
Prerequisites
- Python 3.12
- Poetry
- FFmpeg available in PATH
- Docker and Docker Compose if you plan to run the API gateway or production deployment
Local development workflow
The EduTelligence repository provides step-by-step instructions for running Nebula locally. Follow the Quick Start for Developers section and start only the Transcriber Service (skip the FAQ service). When configuring the Nginx gateway, make sure the map block includes the same shared secret you configured in Artemis. Once running, the relevant health checks are at:
curl http://localhost:3007/transcribe/health
curl http://localhost:3007/health
Production deployment
For production deployments, follow the Production Deployment guide in the Nebula README and provision only the transcriber container plus the nginx gateway. Use distinct configuration files (.env, llm_config.production.yml, nginx.production.conf), keep Whisper / GPT-4o credentials secure, and omit the FAQ service unless you plan to experiment with rewriting features.
After the stack is up, verify that the gateway responds:
curl https://nebula.example.com/health
curl -H "Authorization: your-shared-secret" https://nebula.example.com/transcribe/health
Connecting Artemis and Nebula
When both sides are configured:
- Artemis sends transcription jobs to POST /transcribe/start with the shared secret.
- Nebula processes the job asynchronously and exposes status updates via GET /transcribe/status/{jobId}.
- The Artemis scheduler polls Nebula every 30 seconds and persists completed transcripts on the associated lecture unit.
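The polling behaviour on the Artemis side can be sketched as follows (a simulation with an injected status function standing in for GET /transcribe/status/{jobId}; the state names are illustrative, and the real scheduler runs inside Artemis every 30 seconds rather than in a tight loop):

```python
def poll_until_done(get_status, max_polls: int) -> tuple[str, int]:
    """Poll a job's status until it reaches a terminal state or the poll
    budget is exhausted. Returns the final state and the number of polls."""
    for polls in range(1, max_polls + 1):
        status = get_status()  # stands in for GET /transcribe/status/{jobId}
        if status in ("COMPLETED", "FAILED"):
            return status, polls
    return "TIMED_OUT", max_polls

# Simulate a job that finishes on the third status check:
responses = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(poll_until_done(lambda: next(responses), max_polls=10))  # ('COMPLETED', 3)
```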
If Artemis reports unauthorized or internal errors when starting a job, double-check that:
- artemis.nebula.enabled is set to true
- The artemis.nebula.url matches the Nginx gateway URL (including scheme)
- The Artemis secret token equals the key configured in Nginx (map $http_authorization $api_key_valid)
- Nebula can reach its configured Whisper and GPT-4o endpoints (inspect the transcriber logs for HTTP 401/429/500 responses)
With those pieces in place, instructors can automatically transcribe lecture recordings stored on TUM-Live without manual copy/paste workflows.
Other Extension Services
Text Assessment Analytics Service
Text Assessment Analytics is an internal analytics service used to gather data regarding the features of the text assessment process. Certain assessment events are tracked:
- Adding new feedback on a manually selected block
- Adding new feedback on an automatically selected block
- Deleting a feedback
- Clicking to resolve feedback conflicts
- Clicking to view origin submission of automatically generated feedback
- Hovering over the text assessment feedback impact warning
- Editing/Discarding an automatically generated feedback
- Clicking the Submit button when assessing a text submission
- Clicking the Assess Next button when assessing a text submission
These events are tracked by attaching a POST call to the respective DOM elements on the client side. The POST call accesses the TextAssessmentEventResource which then adds the events in its respective table. This feature is disabled by default. We can enable it by modifying the configuration in the file: src/main/resources/config/application-artemis.yml like so:
info:
  textAssessmentAnalyticsEnabled: true
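Each tracked interaction is sent as a POST to the TextAssessmentEventResource. A sketch of what such an event payload could look like (the field names and event-type string below are assumptions for illustration, not the actual Artemis schema):

```python
import json
from datetime import datetime, timezone

def build_assessment_event(event_type: str, user_id: int,
                           submission_id: int) -> str:
    """Serialize a text assessment analytics event. Field names are
    illustrative assumptions, not the real TextAssessmentEventResource schema."""
    event = {
        "eventType": event_type,
        "userId": user_id,
        "submissionId": submission_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)

payload = build_assessment_event("ASSESS_NEXT_BUTTON_CLICKED", 42, 1001)
evt = json.loads(payload)
print(evt["eventType"])  # ASSESS_NEXT_BUTTON_CLICKED
```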
Apollon Service
Apollon Converter is needed to convert models from their JSON representation to PDF. Special configuration is required:
Enable the apollon Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,apollon
Configure API Endpoints
The Apollon conversion service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:
apollon:
  conversion-service-url: http://localhost:8080
Hermes Service
Push notifications for the mobile Android and iOS clients rely on the Hermes service. To enable push notifications the Hermes service needs to be started separately and the configuration of the Artemis instance must be extended.
Configure and start Hermes
To run Hermes, you need to clone the repository and replace the placeholders within the docker-compose file.
The following environment variables need to be updated for push notifications to Apple devices:
- APNS_CERTIFICATE_PATH: String - Path to the APNs certificate .p12 file as described here
- APNS_CERTIFICATE_PWD: String - The APNs certificate password
- APNS_PROD_ENVIRONMENT: Bool - True if it should use the production APNs server (default false)
Furthermore, the <APNS_Key>.p12 file needs to be mounted into the Docker container at the path specified above.
To run the services for Android support the following environment variable is required:
GOOGLE_APPLICATION_CREDENTIALS: String - Path to the firebase.json
Furthermore, the firebase.json file needs to be mounted into the Docker container at the path specified above.
To run both APNS and Firebase, configure the environment variables for both.
To start Hermes, run the docker compose up command in the folder where the docker-compose file is located.
Artemis Configuration
The Hermes service is running on a dedicated machine and is addressed via HTTPS. We need to extend the Artemis configuration in the file src/main/resources/config/application-artemis.yml like:
artemis:
  # ...
  push-notification-relay: <url>
Hyperion Service
This documentation has been migrated to the new documentation system. Please refer to the Hyperion Setup Guide.
Aeolus Service
Aeolus is a service that provides a REST API for the Artemis platform to generate custom build plans for programming exercises. It is designed to be used in combination with the Artemis platform to provide build plans in multiple CI systems, currently Jenkins and LocalCI.
This section outlines how to set up Aeolus in your own Artemis instance.
Prerequisites
- Ensure you have a running instance of Artemis.
- Set up a running instance of Aeolus. See the Aeolus documentation for more information.
Enable the aeolus Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,aeolus
Configure the Aeolus Endpoint
The Aeolus service can run on a dedicated machine since Artemis accesses it via a REST API call. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml to include the Aeolus endpoint. How to do this is described in the configuration documentation for Aeolus.
Setup Guide for Exchange with the Sharing Platform
Background
Sharing Platform is an open platform for sharing teaching materials related to programming. It is operated by the University of Innsbruck. While primarily designed as an open exchange platform, it also provides features such as private group exchanges and the ability to restrict public access to certain content, such as the solution repository of an Artemis exercise.
For more details, visit the help menu of the sharing platform.
To facilitate the exchange of programming exercises among instructors, the sharing platform offers a connector to Artemis, enabling any Artemis instance to integrate with the platform for seamless sharing.
The Sharing Platform is open source. The source code can be found at https://sharing-codeability.uibk.ac.at/development/sharing/codeability-sharing-platform.
Prerequisites
To connect to the sharing platform, you need an API key. To request one, contact the platform maintainers at artemis-support-informatik@uibk.ac.at and provide the URL of your active Artemis instance.
Important: Sharing only works if your Artemis instance is accessible on the internet. If making your instance publicly available is not an option, the maintainers can provide a list of required Artemis URLs that must be accessible to the sharing platform.
Configuration
Once you receive your API key, add it to the configuration file application-artemis.yml or to your .env file:
Option 1: application-artemis.yml
artemis:
  sharing:
    enabled: true
    # Shared common secret
    apikey: <your API Key>
    serverurl: https://search.sharing-codeability.uibk.ac.at/
    actionname: Export to Artemis@myUniversity
Option 2: .env file for Docker initialization
ARTEMIS_SHARING_ENABLED=true
ARTEMIS_SHARING_SERVERURL=https://search.sharing-codeability.uibk.ac.at/
ARTEMIS_SHARING_APIKEY=<Enter your API Key here>
ARTEMIS_SHARING_ACTIONNAME=Export to Artemis@<Enter an ID here>
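The .env variables map one-to-one onto the YAML keys from Option 1. That correspondence can be sketched as follows (an illustrative mapping, not actual Artemis code):

```python
def sharing_config_from_env(env: dict[str, str]) -> dict:
    """Translate the ARTEMIS_SHARING_* environment variables into the
    nested structure of application-artemis.yml (mapping shown for
    illustration; Spring Boot performs this binding automatically)."""
    return {
        "artemis": {
            "sharing": {
                "enabled": env.get("ARTEMIS_SHARING_ENABLED", "false") == "true",
                "serverurl": env.get("ARTEMIS_SHARING_SERVERURL", ""),
                "apikey": env.get("ARTEMIS_SHARING_APIKEY", ""),
                "actionname": env.get("ARTEMIS_SHARING_ACTIONNAME", ""),
            }
        }
    }

cfg = sharing_config_from_env({
    "ARTEMIS_SHARING_ENABLED": "true",
    "ARTEMIS_SHARING_SERVERURL": "https://search.sharing-codeability.uibk.ac.at/",
})
print(cfg["artemis"]["sharing"]["enabled"])  # True
```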
Once configured, restart your Artemis instance.
Instructor Access Requirements
For instructors to exchange programming exercises, they need an account on the sharing platform. They can register using one of the following methods:
- EduID Authentication: The simplest way is through EduID (Austria) or EduID (Germany). Forward the necessary connection details to the sharing platform maintainers.
- GitLab-Based Registration: If EduID is not an option, users can register via the sharing platform's GitLab instance. However, for security reasons, self-registration is restricted to certain email domains. To enable access, forward the desired domains to the maintainers for approval.
Troubleshooting
To assist in troubleshooting, the sharing profile includes an additional health indicator, accessible via the Administration -> Health menu.
Under Details, you will typically find the following entries:
- The first entry is an initialization request sent after startup.
- The second entry reflects the subsequent receipt of the connector configuration from the sharing platform.
- Additional entries represent regular configuration polling requests from the sharing platform.
The Details log stores the last 10 entries.
If the health status is not up, check the error message in the details. If the issue is unclear, feel free to contact the sharing platform maintainers for support.
Conclusion
Once everything is set up correctly, the corresponding export and import actions become visible in Artemis and on the sharing platform (the original documentation illustrates this with screenshots). Before testing the import and export functionality, refer to the user documentation for further details.

