Setup of Extension Services
This section describes the required setup for various extension services that can be used to activate additional functionality in your Artemis instance.
EduTelligence Suite
EduTelligence is a comprehensive suite of AI-powered microservices designed to enhance Artemis with intelligent features. Several of the AI-powered services that integrate with Artemis, described below, are part of this unified suite.
Compatibility: EduTelligence maintains compatibility with different versions of Artemis. Please refer to the compatibility matrix to ensure you're using compatible versions for optimal integration and functionality.
Repository: https://github.com/ls1intum/edutelligence
Iris & Pyris Setup Guide
Pyris is part of the EduTelligence suite. Always check the compatibility matrix to ensure your Artemis version matches the EduTelligence version you deploy.
Overview
Iris is an intelligent virtual tutor integrated into Artemis, providing one-on-one programming assistance, course content support, and competency generation for students. It relies on Pyris, a FastAPI service from the EduTelligence suite that brokers requests between Artemis and various Large Language Models (LLMs).
This guide covers everything needed to get Iris running: configuring Artemis, deploying Pyris, and connecting both.
Step 1: Configure Artemis
1.1 Enable Iris
In your application-artemis.yml, enable Iris and point it at your Pyris instance:
artemis:
  iris:
    enabled: true
    url: https://pyris.your-domain.com # Or http://localhost:8000 for local development
    secret-token: your-shared-secret # Must match the token configured in Pyris
The secret-token value must exactly match one of the tokens under api_keys in the Pyris application.yml. If they don't match, Artemis will receive 401 errors when calling Pyris.
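Conceptually, the check on the Pyris side is just a comparison of the presented token against the configured api_keys. A minimal Python sketch of that idea (illustrative only, not Pyris's actual code; the constant-time comparison is our choice):

```python
import hmac

# Mirrors the api_keys list in the Pyris application.yml.
ACCEPTED_TOKENS = ["your-shared-secret"]

def is_authorized(presented_token: str) -> bool:
    """Return True if the token sent by Artemis matches a configured key.

    hmac.compare_digest avoids timing side channels when comparing secrets.
    """
    return any(
        hmac.compare_digest(presented_token, token) for token in ACCEPTED_TOKENS
    )

print(is_authorized("your-shared-secret"))  # True  -> request accepted
print(is_authorized("wrong-secret"))        # False -> Artemis would see a 401
```

If the two sides disagree, every Iris request fails with 401 before reaching any pipeline, so this is the first thing to verify.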
1.2 Optional: Configure rate limiting
artemis:
  iris:
    ratelimit:
      default-limit: 100 # Max requests per user (-1 for unlimited)
      default-timeframe-hours: 3 # Time window for the limit
Step 2: Deploy Pyris
Choose one of the deployment options below depending on your environment.
Option A: Docker (Recommended for Production)
This is the recommended approach for production deployments. All Docker Compose files are located in the iris/docker/ directory of the EduTelligence repository.
1. Clone the repository
git clone https://github.com/ls1intum/edutelligence.git
cd edutelligence/iris
2. Create configuration files
Create two configuration files on your server. You can use the provided examples as a starting point:
cp application.example.yml application.yml
cp llm_config.example.yml llm_config.yml
Edit application.yml:
api_keys:
  - token: "your-shared-secret" # Must match artemis.iris.secret-token
weaviate:
  host: "weaviate" # Docker service name (not localhost)
  port: "8001"
  grpc_port: "50051"
env_vars:
When running inside Docker, set weaviate.host to "weaviate" (the Docker service name), not "localhost". The containers communicate over an internal Docker network.
Edit llm_config.yml with your model configurations. Pyris needs three types of models:
- Chat models (required) — powers all Iris conversations
- Embedding models (required) — for RAG-based retrieval of course content
- Reranker models (optional) — improves retrieval quality
The examples below use Azure OpenAI, which is recommended for production (GDPR-compliant). For local development, you can also use direct OpenAI entries — see llm_config.example.yml for examples.
Azure OpenAI chat model example:
- id: "azure-gpt-5-mini"
  name: "GPT 5 Mini"
  description: "GPT 5 Mini on Azure"
  type: "azure_chat"
  endpoint: "<your_azure_endpoint>"
  api_version: "2025-04-01-preview"
  azure_deployment: "gpt-5-mini"
  model: "gpt-5-mini"
  api_key: "<your_azure_api_key>"
  cost_per_million_input_token: 0.4
  cost_per_million_output_token: 1.6
Azure embedding model example:
- id: "azure-embedding-large"
  name: "Embedding Large"
  description: "Embedding Large 8k Azure"
  type: "azure_embedding"
  endpoint: "<your_azure_endpoint>"
  api_version: "2023-05-15"
  azure_deployment: "te-3-large"
  model: "text-embedding-3-large"
  api_key: "<your_azure_api_key>"
  cost_per_million_input_token: 0.13
See llm_config.example.yml for the complete list of model types, including OpenAI chat models, OpenAI embeddings, and Cohere reranker.
Configuration parameter reference:
- id: Unique identifier across all configured models.
- model: Official model name as used by the vendor (e.g. gpt-5-mini, text-embedding-3-large). Used internally by Pyris for model selection.
- type: Selects the appropriate client. Supported values: openai_chat, azure_chat, ollama, openai_embedding, azure_embedding, cohere_azure.
- api_key: API key for the model provider.
- endpoint: Provider URL (required for Azure and Cohere).
- api_version: API version (Azure only).
- azure_deployment: Deployment name (Azure only).
- tools: Tools supported by the model (usually []).
- cost_per_million_input_token / cost_per_million_output_token: Pricing info used for model routing decisions.
Most Iris pipelines require the GPT-5 model family (gpt-5.2, gpt-5-mini, gpt-5-nano) plus at least one embedding model. Monitor Pyris logs for warnings about missing models.
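When hand-editing llm_config.yml it is easy to drop a required field. A small sanity check along these lines can catch that before startup (the per-type required fields below are inferred from the parameter reference above, not taken from Pyris's own validation):

```python
# Required fields per model type, inferred from the parameter reference above.
COMMON = {"id", "name", "description", "type", "model", "api_key"}
EXTRA_BY_TYPE = {
    "openai_chat": set(),
    "openai_embedding": set(),
    "ollama": set(),
    "azure_chat": {"endpoint", "api_version", "azure_deployment"},
    "azure_embedding": {"endpoint", "api_version", "azure_deployment"},
    "cohere_azure": {"endpoint"},
}

def missing_fields(entry: dict) -> set:
    """Return the set of required keys missing from one llm_config entry."""
    required = COMMON | EXTRA_BY_TYPE.get(entry.get("type", ""), set())
    return required - entry.keys()

entry = {
    "id": "azure-gpt-5-mini",
    "name": "GPT 5 Mini",
    "description": "GPT 5 Mini on Azure",
    "type": "azure_chat",
    "model": "gpt-5-mini",
    "api_key": "<your_azure_api_key>",
    "endpoint": "<your_azure_endpoint>",
}
print(sorted(missing_fields(entry)))  # ['api_version', 'azure_deployment']
```

Loading the YAML with a parser and running each entry through such a check turns a vague "No model found" at runtime into an explicit error before deployment.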
3. Choose a Docker Compose profile and start
| File | Use case |
|---|---|
| docker/pyris-production.yml | Production with Nginx (SSL termination, reverse proxy) |
| docker/pyris-production-internal.yml | Production without Nginx (e.g. behind an existing reverse proxy) |
| docker/pyris-dev.yml | Local development (builds from source) |
Production with Nginx (handles SSL termination):
PYRIS_DOCKER_TAG=latest \
PYRIS_APPLICATION_YML_FILE=$(pwd)/application.yml \
PYRIS_LLM_CONFIG_YML_FILE=$(pwd)/llm_config.yml \
NGINX_PROXY_SSL_CERTIFICATE_PATH=/path/to/fullchain.pem \
NGINX_PROXY_SSL_CERTIFICATE_KEY_PATH=/path/to/priv_key.pem \
docker compose -f docker/pyris-production.yml up -d
Production without Nginx (direct access on configurable port):
PYRIS_DOCKER_TAG=latest \
PYRIS_APPLICATION_YML_FILE=$(pwd)/application.yml \
PYRIS_LLM_CONFIG_YML_FILE=$(pwd)/llm_config.yml \
PYRIS_PORT=8000 \
docker compose -f docker/pyris-production-internal.yml up -d
Instead of passing environment variables inline, you can create a docker.env file and use --env-file docker.env with docker compose. This is the recommended approach for production servers.
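For example, a docker.env for the Nginx profile might look like the following (all paths are placeholders you must adapt to your server):

```shell
PYRIS_DOCKER_TAG=latest
PYRIS_APPLICATION_YML_FILE=/opt/pyris/application.yml
PYRIS_LLM_CONFIG_YML_FILE=/opt/pyris/llm_config.yml
NGINX_PROXY_SSL_CERTIFICATE_PATH=/etc/ssl/pyris/fullchain.pem
NGINX_PROXY_SSL_CERTIFICATE_KEY_PATH=/etc/ssl/pyris/priv_key.pem
```

Then start the stack with: docker compose -f docker/pyris-production.yml --env-file docker.env up -d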
Environment variable reference:
| Variable | Default | Description |
|---|---|---|
| PYRIS_DOCKER_TAG | latest | Docker image tag (e.g. latest, pr-123, a branch name) |
| PYRIS_APPLICATION_YML_FILE | — | Absolute path to your application.yml |
| PYRIS_LLM_CONFIG_YML_FILE | — | Absolute path to your llm_config.yml |
| PYRIS_PORT | 8000 | Host port for Pyris (production-internal only) |
| WEAVIATE_PORT | 8001 | Host port for Weaviate REST API |
| WEAVIATE_GRPC_PORT | 50051 | Host port for Weaviate gRPC |
| WEAVIATE_VOLUME_MOUNT | .docker-data/weaviate-data | Host path for Weaviate data persistence |
| NGINX_PROXY_SSL_CERTIFICATE_PATH | — | Path to SSL certificate (production with Nginx only) |
| NGINX_PROXY_SSL_CERTIFICATE_KEY_PATH | — | Path to SSL private key (production with Nginx only) |
4. Verify the deployment
# Check Pyris health (with Nginx profile)
curl https://pyris.your-domain.com/api/v1/health
# Check Pyris health (production-internal profile)
curl http://localhost:${PYRIS_PORT:-8000}/api/v1/health
# Check Weaviate health (from the server, if ports are not exposed externally)
curl http://localhost:${WEAVIATE_PORT:-8001}/v1/.well-known/ready
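For deployment automation, the checks above can be wrapped in a small retry loop. A generic Python sketch (the actual probe is injected, so the same helper works for the Pyris and Weaviate endpoints; the retry logic is ours, not part of Pyris):

```python
import time

def wait_until_ready(check, timeout=60.0, interval=2.0,
                     sleep=time.sleep, clock=time.monotonic):
    """Call `check()` until it returns True or `timeout` seconds elapse.

    `check` would typically issue one of the curl requests above
    (e.g. via urllib.request) and return True on an HTTP 200.
    """
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Deterministic demo: a service that becomes ready on the third probe.
probes = iter([False, False, True])
print(wait_until_ready(lambda: next(probes), timeout=10, interval=0,
                       sleep=lambda _: None))  # True
```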
Managing containers:
# View logs
docker compose -f docker/pyris-production.yml logs -f pyris-app
# Stop
docker compose -f docker/pyris-production.yml down
# Pull new image and restart
PYRIS_DOCKER_TAG=latest docker compose -f docker/pyris-production.yml up -d --pull always
Option B: Local Development Setup (without Docker)
Use this approach when developing Pyris itself or when you want to run it directly on your machine.
Prerequisites: Python 3.12, Poetry, Docker (for Weaviate only).
1. Clone and install
git clone https://github.com/ls1intum/edutelligence.git
cd edutelligence/iris
poetry install
2. Start Weaviate
Pyris requires a Weaviate vector database for RAG. Start it using the provided Docker Compose file:
docker compose -f docker/weaviate.yml up -d
Verify: curl http://localhost:8001/v1/.well-known/ready
3. Create configuration files
cp application.example.yml application.local.yml
cp llm_config.example.yml llm_config.local.yml
Edit application.local.yml:
api_keys:
  - token: "secret" # Must match artemis.iris.secret-token
weaviate:
  host: "localhost" # localhost for non-Docker setup
  port: "8001"
  grpc_port: "50051"
env_vars:
Edit llm_config.local.yml with your API keys. See the model configuration examples above or refer to llm_config.example.yml.
4. Run the server
APPLICATION_YML_PATH=./application.local.yml \
LLM_CONFIG_PATH=./llm_config.local.yml \
poetry run uvicorn iris.main:app --reload
The server starts at http://localhost:8000. API docs are at http://localhost:8000/docs.
Step 3: Verify End-to-End
Once both Artemis and Pyris are running:
- Open Artemis and navigate to a programming exercise.
- Open the Iris chat panel.
- Send a test message — Iris should respond via Pyris.
If Iris does not respond, check:
- Artemis logs for HTTP errors when contacting Pyris (401 = secret mismatch, connection refused = wrong URL).
- Pyris logs for model errors (No model found for ... = missing model in llm_config.yml).
- That the secret-token in Artemis matches the api_keys token in Pyris exactly.
- That Weaviate is running and reachable from Pyris.
Troubleshooting
- Port conflicts: Change host ports via environment variables (PYRIS_PORT, WEAVIATE_PORT).
- Permission issues: Ensure correct permissions on SSL certificates and config files.
- Docker resources: Ensure Docker has at least 4 GB of memory allocated.
- Missing models: Check Pyris logs for No model found for ... warnings and add the missing model to llm_config.yml.
Athena Service
The semi-automatic text assessment relies on the Athena service, which is part of the EduTelligence suite. To enable automatic text assessments, special configuration is required:
Athena is now part of the EduTelligence suite. Please check the compatibility matrix to ensure you're using compatible versions of Artemis and EduTelligence.
Enable the athena Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,athena
Configure API Endpoints
The Athena service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:
artemis:
  # ...
  athena:
    url: http://localhost:5100
    secret: abcdef12345
    modules:
      # See https://github.com/ls1intum/edutelligence/tree/main/athena for a list of available modules
      text: module_text_cofee
      programming: module_programming_themisml
The secret can be any string. For more detailed instructions on how to set it up in Athena, refer to the Athena documentation.
Nebula Setup Guide
Nebula is part of the EduTelligence suite. For a successful integration, ensure that the versions you deploy remain compatible with your Artemis release by consulting the EduTelligence compatibility matrix.
Overview
Nebula provides AI-powered processing pipelines for lecture videos. Artemis currently integrates with the Transcriber service to automatically generate lecture transcripts (including slide-number alignment) for attachment video units.
Artemis Configuration
Prerequisites
- A running Artemis instance with the scheduling Spring profile enabled. The scheduler polls Nebula every 30 seconds for finished jobs.
- A shared secret that will be used both in Artemis and in the Nebula gateway.
- A running instance of GoCast (TUM-Live) where the lecture recordings are hosted in a public course.
Enable the Nebula module
Set the Nebula toggle, URL, and shared secret in your Artemis configuration. The secret you specify here must match the API key that Nebula expects in the Authorization header.
artemis:
  nebula:
    enabled: true
    url: https://nebula.example.com
    secret-token: your-shared-secret
  tum-live:
    # Ensure this URL points to your GoCast (TUM-Live) instance. Add the /api/v2 suffix.
    api-base-url: https://api.tum.live.example.com/api/v2
Artemis uses server.url internally when contacting Nebula. Make sure the property reflects the external URL of your Artemis instance.
Starting transcriptions inside Artemis
After Nebula is configured, instructors can launch transcriptions from the lecture unit editor. See the lecture management guide for the detailed, instructor-facing workflow.
Nebula Service Deployment
Prerequisites
- Python 3.12
- Poetry
- FFmpeg available in PATH
- Docker and Docker Compose if you plan to run the API gateway or production deployment
Local development workflow
The EduTelligence repository provides step-by-step instructions for running Nebula locally. Follow the Quick Start for Developers section and start only the Transcriber Service (skip the FAQ service). When configuring the Nginx gateway, make sure the map block includes the same shared secret you configured in Artemis. Once running, the relevant health checks are at:
curl http://localhost:3007/transcribe/health
curl http://localhost:3007/health
Production deployment
For production deployments, follow the Production Deployment guide in the Nebula README and provision only the transcriber container plus the nginx gateway. Use distinct configuration files (.env, llm_config.production.yml, nginx.production.conf), keep Whisper / GPT-5 credentials secure, and omit the FAQ service unless you plan to experiment with rewriting features.
After the stack is up, verify that the gateway responds:
curl https://nebula.example.com/health
curl -H "Authorization: your-shared-secret" https://nebula.example.com/transcribe/health
Connecting Artemis and Nebula
When both sides are configured:
- Artemis sends transcription jobs to POST /transcribe/start with the shared secret.
- Nebula processes the job asynchronously and exposes status updates via GET /transcribe/status/{jobId}.
- The Artemis scheduler polls Nebula every 30 seconds and persists completed transcripts on the associated lecture unit.
If Artemis reports unauthorized or internal errors when starting a job, double-check that:
- artemis.nebula.enabled=true is set
- The artemis.nebula.url matches the Nginx gateway URL (including scheme)
- The Artemis secret token equals the key configured in Nginx (map $http_authorization $api_key_valid)
- Nebula can reach its configured Whisper and GPT-5 endpoints (inspect the transcriber logs for HTTP 401/429/500 responses)
With those pieces in place, instructors can automatically transcribe lecture recordings stored on TUM-Live without manual copy/paste workflows.
Other Extension Services
Text Assessment Analytics Service
Text Assessment Analytics is an internal analytics service used to gather data regarding the features of the text assessment process. Certain assessment events are tracked:
- Adding new feedback on a manually selected block
- Adding new feedback on an automatically selected block
- Deleting a feedback
- Clicking to resolve feedback conflicts
- Clicking to view origin submission of automatically generated feedback
- Hovering over the text assessment feedback impact warning
- Editing/Discarding an automatically generated feedback
- Clicking the Submit button when assessing a text submission
- Clicking the Assess Next button when assessing a text submission
These events are tracked by attaching a POST call to the respective DOM elements on the client side. The POST call accesses the TextAssessmentEventResource which then adds the events in its respective table. This feature is disabled by default. We can enable it by modifying the configuration in the file: src/main/resources/config/application-artemis.yml like so:
info:
  textAssessmentAnalyticsEnabled: true
Apollon Service
Apollon Converter is needed to convert models from their JSON representation to PDF. Special configuration is required:
Enable the apollon Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,apollon
Configure API Endpoints
The Apollon conversion service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:
apollon:
  conversion-service-url: http://localhost:8080
Hermes Service
Push notifications for the mobile Android and iOS clients rely on the Hermes service. To enable push notifications the Hermes service needs to be started separately and the configuration of the Artemis instance must be extended.
Configure and start Hermes
To run Hermes, you need to clone the repository and replace the placeholders within the docker-compose file.
The following environment variables need to be updated for push notifications to Apple devices:
- APNS_CERTIFICATE_PATH: String - Path to the APNs certificate .p12 file as described here
- APNS_CERTIFICATE_PWD: String - The APNs certificate password
- APNS_PROD_ENVIRONMENT: Bool - True if it should use the production APNs server (default: false)
Furthermore, the <APNS_Key>.p12 file needs to be mounted into the Docker container at the path specified above.
To run the services for Android support the following environment variable is required:
- GOOGLE_APPLICATION_CREDENTIALS: String - Path to the firebase.json
Furthermore, the firebase.json file needs to be mounted into the Docker container at the path specified above.
To run both APNS and Firebase, configure the environment variables for both.
To start Hermes, run the docker compose up command in the folder where the docker-compose file is located.
Artemis Configuration
The Hermes service is running on a dedicated machine and is addressed via HTTPS. We need to extend the Artemis configuration in the file src/main/resources/config/application-artemis.yml like so:
artemis:
  # ...
  push-notification-relay: <url>
Hyperion Service
This documentation has been migrated to the new documentation system. Please refer to the Hyperion Setup Guide.
Aeolus Service
Aeolus is a service that provides a REST API for the Artemis platform to generate custom build plans for programming exercises. It is designed to be used in combination with the Artemis platform to provide build plans in multiple CI systems, currently Jenkins and LocalCI.
This section outlines how to set up Aeolus in your own Artemis instance.
Prerequisites
- Ensure you have a running instance of Artemis.
- Set up a running instance of Aeolus. See the Aeolus documentation for more information.
Enable the aeolus Spring profile
--spring.profiles.active=dev,localci,localvc,artemis,scheduling,buildagent,core,local,aeolus
Configure the Aeolus Endpoint
The Aeolus service can run on a dedicated machine since Artemis accesses it via a REST API call. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml to include the Aeolus endpoint. How to do this is described in the configuration documentation for Aeolus.
Setup Guide for Exchange with the Sharing Platform
Background
Sharing Platform is an open platform for sharing teaching materials related to programming. It is operated by the University of Innsbruck. While primarily designed as an open exchange platform, it also provides features such as private group exchanges and the ability to restrict public access to certain content, such as the solution repository of an Artemis exercise.
For more details, visit the help menu of the sharing platform.
To facilitate the exchange of programming exercises among instructors, the sharing platform offers a connector to Artemis, enabling any Artemis instance to integrate with the platform for seamless sharing.
The Sharing Platform is open source. The source code can be found at https://sharing-codeability.uibk.ac.at/development/sharing/codeability-sharing-platform.
Prerequisites
To connect to the sharing platform, you need an API key. To request one, contact the platform maintainers at artemis-support-informatik@uibk.ac.at and provide the URL of your active Artemis instance.
Important: Sharing only works if your Artemis instance is accessible on the internet. If making your instance publicly available is not an option, the maintainers can provide a list of required Artemis URLs that must be accessible to the sharing platform.
Configuration
Once you receive your API key, add it either to the configuration file application-artemis.yml or to your .env file:
Option 1: application-artemis.yml
artemis:
  sharing:
    enabled: true
    # Shared common secret
    apikey: <your API Key>
    serverurl: https://search.sharing-codeability.uibk.ac.at/
    actionname: Export to Artemis@myUniversity
Option 2: .env file for Docker initialization
ARTEMIS_SHARING_ENABLED=true
ARTEMIS_SHARING_SERVERURL=https://search.sharing-codeability.uibk.ac.at/
ARTEMIS_SHARING_APIKEY=<Enter your API Key here>
ARTEMIS_SHARING_ACTIONNAME=Export to Artemis@<Enter an ID here>
Once configured, restart your Artemis instance.
Instructor Access Requirements
For instructors to exchange programming exercises, they need an account on the sharing platform. They can register using one of the following methods:
- EduID Authentication: The simplest way is through EduID (Austria) or EduID (Germany). Forward the necessary connection details to the sharing platform maintainers.
- GitLab-Based Registration: If EduID is not an option, users can register via the sharing platform's GitLab instance. However, for security reasons, self-registration is restricted to certain email domains. To enable access, forward the desired domains to the maintainers for approval.
Troubleshooting
To assist in troubleshooting, the sharing profile includes an additional health indicator, accessible via the Administration -> Health menu.
Under Details, you will typically find the following entries:
- The first entry is an initialization request sent after startup.
- The second entry reflects the subsequent receipt of the connector configuration from the sharing platform.
- Additional entries represent regular configuration polling requests from the sharing platform.
The Details log stores the last 10 entries.
If the health status is not up, check the error message in the details. If the issue is unclear, feel free to contact the sharing platform maintainers for support.
Conclusion
Once everything is set up correctly, the export and import actions provided by the sharing integration should be visible in Artemis and on the sharing platform.
Before testing the import and export functionality, refer to the user documentation for further details.

