E2E Testing with Playwright

Background

The Playwright test suite contains system tests verifying the most important features of Artemis. System tests test the whole system and therefore require a complete deployment of Artemis first. In order to prevent as many faults (bugs) as possible from being introduced into the develop branch, we want to execute the Playwright test suite whenever new commits are pushed to a Git branch (just like the unit and integration test suites).

Artemis uses GitHub Actions as its CI platform. E2E tests run on self-hosted runners because they require substantial compute resources, access to a PostgreSQL database, and a locally deployed Artemis instance. To keep feedback cycles short, the pipeline uses a two-phase execution model: Phase 1 runs only the tests mapped to the files changed in the pull request, and Phase 2 runs the remaining tests only when Phase 1 passes. See the CI Pipeline section for details.

Set up Playwright locally

To run the tests locally, developers need to set up Playwright on their machines. End-to-end tests exercise entire workflows; they therefore require the whole Artemis setup (database, client, and server) to be running. Playwright tests rely on the Playwright Node.js library, browser binaries, and some helper packages.

The recommended way to run E2E tests locally is the fast local runner — a single script (./run-e2e-tests-local-fast.sh) that handles database, server, client, and Playwright setup automatically. It auto-kills conflicting processes on ports 8080/9000, keeps services running between re-runs, and supports filtering individual tests.

Alternatively, you can use the Docker-based scripts in supporting_scripts/playwright (see the README there), or set up everything manually as described below. The manual setup relies on fully configuring your local Artemis instance following the server setup guide and is useful when you need IntelliJ integration for debugging.

Supporting scripts overview

The following helper scripts live in supporting_scripts/playwright and can be combined as needed:

  • runArtemisInDocker_macOS.sh / runArtemisInDocker_linux.sh: start the Artemis server/client plus database in Docker.
  • setupUsers.sh: create the Playwright test users.
  • startPlaywright.sh: run the full Playwright suite headless in the terminal.
  • startPlaywrightUI.sh: launch Playwright in UI mode for debugging.
  • prepareVSCodeForE2ETests.sh: install dependencies and patch the Playwright config so VS Code can discover and run the E2E tests directly from the Testing view.
  • checkFlakiness.sh: repeatedly run tests to investigate flaky behavior.

The three steps for setting up Playwright with these supporting scripts are:

  1. Start Artemis by running runArtemisInDocker_macOS.sh or runArtemisInDocker_linux.sh (depending on your OS). This starts the database, server, and client; keep the client running and use a separate shell for the remaining steps.
  2. Create test users by running setupUsers.sh (skip if users already exist).
  3. Run tests either headless with startPlaywright.sh, in UI mode with startPlaywrightUI.sh, or via VS Code after running prepareVSCodeForE2ETests.sh.
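For reference, a typical first-time session with these scripts might look as follows (illustrative; pick the run script for your OS):

cd supporting_scripts/playwright
./runArtemisInDocker_linux.sh   # shell 1: starts database, server, and client; keep it running

# in a second shell:
cd supporting_scripts/playwright
./setupUsers.sh                 # one-time: create the Playwright test users
./startPlaywright.sh            # headless run; use ./startPlaywrightUI.sh for UI mode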

For the fastest local development experience, use the run-e2e-tests-local-fast.sh script in the repository root. This is the recommended way to run E2E tests locally for both developers and AI agents. Unlike the Docker-based approach above, this script runs only PostgreSQL in Docker while running the server and client directly on the host. Services stay running between test runs, so re-runs only take seconds.

The script automatically kills any processes occupying ports 8080 (server), 9000 (client), and 7921 (local VC SSH), so you don't need to manually stop conflicting processes before running it.

Prerequisites: Docker, Java 25+, Node.js 24+, npm.

Architecture:

  • Docker: PostgreSQL only (port 5432)
  • Host: Spring Boot server via ./gradlew bootRun -x webapp (port 8080)
  • Host: Angular client via npm start (port 9000, proxies API calls to 8080)
  • Host: Playwright tests connect to http://localhost:9000

First run (starts everything):

./run-e2e-tests-local-fast.sh

This starts PostgreSQL, boots the server (~30-90s), starts the client (~10-20s), then runs all Playwright tests.

Run specific tests:

Use --filter with a pattern (supports regex) to run only matching tests:

# Run tests with "Quiz" in the name
./run-e2e-tests-local-fast.sh --filter "Quiz"

# Run tests matching multiple patterns
./run-e2e-tests-local-fast.sh --filter "ExamAssessment|SystemHealth"

Re-run tests (services already running):

After the first run, services stay running. Skip startup for faster re-runs:

# Run all tests (skips starting services that are already running)
./run-e2e-tests-local-fast.sh --skip-server --skip-client --skip-db

# Run only specific tests
./run-e2e-tests-local-fast.sh --skip-server --skip-client --skip-db --filter "Quiz"

# Open Playwright UI for debugging
./run-e2e-tests-local-fast.sh --skip-server --skip-client --skip-db --ui

Stop everything:

./run-e2e-tests-local-fast.sh --stop

Available options:

Flag                  Description
--stop                Kill server, client, and database; exit
--filter <pattern>    Run only tests matching the pattern (supports regex, e.g., "Quiz" or "Quiz|Exam")
--skip-server         Reuse already-running server
--skip-client         Reuse already-running client
--skip-db             Reuse already-running PostgreSQL
--headed              Run Playwright in headed mode (visible browser)
--ui                  Open Playwright UI mode for interactive debugging
--video               Enable video recording (off by default to save CPU)
--coverage            Enable coverage collection (off by default, requires extra memory)
--debug               Show server and client output inline (normally only in log files)

Logs and PID files are stored in the .e2e-local/ directory (gitignored).
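If a run fails during startup, those logs are the first place to look. A sketch (the exact file names inside .e2e-local/ are an assumption):

ls .e2e-local/                   # PID files and service logs live here
tail -f .e2e-local/server.log    # log file name is an assumption; check the ls output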

If you want to install Playwright manually, follow these steps:

1. Install dependencies

First, navigate to the Playwright folder:

cd src/test/playwright

Then install the dependencies:

npm install

2. Customize Playwright configuration

We need to configure Playwright to match our local Artemis setup and user settings. All configurations are stored in the playwright.env file. The default configuration for an ICL setup looks as follows:

PLAYWRIGHT_USERNAME_TEMPLATE=artemis_test_user_
PLAYWRIGHT_PASSWORD_TEMPLATE=artemis_test_user_
ADMIN_USERNAME=artemis_admin
ADMIN_PASSWORD=artemis_admin
ALLOW_GROUP_CUSTOMIZATION=true
STUDENT_GROUP_NAME=students
TUTOR_GROUP_NAME=tutors
EDITOR_GROUP_NAME=editors
INSTRUCTOR_GROUP_NAME=instructors
BASE_URL=https://localhost
EXERCISE_REPO_DIRECTORY=test-exercise-repos
FAST_TEST_TIMEOUT_SECONDS=45
SLOW_TEST_TIMEOUT_SECONDS=90

Make sure BASE_URL matches your Artemis client URL and ADMIN_USERNAME and ADMIN_PASSWORD match your Artemis admin user credentials.

3. Configure test users

Playwright tests require users with different roles to simulate concurrent user interactions. If you already have generated test users, you can skip this step. Generate users with the help of the user creation scripts under the supporting_scripts/playwright folder:

setupUsers.sh

You can configure user IDs and check their corresponding user roles in the src/test/playwright/support/users.ts file. Usernames are defined automatically by appending the userId to the PLAYWRIGHT_USERNAME_TEMPLATE. At the moment, changing the template string is discouraged, as the user creation script does not support other names yet.
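As an illustration of the derivation (a sketch, not the actual code in users.ts; the helper name is hypothetical):

// With the default template, userId 5 yields 'artemis_test_user_5'
const template = process.env.PLAYWRIGHT_USERNAME_TEMPLATE ?? 'artemis_test_user_';

function usernameFor(userId: number): string {
    return `${template}${userId}`;
}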

4. Setup Playwright package and its browser binaries

Install the Playwright browser binaries and set up the environment so Playwright can locate them. On some operating systems this may not work, and Playwright must be installed manually via a package manager.

npm run playwright:setup-local
npm run playwright:init

5. Open Playwright UI

To open the Playwright UI, run:

npm run playwright:open

This opens a graphical interface that allows you to run individual tests, test files, or test suites while observing the test execution in a browser window.

Another way to run tests is through the command line. To run all tests in the command line, use:

npm run playwright:test

To run a specific test file, use:

npx playwright test <path_to_test_file>

If you want to run a specific test suite or a single test, add the -g flag to the previous command, followed by the test suite name or test name. For example, you can run the test suite "Course creation" located in the file CourseManagement.spec.ts using the command:

npx playwright test e2e/course/CourseManagement.spec.ts -g "Course creation"

Test parallelization

Running tests in parallel may speed up test execution. We achieve this using Playwright's built-in parallelization feature. By default, tests are configured to run in fully parallel mode. This means that all tests in all files are executed in parallel. Test execution tasks are divided among worker processes. Each process runs a separate browser instance and executes a subset of tests. The number of worker processes can be adjusted in the playwright.config.js file.

To run tests sequentially (one after another), set the workers option to 1. To run tests within each file sequentially, while running test files in parallel, set the fullyParallel option to false.
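As a sketch, the relevant options in the Playwright configuration look roughly like this (values are illustrative):

import { defineConfig } from '@playwright/test';

export default defineConfig({
    // true: all tests in all files run in parallel (the default here);
    // false: test files run in parallel, but tests within a file run sequentially
    fullyParallel: true,
    // number of worker processes; set to 1 for fully sequential execution
    workers: 4,
});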

Test projects: fast and slow tests

Playwright is configured with two test projects that determine the per-test timeout budget:

Project      Tag              Timeout (local)   Timeout (CI)   Use for
fast-tests   @fast or no tag  45 s              60 s           Tests that complete quickly and do not involve long-running background jobs
slow-tests   @slow            90 s              90 s           Tests that wait for CI/CD builds, large data processing, or other asynchronous operations

Tag a test suite with { tag: '@slow' } when it involves build pipelines or polling for background jobs; otherwise omit the tag (defaults to fast-tests):

// Fast test (default — no tag needed)
test.describe('Competency Management', { tag: '@fast' }, () => { ... });

// Slow test
test.describe('Programming Exercise Participation', { tag: '@slow' }, () => { ... });

Best practices when writing new E2E tests

Understanding the System and Requirements

Before writing tests, a deep understanding of the system and its requirements is crucial. This understanding guides what needs testing and what defines a successful test. The best way to build it is to consult the component's original developer or someone actively working on it.

Identify Main Test Scenarios

Identify the main ways the component is supposed to be used. Try the action with all involved user roles and test as many different inputs as feasible.

Identify Edge Test Scenarios

Next to the main test scenarios, there are also edge case scenarios. These tests cover inputs/actions that are not supposed to be performed (e.g., entering a too-long input into a field) and test the error-handling capabilities of the platform.

Write Tests as Development Progresses

Rather than leaving testing until the end, write tests alongside each piece of functionality. This approach keeps the code testable and makes it easier to identify and fix issues as they arise.

Keep Tests Focused

Keep each test focused on one specific aspect of the code. If a test fails, it is easier to identify the issue when it does not check multiple functionalities at the same time.

Make Tests Independent

Tests should operate independently from each other and external factors like the current date or time. Each test should be isolated. Use API calls for unrelated tasks, such as creating a course, and UI interaction for the appropriate testing steps. This also involves setting up a clean environment for every test suite.
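A sketch of this split (the fixture and helper names are hypothetical, in the style of the fixtures described below):

import { test } from './support/fixtures'; // fixture-extended test (path illustrative)
import { admin } from './support/users';   // admin credentials (path illustrative)

type Course = { id: number }; // simplified stand-in for the real course entity

test.describe('Course deletion', () => {
    let course: Course;

    test.beforeEach(async ({ login, courseManagementAPIRequests }) => {
        // unrelated setup goes through the API: fast, and not what is under test here
        await login(admin);
        course = await courseManagementAPIRequests.createCourse();
    });

    test('Instructor deletes the course', async ({ courseManagementPage }) => {
        // the behavior under test is exercised through the UI
        await courseManagementPage.openCourse(course.id);
        await courseManagementPage.deleteCourse();
    });
});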

Use Descriptive Test Names

Ensure each test name clearly describes what the test does. This makes the test suite easier to understand and makes it quick to identify which test has failed.

Use Similar Test Setups

Avoid using different setups for each test suite. For example, always check for the same HTTP response when deleting a course.

Do Not Ignore Failing Tests

If a test consistently fails, pay attention to it. Investigate as soon as possible and fix the issue, or update the test if the requirements have changed.

Regularly Review and Refactor Your Tests

Tests, like code, can accumulate technical debt. Regular reviews for duplication, unnecessary complexity, and other issues help maintain tests and enhance reliability.

Playwright testing best practices

1. Use page objects for common interactions

Page objects are a design pattern that helps to abstract the details of the page structure and interactions. They encapsulate the page elements and their interactions with the page. This makes the tests more readable and maintainable. Page objects are stored in the support/pageobjects folder. Each page object is implemented as a class containing a Playwright page instance and may have instances of other page objects as well. Page object classes provide methods performing common user actions or returning frequently used locators. Page objects are registered as fixtures to make them easily accessible in tests without caring about their initialization and teardown.
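A minimal page object sketch (the class name, selectors, and method names are illustrative):

import { Locator, Page } from '@playwright/test';

export class CourseCreationPage {
    constructor(private readonly page: Page) {}

    // frequently used locator exposed via a method
    titleField(): Locator {
        return this.page.getByTestId('course-title-input'); // test id is hypothetical
    }

    // common user action wrapped in a single call
    async createCourse(title: string) {
        await this.titleField().fill(title);
        await this.page.getByTestId('create-course-button').click(); // test id is hypothetical
    }
}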

2. Use fixtures

A test fixture in Playwright is a setup environment that prepares the necessary conditions and state required for your tests to run. It manages initialization and cleanup tasks so that each test starts from a known state. We use fixtures for all page objects (POMs) and common test commands such as login. Fixtures are defined in support/fixtures.ts.

To create a fixture, define its instance inside a corresponding existing type or define a new one:

export type ArtemisPageObjects = {
    loginPage: LoginPage;
};

Ensure the base test (base) extends the fixture type. Define a fixture with the relevant name and pass the desired instance to the use() function, as below:

export const test = base.extend<ArtemisPageObjects>({
    loginPage: async ({ page }, use) => {
        await use(new LoginPage(page));
    },
});

Inject the fixture into a test when needed as an argument to the test() function, as follows:

test('Test name', async ({ fixtureName }) => {
    // Test code
});

3. Use uniquely identifiable locators

Use unique locators to identify elements on the page. Playwright throws an error when interacting with a locator that matches multiple elements on the page. To ensure uniqueness, use locators based on the element's data-testid, id, unique class, or a combination of these attributes.

Avoid using the nth() method or the nth-child selector, as they rely on the element's position in the DOM hierarchy. Use these methods only when iterating over multiple similar elements.

Avoid using locators that are prone to change. If a component lacks a unique selector, add a data-testid attribute with a unique value to its template. This ensures that the component is easily identifiable, making tests less likely to break when there are changes to the component.

Prefer data-testid over text-based and role-based locators as the primary strategy. Text-based and role-based locators break whenever display text or component structure changes — turning a UI rename into a test failure unrelated to any regression. A data-testid attribute survives UI refactors as long as the attribute itself is preserved.
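For example, assuming the template carries a (hypothetical) test id:

// Template (Angular): <button data-testid="create-course-button">Create course</button>
// getByTestId targets the data-testid attribute by default
await page.getByTestId('create-course-button').click();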

4. Consider actionability of elements

Checking an element's state before interacting with it is crucial to avoid flaky behavior. Actions like clicking a button or typing into an input field require the element to be in a particular state, such as visible and enabled, which makes it actionable. Playwright ensures that the elements you interact with are actionable before performing such actions.

However, some complex interactions may require additional checks to ensure the element is in the desired state. For example, consider accessing the inner text of an element that is not yet visible. Use the locator's waitFor() function to wait for the visible state before reading the inner text:

await page.locator('.clone-url').waitFor({ state: 'visible' });
const urlText = await page.locator('.clone-url').innerText();

In some cases, we may need to wait for the page to load completely before interacting with its elements. Use the waitForLoadState() function to wait for the page to reach a specified load state:

await page.waitForLoadState('load');

5. AI-assisted test authoring with a Playwright MCP server

You can optionally use a Playwright Model Context Protocol (MCP) server as an external tool to expose the live application DOM to AI coding assistants such as GitHub Copilot and Claude Code while Artemis is running locally.

Without access to the running page state, AI assistants can still generate test structure and boilerplate, but they may produce incorrect locators. When a separately configured Playwright MCP server is active, the assistant can inspect the current DOM, retrieve the data-testid attributes attached to elements, and generate locators that reference those IDs directly.

To use this workflow, start Artemis locally (via the fast runner or any other method) and configure a Playwright MCP server in your editor or AI assistant environment. The assistant can then inspect page structure and generate accurate data-testid-based locators for the workflow under test. Review the generated test by running it locally, confirm that the interaction sequence matches the intended workflow, and correct any steps where the assistant misread the application state.

CI Pipeline

Overview: two-phase execution

Every pull request triggers a two-phase E2E pipeline. Phase 1 runs only the tests mapped to the files changed in the pull request; Phase 2 runs all remaining tests, but only when Phase 1 passes. This architecture ensures developers receive targeted feedback within minutes for the tests most likely to surface a regression, while still running the full suite before merge.

Figure: Two-phase CI pipeline activity diagram (Developer, CI Pipeline, Helios, and Reports Dashboard swimlanes). Phase 1 runs change-relevant tests first. If Phase 1 passes, Phase 2 runs the remaining tests. Both phases query Helios for flakiness scores and upload artifacts to the Reports Dashboard before posting the PR comment.

Observed performance (across 89 measured runs):

Metric            Phase 1    Phase 2                          Full suite (single-phase baseline)
Median duration   4.3 min    23.8 min                         27.7 min
P90 duration      8.3 min    29.4 min                         32.2 min
Pass rate         82%        64.4% (of runs that reach it)

When Phase 1 detects a failure, developers receive feedback within approximately four minutes and Phase 2 does not consume compute resources.

For pushes to develop or main, the pipeline skips the two-phase logic and runs the full suite in a single job.

Selective test execution and e2e-test-mapping.json

The determine-tests CI job reads .ci/E2E-tests/e2e-test-mapping.json to decide which tests to include in Phase 1. The mapping file lists, for each Artemis module, the source paths that belong to it and the test paths that cover it. All testPaths and allTestPaths entries are resolved relative to src/test/playwright/ (the Playwright test root).

{
    "allTestPaths": [
        "e2e/atlas/",
        "e2e/course/",
        "e2e/exam/ExamAssessment.spec.ts",
        "e2e/exam/ExamChecklists.spec.ts",
        "e2e/exam/ExamCreationDeletion.spec.ts",
        "e2e/exam/ExamDateVerification.spec.ts",
        "e2e/exam/ExamManagement.spec.ts",
        "e2e/exam/ExamParticipation.spec.ts",
        "e2e/exam/ExamResults.spec.ts",
        "e2e/exam/ExamTestRun.spec.ts",
        "e2e/exam/test-exam/",
        "e2e/exercise/ExerciseImport.spec.ts",
        "e2e/exercise/file-upload/",
        "e2e/exercise/modeling/",
        "e2e/exercise/programming/",
        "e2e/exercise/quiz-exercise/",
        "e2e/exercise/text/",
        "e2e/lecture/",
        "e2e/Login.spec.ts",
        "e2e/Logout.spec.ts",
        "e2e/SystemHealth.spec.ts"
    ],
    "mappings": {
        "atlas": {
            "sourcePaths": [
                "src/main/java/de/tum/cit/aet/artemis/atlas/",
                "src/main/webapp/app/atlas/"
            ],
            "testPaths": ["e2e/atlas/"]
        },
        "exam": {
            "sourcePaths": [
                "src/main/java/de/tum/cit/aet/artemis/exam/",
                "src/main/webapp/app/exam/"
            ],
            "testPaths": ["e2e/exam/"]
        }
    },
    "alwaysRunTests": [
        "e2e/Login.spec.ts",
        "e2e/Logout.spec.ts",
        "e2e/SystemHealth.spec.ts"
    ],
    "runAllTestsPatterns": [
        "src/main/resources/config/",
        "docker/",
        "build.gradle",
        "angular.json"
    ]
}

The determine-tests job compares the PR branch against the base branch and maps each changed file to its module. The union of the test paths from all matched modules, plus the alwaysRunTests entries, forms the Phase 1 set. The remaining entries from allTestPaths form the Phase 2 set.
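In pseudocode terms, the selection works roughly like this (a sketch, not the actual determine-tests implementation; the special cases below are omitted):

type Mapping = {
    allTestPaths: string[];
    mappings: Record<string, { sourcePaths: string[]; testPaths: string[] }>;
    alwaysRunTests: string[];
};

declare const mapping: Mapping;
declare const changedFiles: string[]; // PR branch vs. base branch diff

// Phase 1: tests mapped from changed files, plus the always-run set
const phase1 = new Set<string>(mapping.alwaysRunTests);
for (const file of changedFiles) {
    for (const module of Object.values(mapping.mappings)) {
        if (module.sourcePaths.some((prefix) => file.startsWith(prefix))) {
            module.testPaths.forEach((testPath) => phase1.add(testPath));
        }
    }
}

// Phase 2: everything in allTestPaths not already covered by a Phase 1 path
const phase2 = mapping.allTestPaths.filter(
    (testPath) => ![...phase1].some((prefix) => testPath.startsWith(prefix)),
);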

Special cases:

  • If a changed file matches a runAllTestsPatterns entry (e.g., a Docker or Gradle file), the pipeline runs all tests in a single job.
  • If only Playwright spec files were changed, Phase 2 is skipped and only Phase 1 runs.
  • If no module mapping is found for a changed file, it falls through to Phase 2.

Adding tests for a new module: add an entry to mappings in e2e-test-mapping.json and add the test path(s) to allTestPaths.

PR comment reporting

The pipeline posts a single comment on each pull request and updates it progressively as phases complete. A typical comment shows:

  • a phase table with status (✅ / ❌), test count, passed, skipped, failed, and wall-clock time per phase
  • the test file paths assigned to each phase
  • links to the workflow run and the full HTML report
  • a collapsible section listing each failed test name and its duration
  • a flakiness score table for any failed tests, when Helios data is available

Three outcomes are possible:

  1. Both phases pass — example: PR #12516
  2. Phase 1 passes, Phase 2 fails — example: PR #12489
  3. Phase 1 fails, Phase 2 skipped — example: PR #12487

Helios flakiness scores

After each phase, the pipeline queries the Helios API to retrieve flakiness scores for any failed tests. A flakiness score reflects how often a test has non-deterministically produced different results over recent CI runs. A high score suggests the failure may be a flake; a low score suggests a genuine regression. Scores appear in the PR comment alongside failed test names.

Reports Dashboard

The Reports Dashboard is a web interface that stores and serves all E2E test artifacts from CI runs, accessible from the link in the PR comment. It provides:

  • Aggregate view: total runs, 30-day pass rate, average flakiness rate, pass rate trend chart, and runtime-by-phase chart
  • Per-run view: phase breakdown, test-level results with status and duration, links to the full HTML and coverage reports
  • Failure detail view: the full assertion error, stack trace, and an embedded video recording of the browser at the moment of failure

Artemis Deployment on GitHub Runner

Every execution of the Playwright test suite requires the Artemis server and client to be deployed and started. The runner starts all required services using Docker Compose and executes Playwright tests on the host.

Figure: Hardware-software mapping of the E2E testing infrastructure (Developer Machine, GitHub, Self-Hosted CI Runner, Reports Dashboard, and Helios). The Developer Machine pushes to GitHub, which triggers the Actions Orchestrator. The Self-Hosted CI Runner executes tests and sends results to the Reports Dashboard and PR comments via the Report and Communication Handler. The Helios node provides flakiness scores.

E2E tests run on self-hosted runners (not GitHub-hosted) because they require substantial compute resources. Self-hosted runners do not start from a clean state between jobs; the pipeline explicitly cleans up leftover test result files between phases to prevent workspace pollution from inflating test counts.

In total there are two Docker containers started on the GitHub runner, plus Playwright running on the host:

1. PostgreSQL

This container starts a PostgreSQL database and exposes it on port 5432. The container automatically creates a new database named Artemis and configures it with the recommended settings for Artemis.

2. Artemis Application

The Docker image for the Artemis container is created from the existing Dockerfile. When the Playwright build starts, it retrieves the Artemis executable (a .war file). When the Artemis Docker image is created, the executable is copied into the image together with the configuration files for the Artemis server.

The main configuration of the Artemis server is contained in the Playwright environment configuration files. Security-relevant settings are passed to the Docker container via environment variables and GitHub secrets.

The Artemis container is configured to depend on the PostgreSQL container and uses health checks to wait until the database is up and running.
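In Docker Compose terms, this wiring looks roughly as follows (an illustrative sketch, not the actual compose file; service names and the healthcheck command are assumptions):

services:
  postgres:
    image: postgres
    healthcheck:
      # pg_isready reports whether the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 20
  artemis-app:
    # starts only once the database healthcheck passes
    depends_on:
      postgres:
        condition: service_healthy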

3. Playwright

Playwright runs directly on the runner host (not in a container). The base URL is configurable via the BASE_URL environment variable read from src/test/playwright/playwright.env (defaults to https://localhost). The necessary configuration for the Playwright test suite is passed in via environment variables. Playwright only starts once Artemis has been fully booted.

Maintenance

The Artemis Dockerfile and the PostgreSQL image are already maintained because they are used in other Artemis Docker setups, so only Playwright and the Playwright Docker image require active maintenance. Since the Playwright test suite simulates a real user, it should be executed with the latest browser versions. If you run Playwright inside Docker locally (using docker/playwright.yml), note that the Playwright Docker image ships browsers pinned to specific versions; update the docker-compose file monthly to pull the latest Playwright image with up-to-date browsers. This step does not apply to CI, where Playwright runs directly on the host (not in a container).

When a new Artemis module is created or an existing module is significantly reorganized, update .ci/E2E-tests/e2e-test-mapping.json:

  1. Add a new entry to mappings with the module's sourcePaths and testPaths.
  2. Add all new test paths to allTestPaths so Phase 2 covers them on unrelated PRs.
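For example, a new entry for a hypothetical plagiarism module would look like this (paths are illustrative):

"plagiarism": {
    "sourcePaths": [
        "src/main/java/de/tum/cit/aet/artemis/plagiarism/",
        "src/main/webapp/app/plagiarism/"
    ],
    "testPaths": ["e2e/plagiarism/"]
}

Then add "e2e/plagiarism/" to allTestPaths as well.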

Functionalities Covered

Atlas
  • Competency Management: Creating, editing, and deleting competencies; setting taxonomy levels and mastery thresholds; soft-deleting with linked exercise warnings
  • Competency–Exercise Interactions: Linking and unlinking exercises to competencies; verifying that exercise progress contributes to competency mastery
  • Competency–Lecture Unit Interactions: Linking lecture units to competencies; verifying lecture unit completion updates competency progress
  • Competency Import: Importing competencies from other courses; verifying imported data integrity
  • Learning Path Management: Enabling learning paths in a course; verifying the learning path activation flow
  • Student Competency Progress View: Student competency overview, mastery indicators, judgment of learning ratings, and progress tracking across lecture units and exercises

Courses
  • Course Management: Creating, editing, deleting courses; adding/removing students from a course
  • Course Exercise: Filtering exercises based on their title
  • Course Communication: Messaging within courses, including channel creation, group chats, student participation, and message interactions

Exams
  • Exam Management: Managing exam students and exercise groups
  • Exam Creation & Deletion: Creating, editing, and deleting exams
  • Exam Participation: Early & normal hand-in, exam exercise participation for text, modeling, quiz, and programming (Git SSH/HTTPS) exercises, exam announcements
  • Exam Assessment: Assessing modeling, text, quiz, and programming exercise submissions in exams, including complaint handling
  • Exam Checklists: Exam setup checks, including student registration, exercise groups, and exam publication
  • Exam Date Verification: Exams appear/disappear based on visibility dates
  • Exam Results: Exam result overviews for text, quiz, modeling, and programming exercises
  • Exam Test Runs: Creating, managing, and deleting exam test runs
  • Exam Statistics: Exam statistics are displayed correctly
  • PlantUML Diagram Isolation: PlantUML diagrams in exam exercises render in isolation without leaking state between participants

Test Exams
  • Test Exam Creation & Deletion: Creating and deleting test exams
  • Test Exam Management: Managing test exam configuration and settings
  • Test Exam Participation: Participating in test exams as a student
  • Test Exam Student Exams: Generating and managing individual student exam instances
  • Test Exam Test Runs: Creating and managing test runs within a test exam

Exercises
  • Exercise Import: Importing text, quiz, modeling & programming exercises

File Upload Exercises
  • Management: Creating and deleting file upload exercises
  • Participation: Students can participate in a file upload exercise
  • Assessment & Feedback: Assessing submissions, student feedback visibility, and complaint handling

Modeling Exercises
  • Management: Creating, editing, and deleting modeling exercises
  • Visibility Controls: Students can access released/unreleased exercises
  • Participation: Students can start and submit models
  • Assessment & Complaints: Instructor and tutor assessments, student feedback, and complaint resolution

Programming Exercises
  • Management: Creating and deleting programming exercises
  • Team Management: Forming and managing exercise teams
  • Assessment: Assessing programming exercise submissions
  • Participation: Submitting code through the code editor and Git (HTTPS & SSH), submissions for Java, C, and Python, team participation
  • Static Code Analysis: Configuring SCA grading and handling submissions with SCA errors

Quiz Exercises
  • Management: Creating quizzes with multiple-choice, short-answer, and drag-and-drop questions
  • Deletion & Export: Ensures quizzes can be deleted and exported
  • Participation: Student participation in hidden, scheduled, and batch-based quizzes
  • Assessment: Verifies automatic assessment for multiple-choice and short-answer quizzes
  • Drag-and-Drop Mechanics: Ensures correct placement of draggable quiz elements

Text Exercises
  • Management: Creating and deleting text exercises
  • Participation: Ensures students can submit text exercises
  • Assessment & Complaints: Instructor assessments, feedback visibility, and complaint handling

Lectures
  • Lecture Management: Creating and deleting lectures, managing existing lectures

Authentication
  • Logging in: Logging in via UI and programmatically, login failures
  • Logging out: Logging out successfully and canceling logout

System Status
  • System status indicators: Continuous integration & VC server health; database, Hazelcast, and WebSocket health; readiness and ping checks