I’ve started to look into how to do this with SQL Server.
I’m adding the mssql Docker image to the classroom action runner, then using input/output grading tests to run queries (via sqlcmd) that write their results to stdout. The output can then be compared against an expected output using the grading test’s ‘included’, ‘exact’, or ‘regex’ comparison modes. This allows each test to be marked granularly.
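For reference, a single test entry in .github/classroom/autograding.json could look something like this. This is just a sketch: the database name, password, query, and expected output are all placeholders of mine, and the field names follow the classic GitHub Classroom autograding schema.

```json
{
  "tests": [
    {
      "name": "Exercise 1: select customer names",
      "setup": "",
      "run": "sqlcmd -S localhost -U sa -P '<sa password>' -d '<database name>' -Q 'SELECT name FROM customers'",
      "input": "",
      "output": "Alice",
      "comparison": "included",
      "timeout": 10,
      "points": 10
    }
  ]
}
```

With ‘included’, the test passes as long as the expected text appears anywhere in the query output, which is more forgiving of sqlcmd’s column padding than ‘exact’.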
I have a proof-of-concept here: https://github.com/markpatterson27/Simple-SQL-Exercise if you want to have a look at the workflow and autograding in the .github folder.
The principles for MySQL should be the same.
I believe MySQL is already installed on the runner but is disabled by default (GitHub Classroom uses the ubuntu-latest runner). You’ll need to add
run: sudo systemctl start mysql.service
to the classroom.yml workflow. See: https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu1804-README.md#mysql
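In the workflow file, that would be an extra step before the autograding action runs, roughly like this (the step name is my own, and I’m assuming the usual education/autograding action from a classroom-generated workflow):

```yaml
# .github/workflows/classroom.yml (sketch)
steps:
  - uses: actions/checkout@v2
  # MySQL is preinstalled on ubuntu-latest but not started by default
  - name: Start MySQL
    run: sudo systemctl start mysql.service
  - uses: education/autograding@v1
```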
Then use an input/output autograding test:

Run command: something like
mysql --user="root" --password="root" --database="<database name>" --execute="<SQL statements to generate test>"

Expected output: whatever you want to compare the SQL statement output against.
You can probably add
sudo systemctl start mysql.service
to the setup field of the first autograding test, before importing the students’ work into MySQL. That would save you having to modify the workflow file.
pytest scripts would make it easier to parse the output from SQL queries and write exact tests, but then you did write ‘think’ in capitals…
TL;DR: use bash to run SQL queries and compare the output using the input/output autograding test.