Autograde Tests Not Running At All

I’m trying to get my first Classroom assignment up and running, but I’m having trouble getting Autograder to work.

I’ve set up Pytest for the assignment repo, and running the tests locally works, so I used the invite link to create a “test student.” I set up the assignment itself with the “run Python” autograde option, which installs pytest with sudo -H pip3 install pytest and runs it with python3 -m pytest (I also tried plain pytest).

However, when I uploaded the assignment as the “test student,” all I get is “No tests have been run.” I don’t get any kind of “view tests” button, no link to a “workflows/autograding” page, nothing. I have zero feedback about why my tests aren’t running.

The documentation has been of no help on this, since it appears to assume it just works.

What am I doing wrong?

Same here. I pushed the assignment with my own test account, and the test passed. However, in Classroom, I cannot see the test result. The message below the student identifier is just “No tests have been run”.

Over the course of tinkering with things, I managed to get it working.

Wanting to have some way to auto-run the tests on GitHub, I added my own Python test-running workflow. This had the happy side effect of giving me useful information about why it wasn’t running any tests: it turns out that Pytest didn’t like my file structure and wasn’t discovering my tests.

My file structure now looks like this, which is what the Pytest docs call the “src layout”:

[screenshot: repository file structure using Pytest’s src layout]
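
For reference, a minimal sketch of what that layout looks like; the package and file names here are hypothetical, not the actual assignment:

    assignment-repo/
    ├── src/
    │   └── assignment/
    │       ├── __init__.py
    │       └── solution.py
    ├── tests/
    │   └── test_solution.py
    └── requirements.txt

Note that with the src layout, Pytest usually needs the package to be importable, e.g. via pip install -e . with a minimal setup.cfg or pyproject.toml.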

I think this is what ultimately got it working, but I did also change how I set up the workflows and autograding, which seems to better ensure that I get feedback if/when something goes wrong with the workflow/autograder itself. Instead of going through the GUI when setting up an assignment, I added the Autograding workflow straight to my Workflows folder in the repo itself by dissecting the one that the GUI adds to the student repo.

Then, when I set up my assignment, I just bypass the “add tests” part entirely. GitHub’s tooling goes off of the presence of the workflow configuration files and picks up the tests that way.

So, in my .github folder, my structure now looks like this:

[screenshot: .github folder structure]
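
Spelled out, that structure is:

    .github/
    ├── classroom/
    │   └── autograding.json
    └── workflows/
        └── classroom.yml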

autograding.json contains the tests I want to run, in JSON format (this is where the fields in the GUI end up when you go that route), something like this:

    {
      "tests": [
        {
          "name": "Test",
          "setup": "",
          "run": "pytest",
          "input": "",
          "output": "",
          "comparison": "included",
          "timeout": 5,
          "points": 1
        }
      ]
    }
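
This also makes it easy to split the grading into smaller suites for more granular points; a sketch, with hypothetical test file names:

    {
      "tests": [
        {
          "name": "Basics",
          "setup": "",
          "run": "pytest tests/test_basics.py",
          "input": "",
          "output": "",
          "comparison": "included",
          "timeout": 5,
          "points": 2
        },
        {
          "name": "Edge cases",
          "setup": "",
          "run": "pytest tests/test_edge_cases.py",
          "input": "",
          "output": "",
          "comparison": "included",
          "timeout": 5,
          "points": 3
        }
      ]
    }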

I don’t run the setup here because I have it in the main workflow file, so the pip install part is done just once, which saves Action minutes. Now, my main workflow file looks like this (I kept the name classroom.yml in case Autograder/GH Classroom does something weird, like looking specifically for that file to know it’s a classroom assignment):

    name: GitHub Classroom Workflow

    on:
      push:
        branches: [ main ]

    jobs:
      build:
        name: Autograding
        runs-on: ubuntu-latest
        steps:
          - uses: actions/labeler@v2
            with:
              repo-token: "${{ secrets.GITHUB_TOKEN }}"
          - uses: actions/checkout@v2
          - name: Set up Python 3.8.5
            uses: actions/setup-python@v2
            with:
              python-version: 3.8.5
          - name: Install dependencies
            run: |
              python -m pip install --upgrade pip
              pip install flake8 pytest
              if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          - name: Lint with flake8
            run: |
              # stop the build if there are Python syntax errors or undefined names
              flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
              # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
              flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
          - uses: education/autograding@v1

There’s a bit more to this than just Autograding, like the labeler action and the Flake8 linting. Those two aren’t required, but they’re useful checks for me (the labeler is set up to detect changes in the tests folder and add a label to the PR, as in the config sketch below; the linter stops the entire run on syntax errors). The key steps are checkout, setup-python, the dependency install and, of course, the autograding action.
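
The labeler reads its rules from .github/labeler.yml; a minimal version of what I described might look like this (the label name and glob are my choices, nothing magic about them):

    # .github/labeler.yml: apply the "tests" label when files under tests/ change
    tests:
      - tests/**/*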

setup-python and the Install dependencies step get you the most recent version of Python (or whatever specific version you want) and put it on the PATH, then install Pytest and (in my case) Flake8. That gives Autograder access to Pytest without any extra setup in the autograding step itself, leaving it free to worry only about running the tests.

The really nice thing about this setup is that you don’t have to re-declare all the test/Autograding stuff if you use this repo in other classes; it all lives in the repo itself. (You can do this for the Input/Output tests, too; there’s a sketch below.) This is especially useful if you split the autograding into smaller test suites for more granular autograding data.
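
For the Input/Output variant, an autograding.json entry uses the same fields as above, just with input/output filled in and an exact comparison. A hypothetical example (square.py and the values are made up):

    {
      "name": "Square a number",
      "setup": "",
      "run": "python3 square.py",
      "input": "5",
      "output": "25",
      "comparison": "exact",
      "timeout": 5,
      "points": 1
    }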

Note for VSCode users: getting VSCode to cooperate with having these assignments in one workspace was as much of a pain as getting Autograder working. This setup actually works beautifully with VSCode’s Python and testing tools, as long as you set it up as a multi-root workspace (the “Add Folder to Workspace…” option for each assignment). There might be a way to get Tasks to pick up folders in a common root and add them automatically as new roots to the workspace, but I haven’t dug that deep yet.
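
If it helps, a saved multi-root workspace is just a .code-workspace JSON file; the folder names here are made up:

    {
      "folders": [
        { "path": "assignment-1" },
        { "path": "assignment-2" }
      ]
    }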


Thank you for sharing this with a very detailed explanation!

Unfortunately, I am still in trouble. When I push, everything works fine: it runs as specified in autograding.json and classroom.yml. However, the test results cannot be found on GH Classroom; the “No tests have been run” message is still shown.

Do you have any idea about this? It looks like the test results are not delivered to GH Classroom.

Where did you push the code? The template repo or the student repo?

If you pushed it to the template repo, student repos don’t get updates, I don’t think. You have to at least accept the assignment as a “different” student for the changes to be picked up.

Same here. I tried following your steps:

  • set up a workflow in the template repo (classroom.yml and autograding.json)
  • use the “src” file structure
  • skip the GUI for the autograder

By doing this with a fresh assignment, I don’t even get the “No tests have been run” message.
I can only see the number of commits, even though education/autograding@v1 runs correctly in GitHub Actions. Did I miss something?

Then, if I add an autograder test by updating the assignment with the GUI, the workflow is overwritten by an automated commit and I’m back to the “No tests have been run” message.

Of course, I pushed the code to the student repository created from the template repository. It seems that the education/autograding action does not work as intended.

I am under exactly the same situation!

With the template repo containing a workflow setup, I don’t get the “No tests have been run” message either. I tried the following:

  • set the template repo public/private
  • make the student repo public/private
  • grant a student admin access or not

None of the above works. I have no idea why.

Without seeing your repositories, I don’t know that I can help much more.

The first thing that comes to mind is to make sure your folder structure is correct and that Pytest can pick up the tests locally. If the files/folders are set up correctly, you can add the assignment folder to VSCode and run Pytest via the Python extension; it will pick up the tests.
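
You can also check discovery from the command line; Pytest’s --collect-only flag lists the tests it finds without running them:

    # run from the assignment root; shows discovered tests without executing them
    python -m pytest --collect-only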

The second thing is to make sure the .github folder structure is correct: .github/classroom/autograding.json for the test definitions (make sure the JSON itself is valid, too) and .github/workflows/classroom.yml for the workflow file.

If you can see the action being run in the Actions screen, then see if there are any errors happening.
