Hiding tests from students

I think I figured it out lol. It took creating a bunch of encrypted secrets (to hide basically everything from the student via org secrets), remotely checking out my master solution repo (I even encrypted its name; students are smart lol), copying the test into the student’s runner (even the test’s file name is a secret), and then running the test. It works really well, actually.

I just wonder how foolproof it is. Or how safe it is. I even overwrite the test itself when the workflow is done, just because lol.

Can someone chime in on security or opinions?

I ‘upgraded’ an assignment I’ve already shared on this forum. Open it up and there’s not even a test in the repo. But as soon as the student pushes their solution code, the test runs.

Repo: https://github.com/Professor-Ruiz/CS1030-Lesson-10

name: Build and Test

on:
  push:
    paths:
      - 'src/exercise.py'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Open remote
      # check out the (secret-named) master solution repo
      uses: actions/checkout@v2
      with:
        repository: ${{ secrets.TEN_THIRTY }}
        token: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
    - name: Get remote
      # stash the hidden test as a short-lived artifact
      uses: actions/upload-artifact@v2
      with:
        name: ${{ secrets.T_E }}
        path: lesson10/${{ secrets.T_E }}
        retention-days: 1
    - name: Open current
      # now check out the student's repo
      uses: actions/checkout@v2
    - name: Remove current
      # delete any student-supplied copy of the test before restoring the real one
      run: if [ -f ${{ secrets.T_N_S }}/${{ secrets.T_E }} ]; then rm ${{ secrets.T_N_S }}/${{ secrets.T_E }}; fi
    - name: Upload remote
      # drop the pristine test into the student's checkout
      uses: actions/download-artifact@v2
      with:
        name: ${{ secrets.T_E }}
        path: ${{ secrets.T_N_S }}
    - name: Set up Python 3.x
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pytest flake8
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: python3 -m pytest
    - name: Clear
      # blank out the test file so it never lingers in the workspace
      run: echo ' ' > ${{ secrets.T_N_S }}/${{ secrets.T_E }}
    - name: Update
      uses: actions/upload-artifact@v2
      with:
        name: ${{ secrets.T_E }}
        path: ${{ secrets.T_N_S }}

Ooh. I’m going to have to try that. :slight_smile: Thank you for sharing.

Could students update the GitHub workflow file with something like cat <test file name> in this setup? If the file name is also a secret, they could probably run something like ls -la first and then cat it. Would that work?

The fact that students can modify the CI script in the repository and print everything to the build log is what stopped me from bothering with hiding tests. Although I must admit I researched this topic a couple of years ago with Travis CI; maybe GitHub workflows have something different.

My current “still in development” solution is to use the GitHub API to check whether test files in the repository were modified by any user except me or the TAs. If so, I get a notification. Although, in my opinion, the one or two students who are smart enough to modify the tests and/or the CI script should not be punished, as long as they don’t share the hack with others. Fighting students who submit someone else’s perfectly working code is a bigger issue for me.
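A minimal sketch of that kind of check, assuming a hypothetical tests/ path and trusted-user list, against the GitHub REST API commits endpoint (GET /repos/{owner}/{repo}/commits?path=…); the author filtering is a pure function so it can be exercised without network access:

```python
import json
import urllib.request

# Hypothetical instructor/TA logins -- replace with your own.
TRUSTED = {"Professor-Ruiz", "ta-account"}

def untrusted_commits(commits, trusted=TRUSTED):
    """Return (sha, login) pairs for commits not authored by a trusted user.

    `commits` is a list of commit objects as returned by the GitHub API;
    the `author` field can be null, which we treat as untrusted.
    """
    flagged = []
    for c in commits:
        author = c.get("author") or {}
        login = author.get("login", "<unknown>")
        if login not in trusted:
            flagged.append((c["sha"], login))
    return flagged

def check_test_file(owner, repo, path, token):
    """Fetch the commit history touching one file and flag untrusted edits."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?path={path}"
    req = urllib.request.Request(url, headers={"Authorization": f"token {token}"})
    with urllib.request.urlopen(req) as resp:
        return untrusted_commits(json.load(resp))
```

Anything `check_test_file` returns for a tests/ path could then feed a notification or the spreadsheet cell.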


I guess my original intention wasn’t necessarily to “hide” the test, but to test the student’s submission with a clean untampered version of said test.

I wonder if I could do a file-level git revert in the runner to get the original version of the test and use that for testing? It seems so simple… I’ll report back lol
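For what it’s worth, that file-level revert is a single git command; a sketch of a wrapper a workflow step could call before pytest runs (the path and ref are hypothetical, and this only helps if the ref is a branch students can’t push to):

```python
import subprocess

def restore_pristine_test(test_path: str, ref: str = "origin/main") -> None:
    """Discard any local tampering with the test file by restoring the
    version committed at `ref` -- equivalent to:

        git checkout <ref> -- <path>

    Raises CalledProcessError if git fails (e.g. file not tracked at ref).
    """
    subprocess.run(["git", "checkout", ref, "--", test_path], check=True)
```

Of course, a student who can edit the workflow can also delete the step that calls this, so it protects against tampered test files, not tampered CI scripts.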

But I do feel like, with enough motivation, a student could dig deep enough to cheat the test. Then again, with that level of skill, would that student even need to cheat?

Also, what do you use for notifications? A webhook? Dude, curl is like a superpower.

I’m using a Google spreadsheet to record student progress. I have a bunch of Python scripts that use the GitHub API and the Google Sheets API to automatically fill the spreadsheet with students’ progress on all assignments. If a student does something bad with an assignment, that information goes into the corresponding cell in the spreadsheet.

The structure of the spreadsheet is pretty simple: rows start with the student’s name, and columns correspond to different assignments. The last column sums the grading points earned by a student over the entire course. Students have read-only access to the spreadsheet, so they can monitor their progress as well.
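The row layout described above is simple to sketch. A minimal version with hypothetical data, keeping the layout logic separate from the Google Sheets API calls that actually push the rows:

```python
def build_rows(progress, assignments):
    """Build spreadsheet rows: [name, points per assignment..., total].

    `progress` maps student name -> {assignment: points}; missing
    assignments count as 0. First row is the header.
    """
    rows = [["Student"] + assignments + ["Total"]]
    for student in sorted(progress):
        pts = [progress[student].get(a, 0) for a in assignments]
        rows.append([student] + pts + [sum(pts)])
    return rows
```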

I must say we do have an LMS at our university. Actually, we have two of them: the main one is developed in house, and the other is Moodle. Unfortunately, there is no way I can automate any interactions with those systems, due to the lack of functionality of the in-house one and the bureaucracy of the department that manages the Moodle instance. So I had to come up with a way to extract information about students’ progress from GitHub and find a place to store it. My initial goal was to use publicly available resources to set up my classroom, to save time on managing a dedicated host with a DBMS, a web app, and other stuff. At the time, Google Sheets looked like a good solution, and I haven’t changed my mind yet.

All your comments are so interesting!

I use Java as the language, and I came to the conclusion that it was not worth it to try to hide the tests.

Students can learn from them.
Tests can guide development, as in TDD.
A student with enough knowledge to understand the tests and the GitHub Actions scripts is unlikely to need to cheat.

Anyway, as @markpolyak commented, I usually check whether the tests and the Actions scripts are untouched. It’d be nice if I could automate that task…

Some randomization strategy, to prevent students from hard-coding the tests’ expected behavior, is also good practice.
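One way to do that in a pytest setup is to derive each student’s test inputs from a seed, so answers copied from another student’s passing run won’t match. A minimal sketch, assuming a hypothetical add() exercise; the seed could be the student’s repo name (e.g. the GITHUB_REPOSITORY environment variable in Actions):

```python
import random

def make_cases(seed: str, n: int = 5):
    """Generate n (a, b, expected) triples for a hypothetical add() exercise.

    Seeding the RNG with a per-student string makes the cases differ
    between students yet stay reproducible across reruns of the same repo.
    """
    rng = random.Random(seed)
    cases = []
    for _ in range(n):
        a, b = rng.randint(0, 100), rng.randint(0, 100)
        cases.append((a, b, a + b))
    return cases
```

In the hidden test file these triples could feed pytest.mark.parametrize, so the student sees only pass/fail, never a fixed answer key.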
