What Kids (and Teachers) Get Wrong About Project-Based Learning
I teach a project-based class in Data Science. We have no formal quizzes or tests. Each term, students complete two or three projects, each typically given 2-3 class periods of work time, with anything unfinished completed as homework.
I’ve found that the phrase “project-based” is a huge selling point for kids. No tests or quizzes? Sign me up! And, it’s a huge barrier to entry for math teachers, who typically don’t run their class like, say, an art class (where, I would argue, a LOT of learning happens in ways that don’t often happen in math classrooms). A colleague of mine didn’t mince words when he told me that my class sounds like an “absolute nightmare” to manage.
But, I’ve held true to my project-based aspirations for this course, as I believe that Data Science is best learned when done authentically. Here, I’ll discuss what students get wrong about learning in a project-based environment (which is largely not their fault), and why teachers shouldn’t fear a project-based classroom.
What Kids Get Wrong
A class without traditional tests and quizzes is actually pretty hard. In fact, it’s just as hard as a class with frequent closed-note assessments, and - in some cases - much harder. Whenever I create Data Science project assignments, I always start with a sheet of minimum expectations and 4 or 5 datasets I’ve found that mesh well with those expectations. But there is always dedicated time during the first stages of the project for students to hunt for their own datasets that they find more interesting. In my experience, around 40-60% of students will pursue their own data using tools like Kaggle, government sites like census.gov, and others. In the name of authenticity and relevance, those students have just made their own lives harder, because their datasets are typically messier and require more preprocessing than the datasets that I curate (and that’s before they’ve even started to work on the minimum expectations!).
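To give a flavor of the extra work a self-sourced dataset can create, here is a minimal sketch of the kind of cleanup that often has to happen before any analysis can even start. The tooling (pandas), the file name, and the column names below are invented for illustration, not a prescription from my course:

```python
import pandas as pd

# Hypothetical student-found CSV (e.g., something pulled from Kaggle or census.gov).
df = pd.read_csv("county_population.csv")

# Cleanup that curated classroom datasets rarely need:
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")  # tidy inconsistent headers
df = df.drop_duplicates()                                              # remove repeated rows
df["population"] = pd.to_numeric(df["population"], errors="coerce")    # numbers stored as text become NaN if unparseable
df = df.dropna(subset=["population"])                                  # drop rows that could not be parsed

# Only now can the work on the minimum expectations begin.
print(df.describe())
```

None of these steps is hard on its own, but together they are exactly the friction that my curated datasets remove up front.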
But, students who choose their own datasets score better, on average, than students who choose from my pre-selected datasets. On a traditional, closed-note test, we would typically expect the questions with a lower difficulty to have higher scores. It’s totally obvious to me, however, why the opposite is true on a Data Science project. Students who pick their own datasets have fully invested themselves in this process of “learning from data.” The minimum standards become something of an afterthought as students ask and answer their own questions simply because they want to.
The majority of students who really struggle in my class are those who have a hard time sustaining effort through longer periods of work time. Add a screen to that mix, and there are some students who seem to actively avoid productive work, even though all of the tools are right in front of them. Some call it an issue of motivation. I’m less inclined to accuse these students of poor motivation, as I believe that I’m just observing students who’ve never had the opportunity to learn for the sake of learning before. The same students who leapt at the chance to take a class without any tests or quizzes discover (the hard way) that an open-ended project requires sustained interest, planning, executive functioning, and many other skills that seem to be deteriorating in the current attentional landscape.
When I notice a student struggling to sustain attention during project work time (after all, it’s pretty obvious when they’re working hard versus trying to get around the school’s social media site blocker), I tend to redouble my efforts to have them select a dataset that they actually find interesting to learn from. I can honestly tell them, “You don’t even need to focus on the minimum project standards - just answer questions that you find interesting about data you find interesting.” The great part is when the students meet the minimum project standards anyway. Or, if they don’t, when they receive feedback on something that they find interesting and get the opportunity to revisit it later. Through that, my hope is that they build stronger self-management and executive functioning skills while also putting their data science skills on display.
I should be clear: this is very hard. I’m writing this in May, with my seniors less than a month from graduation. There are a few kids I’m still working (read: fighting) with to get their final projects off the ground. Each year, I try to grow my own skills and keep my own motivation level high so that more kids will walk away from this course having had a great experience. I’m certainly not there yet, but I think the ingredients are there.
What Teachers Get Wrong
The prospect of a classroom where every kid is looking at a different thing is scary. There is a certain comfort in knowing that students are all being assessed on the same skills at the same time, and that the time for assessment is contained within a nice 60-ish minute period. In math, we are really spoiled when it comes to assessment. With the right moves, we can make a huge, summative, multi-topic assessment gradable in just a couple of hours (sorry, humanities teachers!).
Project-based learning simply doesn’t allow for that. Kids are doing their own versions, iterations, and edits on questions that they find more interesting, which means that the teacher needs to find them interesting too.
I think teachers get this part really wrong, though. I’m never happier in my classroom than when I call up students, one by one (much to their chagrin), and simply ask them what they’re working on. I position myself as their partner in each of their projects, and I give them full transparency into how I will be evaluating them later, even going so far as to tell them explicitly, “I want you to accomplish X and Y to get full credit on this part of the rubric.” I can even get a good sense of where their grade is heading, and either (a) steer them in a different, more fruitful direction that meets the project criteria or (b) have a solid idea of how they’ve tackled the project criteria even before the project is complete.
What I’ve discovered in project-based assessment is that kids typically know near-exactly what grade they’re going to get (a good rubric will do that); kids pay more attention to the feedback they get than the grade they get; and grading always takes less time and energy than I thought it would.
Grading a project is less, “What is this kid trying to do here?” and more, “Oh, this kid was trying to get X; let’s see if and how they did it.”
As kids continually revisit the same datasets and improve their skills, the cycles of feedback allow for some of the smoothest grading I’ve ever done. Sometimes, I even enjoy it! Sometimes.
I urge teachers to be unafraid of project-based assessment, especially in courses that are much better served by authentic experiences, like computer science and statistics. Every now and then, I get the urge to write a quick multiple-choice quiz on some important data science topic, and while those assessments certainly have their place, I’m glad I haven’t given in to that urge yet.