The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit. Some of the points below are also applicable to classroom response systems in general.
Signing up the students
As I mentioned in a previous post, this went fairly smoothly. One reason is that the process is an easy one, but another reason is that there were enough devices for students to share in order to access the website for creating an account and registering. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other.
My class was small (33 students), so it was relatively easy to get everyone sorted. In a larger class this could be a challenge. I would probably have students sign up in advance of class, and then be willing to write off part of the first class for purposes of troubleshooting.
One thing I would do differently is have every student register as a voter, regardless of whether they plan to respond from a browser or by texting. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem is that if a student responds from a web browser but forgets to sign in, they show up on my list anonymously as an unregistered participant. More than one student did this, so it wasn't possible to tell which student had entered which answers.
Students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Reports with results can be generated and downloaded as a spreadsheet, although the amount of information in the spreadsheet made it difficult to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped.
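If you find yourself pruning the downloaded spreadsheet every week, the column-trimming step is easy to script. Here is a minimal sketch using Python's standard csv module; the column names and sample rows are my own assumptions for illustration, not the actual Poll Everywhere export format, which has many more columns.

```python
import csv
import io

# Hypothetical export data -- the real spreadsheet's columns will differ.
raw = """Participant,Email,Phone,Question,Response,Correct,Timestamp
Alice,a@example.edu,,Q1: Mineral hardness,Quartz,1,2012-05-01 09:05
Bob,,555-0100,Q1: Mineral hardness,Calcite,0,2012-05-01 09:06
"""

# Keep only the columns needed to see who struggled with which question.
keep = ["Participant", "Question", "Correct"]

reader = csv.DictReader(io.StringIO(raw))
trimmed = [{col: row[col] for col in keep} for row in reader]

for row in trimmed:
    status = "right" if row["Correct"] == "1" else "wrong"
    print(row["Participant"], "-", row["Question"], "-", status)
```

With a real export you would replace `io.StringIO(raw)` with `open("results.csv")`; the trimming logic is the same.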
Integrity of testing
At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond to testing questions. I figured that if they didn’t know the answer, that would at least restrict how much they flipped through their notes. It also helps to ask questions where the answer isn’t something they can look up.
It turned out that 25 seconds was a good time limit, although they got longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that incorrect answers reflected a gap in their knowledge rather than a misunderstanding of what the question was asking.)
There is a timer that can be set, but combined with the work-around I was using it introduced too many potential complications. Instead, I avoided the timer and either used the stopwatch on my smartphone or counted hippopotamuses.
Setting the correct answer to display
If you set a question to be graded, students can see whether or not they got the correct answer within the Poll Everywhere website. By default there is a one-day delay between when the question is asked and when the answer is shown (under “Settings” and “Response History”). I wanted the students to be able to review their answers on the same day if they were so inclined, so I set an option to allow the correct answer to be shown immediately. The problem, I later discovered, is that if one student responds and then checks the answer, the correct answer can get passed on to other students.
Ask a friend
Another issue with the integrity of testing done using Poll Everywhere—or any classroom response system, for that matter—is the extent to which students consult with each other prior to responding. I could have been particular on this point, and forbidden conversation, but I wasn’t keen on the task of policing. But as it turned out, conversing with one’s neighbour didn’t preclude both students from getting the answer wrong. In a large class it would be impossible to control communications between students, which is one of the reasons why any testing done using this method should probably represent only a small part of the total grade.
Who sees what when
There are two ways to turn a poll on, and they each do different things. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one of these things without doing the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not. If a question is for testing purposes then you probably don’t want it to be displayed before you ask it in class.
When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before. As far as I knew I hadn’t pushed the poll, so…? The only explanation I can think of is that someone was responding to a different poll and texted the wrong number. Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.
One other thing to keep in mind is to check before asking a question that one hasn’t written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that’s probably what has happened.
Accommodating technology and life
Stuff happens. If a student misses class, he or she will also miss the questions and the points that could have been scored for answering them. If the absence is for an excusable reason (or even if it isn’t) a student might ask to make up the missed questions. As this would take the form of a one-on-one polling session, and the construction of a whole suite of new questions, I knew it was something I didn’t want to deal with.
One could simply not count the missed questions against the student’s grade, but that wasn’t a precedent I wanted to set either. Instead I stated in the syllabus that there would not be a make-up option, but that each student would have a 10-point “head start” for the Poll Everywhere questions. Whatever the student’s score at the end of the course, I added 10 points, up to a maximum of 100%. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day’s absence, which is not unreasonable.
Another thing the 10 points was intended to do was offset any technological problems, like a student’s smartphone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box in the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.
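The head-start arithmetic amounts to adding 10 points and capping at the total. A small sketch, with the function name and example point totals being my own inventions:

```python
def adjusted_score(raw_points, total_points, head_start=10):
    """Apply the head-start bonus, capping at a 100% score.

    raw_points / total_points are the student's poll results over the
    course; head_start=10 matches the figure stated in the syllabus.
    """
    capped = min(raw_points + head_start, total_points)
    return 100.0 * capped / total_points

# A student who missed one day's worth of questions can still reach full
# marks, and a perfect score isn't pushed past 100%.
print(adjusted_score(72, 80))  # 72 + 10 = 82, capped at 80 -> 100.0
print(adjusted_score(60, 80))  # 70 / 80 -> 87.5
```

The cap is the important part: without `min()`, strong students would end up with scores over 100%.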
I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.
I will be teaching in September, and I plan to use Poll Everywhere again. Even if it happens that my classroom is outfitted with a receiver for clickers, I’ll still stay with Poll Everywhere. For one, my questions are already set up, ready and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a “duct tape” fix for a technical problem, but in the end I think I prefer it because I have more control over what can fit on the slide. As far as I know, Turning Point questions (the usual clicker system) can’t be started unless the slide that will show the results is the current slide.
One more reason is that the system will be free for students to use, outside of whatever data charges they might incur. I will either cover the cost myself, or, if there is no Turning Point option, attempt to convince the school to do it. A plan exists where the students can pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer to not have students wondering why they had to pay for a service that they can’t use.
Overall, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.