Academic Exchange Quarterly Fall 2010 ISSN 1096-1453 Volume 14, Issue 3
To cite, use print source rather than this on-line version which may not reflect print copy
format requirements or text lay-out and pagination.
This article should not be reprinted
for inclusion in any publication for sale without author's explicit
permission. Anyone may view, reproduce or store copy of this article for
personal, non-commercial use as allowed by the "Fair Use"
limitations (sections 107 and 108) of the U.S. Copyright law. For any other
use and for reprints, contact the article's author(s), who may impose a usage fee.
Online Learners, Choices, and Assignments
Lynn Akin, Texas Woman’s University
Akin, Ph.D., is an associate professor at Texas Woman's University.
This research examines the nature of choices within an online learning environment. Students were provided an array of ungraded assignment selections at regular intervals. Research questions include what choices would be most selected, which choices would yield better quiz grades, and whether students appreciated having choices. Student grades, choice data, and student survey responses provide the data. Three side elements of interest include how students responded to the anonymous element, the ungraded factor, and the experience of the course graduate teaching assistant.
An online course thrives on interactivity. Numerous studies counsel teachers and instructors to build in effective communication strategies, effective discussion questions, and effective teacher engagement. The underlying message is that only an unwise online teacher ignores students' willingness to participate in their own online education. A little-examined area, however, is truly letting students organize their learning by giving them a choice of assessment exercises. If students should indeed participate in their own knowledge construction, then assignment choices are an excellent way to promote self-direction.
Within this research, choices are defined as an array of assignments from which the student can select. Offering students choices respects the basic tenets of andragogy and constructivism (Fritz 2007; Jona 2000), in which students bring their own experience and preferences to the assignment. Johnson and Aragon (2003) suggest increasing individual control within the online environment, and choices provide a measure of that control. Carefully selected choices can capture different learning biases so that students have real options for structuring the work to their own preferences (Palloff and Pratt 2007; Lamb, Johnson, and Smith 2008; Diaz and Cartnal 1999; Collier and Morse 2002). Engleman and Schmidt, in their 2007 research with graduate students and choices, found that 78% of the students surveyed would like to see choices incorporated in every lesson, 82% believed they challenged themselves with their choice selection, and 83% thought that their choice helped their performance.
This research explores the nature of choices in an online learning environment. For this study, assignment choices were offered to students enrolled in several sections of a graduate level online class. The research questions include:
1. What types of choices would be most frequently selected by online learners?
2. Would certain choices lead to better quiz grades?
3. How would students feel about having choices?
Two sections of a graduate-level online course, taught during the same term with a total of 60 students, were selected to experience assignment choices, and these choices were clearly indicated in the course syllabus. In a choice module, three types of assignments were offered, all ungraded, and students could select the one they preferred. Initially, choices appeared in modules 3, 6, and 9. However, due to some early confusion with technology, online learning, and the online courseware, results from module 3 are not included in this research.
A choice module. A choice module offered each student the opportunity to select, from three possibilities, the ungraded learning activity they wished to complete; different modules offered different choices. Choices included interviews, search exercises, scholarly article analyses, and worksheet exercises. Each activity was selected to support the module objective, but some choices were better than others because they were either more reflective of what the student had just learned or more predictive of the upcoming quiz. It was left to the student to make that distinction, although they knew what the upcoming quiz would cover, and that information should have helped them make good choices. Discussion forums were set up and labeled for choices 1, 2, and 3, which occurred in modules 3, 6, and 9.
Random numbers were assigned to each student by the Graduate Teaching Assistant (GTA) and were known only to the individual student and the GTA. After a student had made a choice and produced the work, he turned it in to the GTA, who removed all identifying student information and inserted the randomly assigned student number. The GTA posted the student work, and all the instructor could see were the numbers and the assignments. Feedback from the instructor was posted directly to the forum, addressed to the number. Feedback was always provided before the next quiz so students could be sure that their choice had been completed correctly.
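The anonymization procedure above can be sketched in a few lines; this is a minimal illustration only, and the names, number range, and record format are hypothetical rather than taken from the actual course records:

```python
import random

# Hypothetical roster: the GTA assigns each student a unique random
# number, known only to that student and the GTA.
students = ["Student A", "Student B", "Student C"]
ids = random.sample(range(10, 100), len(students))  # unique random numbers
roster = dict(zip(students, ids))  # the mapping only the GTA holds

def post_anonymously(author, work):
    """Strip the author's name and label the posting with the number only,
    under the GTA's name, e.g. '#88 - GTA: ...'."""
    return f"#{roster[author]} - GTA: {work}"
```

Because only the roster links names to numbers, the instructor sees nothing but numbered work, and the GTA can check a submission's number against the roster before posting.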
A quiz module. The module immediately following a choice module had a typical graded quiz. This quiz was anticipated, located on the course calendar and syllabus, and it was a standard assessment of what had been learned in the intervening modules.
A reflective module. The module following the quiz asked the students to reflect on the choice they had made, why they had made it, the efficacy of their selection, and how they felt about choice making.
During the two choice modules evaluated for this research, students were allowed to select whichever choice they preferred. There were, however, indications as to which choice might be most useful: these instructor 'hints' could be found in the quiz review forums where the class reviewed course material. Asterisks in Tables 1 and 2 mark the choice the instructor felt was most useful for doing well on the quiz.
RQ 1: Choice Types
The types of choices varied, but all represented the major learning styles of the VARK model (VARK 2010; Vierheller 2005; Zapalska and Brozik 2006). The acronym VARK stands for visual, auditory, read/write, and kinesthetic learning. In this research, the search exercise was a hands-on activity that would satisfy the needs of the kinesthetic learner. The interview, talking to a working professional engaged in an area of expertise, was directed at the visual and auditory learner. The scholarly article involved reading and synthesizing, so this choice appealed to the read/write learner. In a later module, students could select a recall/precision exercise (kinesthetic), the article synthesis (read/write), or an actual search-narrowing exercise, another hands-on, kinesthetic choice.
Insert Table 1.
Insert Table 2.
As the tables show, students tended to select hands-on assignments. The search exercise, the search-narrowing assignment, and the recall/precision problem were all 'doing' assignments: performing, refining, and evaluating an actual database search; narrowing a pre-existing search through strategic steps; or working a precision formula. This hands-on factor cannot be construed as a pure indicator of student preference, as students knew what an upcoming quiz would cover and certain choices were better than others. In fact, student comments specifically stated that the search exercise was picked in order to prepare for the quiz. On the other hand, online students maneuver in an abstract environment, so a concrete assignment may have seemed a useful learning device. Andragogy would predict the hands-on selection because adult learners prefer a problem-based approach. Constructivism also supports the hands-on choice, as it allows students to participate in creating their own learning.
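The recall/precision problem mentioned above rests on the standard retrieval-arithmetic definitions (precision is the share of retrieved documents that are relevant; recall is the share of relevant documents that were retrieved). As a worked illustration, with document counts invented for demonstration rather than drawn from the course:

```python
# Hypothetical counts for a single database search.
retrieved = 50                 # documents the search returned
relevant_in_collection = 40    # relevant documents that exist
relevant_retrieved = 30        # relevant documents among those returned

# Standard definitions of the two measures.
precision = relevant_retrieved / retrieved              # 30/50 = 0.60
recall = relevant_retrieved / relevant_in_collection    # 30/40 = 0.75

print(f"precision={precision:.2f} recall={recall:.2f}")
```

A student working such a formula by hand performs exactly this arithmetic, which is what makes the exercise a kinesthetic, 'doing' choice.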
The instructor was surprised at the number of scholarly article choices made, but in the student comments, some remarked that the scholarly article was the more convenient choice. Presumably they meant that a university library is well situated to provide a research-based article on a relevant subject within a few minutes of an online search. Another interesting observation was that the article chosen was frequently not a research article at all, but rather an essay or an opinion piece, despite the fact that a brief description of the elements of a research article was provided in the choice description.
In total, only four students selected the interview choice; the other 56 avoided it. Given the notion of online students being remote or isolated from their classmates, it might have been predicted that the interview would be a popular choice. Additionally, a basic script was provided, with instructions that following the script exactly would earn a grade of B; one would have to go beyond the outlines of the basic script to be eligible for an A. Perhaps graduate students preferred the possibility of earning an A, perhaps the interview choice was not seen as relevant to adult learners, or perhaps effective database searching is such an intensely hands-on activity that talking to someone about it seemed unhelpful.
Anonymity and Grades
The fact that students were not identified by their choices was an interesting, though peripheral, part of this research. Students were instructed to make their choice, do the work, and then send it to the GTA, who removed their identifying information. Using the assigned random numbers, the GTA posted all products in the appropriate forum, under her name and with the student number. In other words, a choice might read that it was from #88 - GTA. The original idea was that the instructor would have no idea who made which choices, so no favoritism would be possible regarding feedback or quiz grading, and there could be no snowball favoritism, as the instructor would not know who consistently did well on the choices. What did emerge, however, was student anxiety about grades, even in an ungraded environment.
On occasion, the instructor was privately asked to review a choice before the student sent it to the GTA. These requests were returned with a reminder that the work was to be anonymous, not first reviewed by the professor. The professor was sent finished choices with requests for a grade, and received private emails asking how students did, despite the fact that every choice received public feedback in the appropriate forum, in a timely fashion, before the quiz. Feedback ranged from 'well done' for an excellent product, to 'you might consider adding X' for a good product, to 'go back and re-read the problem and review what others have posted' for work of poor quality.
The instructor went to great lengths to explain that this was a true learning situation. Nobody knew whose work belonged to whom. Nobody knew who did well and who did poorly, not even the instructor. Even then, some students felt obliged to go into the forum, under what was their anonymous work, and claim it. While this was not the prime purpose of this research, it was still interesting to watch students cling to wanting a grade in a learning exercise where a grade was useless.
To most students, particularly those new to the program or taking their first online course, the graduate teaching assistant (GTA) is usually assumed to be a "neutral" party and "helpmate" who has all of the answers. Given this status, the GTA received myriad questions about the choice exercises. It appears that several students could not believe they were actually being given the option of selecting an exercise, so many of the first-round questions sought reassurance that they were correctly interpreting the instructions to mean that they were being offered a "pre-quiz" exercise complete with instructor feedback. In the research of Engleman and Schmidt (2007), students asked similar questions regarding choices, and the researchers noted that some students seemed to think there was a "secret instructor-preferred group of activities" that careful questioning would ferret out.
The second set of queries concerned anonymity. While the instructor went to great lengths to assure students of anonymity through the random assignment of numbers known only to the GTA, many still appeared to have reservations about their status and identification numbers. A closely related issue was the frequency with which students forgot their numbers or, worse, used the incorrect number on their submissions to the GTA. The GTA kept records of each submission by participant name and number and always checked both before posting, which made it easy to determine immediately when an incorrect number had been used. After correcting the identification numbers, the GTA posted the submissions to the forum with accurate information.
As was the case with the instructor, the third most frequent incident was requests for the GTA to review the work before it was posted to the Blackboard forum. These students were reminded that the purpose of the exercises was to help students master the material, and while the GTA did have the role of "helpmate," reviewing work prior to submission was not part of these choice exercises.
Finally, there were a few students who, after receiving feedback from the instructor, posted their revisions directly to the Blackboard forum under their identification number instead of resubmitting the corrected work to the GTA for posting under her name. Because these students posted directly from their personal email accounts under their identification numbers, their names were linked directly to their numbers, thereby revealing their identity. While the GTA immediately removed these revisions and reposted them under her name and the student identification number, the possibility remained that these students' identities had been compromised.
RQ2: Relationship of Choice to Quiz Performance
Because students did have knowledge about the content of upcoming quizzes, did their choices maximize their ability to score well on a quiz? According to the results, yes. Each quiz was worth 20 points, and the following tables show each choice and the average quiz grade. An asterisk by a choice indicates what the instructor thought was the best selection if a student wanted to perform well on the quiz. In both cases, the instructor's best choice did yield high quiz grades. While it could be argued that the hints about what the quiz would cover might dictate one choice over another, it would be impossible for any ethical instructor not to provide guidance on what a student should study in order to do well on a test. It should also be noted that graduate students must maintain a B or higher average, so they would probably pay careful attention to instructor guidance on what to study. Another possibility is that an array of equally quiz-worthy selections would also tell something about the making of a choice. Students uniformly did well on the quiz, with all grades higher than Cs.
Insert Table 3.
Insert Table 4.
RQ 3: Student Perspectives
After each choice module, students were asked if they liked having choices and the response range was strongly agree, agree, neutral, disagree, and strongly disagree.
Student response was overwhelmingly positive. Approximately 97% of section 1 and 99.9% of section 2 either strongly agreed or agreed that they liked having choices. Not one student selected disagree or strongly disagree regarding having choices.
Research Questions Summary
The first research question asked what types of choices would be selected most often. Students tended to select hands-on choices, and they pragmatically tended to select whatever they thought would lead to a good quiz grade. The numbers show that the two assignments most often selected were the same ones the students had reason to believe would be on the quiz. Interestingly, both of these choices tended to be hands-on, concrete work. It would be interesting to offer a choice such as a scholarly article with the 'hint' that the quiz would cover more scholarly issues: would students still pick the hands-on work? Another observation is that students did not pick the interview with a professional, with only 4 out of 60 students making that selection. Those same 4 students also received the lowest grades on the first quiz. One might expect that online students far from campus would select the one-on-one intimacy of an interview, but clearly they avoided it completely. Certainly other factors, such as student competence, come into play, but it would be interesting to know why the interview was the least popular choice overall.
Quiz grade variances followed either the best-pick scenario or the hands-on scenario. Students who made a choice based on what the quiz covered tended to get slightly higher grades on the first quiz, and that best choice was also a hands-on project. With the second quiz, however, the best choice and another choice tied for the highest quiz grades; what those two choices had in common is that both were hands-on exercises. It is difficult to know, based on this exploratory research, whether hints about quiz coverage or actually doing a hands-on exercise leads to better grades in a course that covers an area of professional expertise relying on a skill set.
Finally, students like having choices. Anecdotally, they remarked on class forums that they appreciated having a say in their assignments. Later classes were also allowed to select a due date: three choices could be turned in at any time during three modules. Needless to say, students liked the due-date choice as well. They remarked that they could create a due-date calendar accounting for all of the assignments due during the entire term, for all of their classes, and then slot the choice assignments into academically light weeks.
The Instructor and Choices
Building in choices may not be the best first step for a beginning online instructor. It takes a great deal of management skill and experience merely to keep up with the day-to-day demands of an online course. But if the instructor has online experience, has taught the class several times, and has used all of the assignments successfully in the past, then the first step would be to offer a choice of assignments with a single set due date. That has the instructor grading a mix of choices, all due on the same date. The next step up in complexity is having the assignment choices due across a variety of modules, which has the instructor grading a mix of assignments on a variety of due dates. The pedagogical issue is to make sure all choices support the content and the due dates reflect and respect the learning cycle.
The instructor also has the opportunity to evaluate his or her grading style. When 30 students turn in the same assignment on the same day, there is a comparison factor that feeds into the act of grading. When 10 students turn in 4 different assignments, on 4 different dates, the instructor has an ideal chance to impartially evaluate his or her grading skills.
There is increased responsibility on the part of the instructor to facilitate choices in assignments. Gautreau, Street, and Glaeser (2008) found that 50 percent of surveyed faculty members agreed that providing learning choices increased their teaching workload. While logistical issues are involved, faculty also noted that it was a worthwhile time investment based on student outcomes.
A final element of interest regards instructional design and whether course management systems encourage a linear, module-by-module 'fill in the blanks' mindset. This type of electronic structure might discourage authentic activities (Herrington, Reeves, and Oliver 2005) or might tend to suppress ways of teaching that do not fit into the blanks. Jona (2000, 4) observes that, "driven by the computer's ability to automatically score multiple choice tests," online learning "is awash in tests, quizzes, and other assessments" rather than being one where "learner choice" becomes the touchstone.
The statistics and the anecdotal reports support offering choices. Students enjoy them, they invigorate the course, they allow instructors to reevaluate their grading and assignment types, and they take advantage of the online medium by allowing students to share their choices and what they discovered. It is worth a try.
Collier, Catherine, and Frances Morse. 2002. Requiring independent learners to
collaborate: Redesign of an online course. Journal of Interactive Online
Learning, 1:1. Available at http://www.ncolr.org/jiol/issues/PDF/1.1.3.pdf
Diaz, David, and Ryan Cartnal. 1999. Students' learning styles in two classes. College
Teaching, 47, no. 4: 130-36.
Engleman, Melissa, and Mary Schmidt. 2007. Testing an experimental universally
designed learning unit in a graduate level online teacher education course. Journal of Online Learning and Teaching, 3, no. 2. Retrieved from http://jolt.merlot.org/vol3no2/engleman.htm
Fritz, Michele. 2007. Risk free trial: Front loading tactics to engage college students in
Online Teaching: Faculty Perspectives. International Journal of Instructional
Technology and Distance Learning, June, 2008. Retrieved from http://itdl.org/Journal/Jun_08/Jun_08.pdf#page=7
Herrington, Jan, Thomas Reeves, and Ron Oliver. 2005. Online learning as information delivery:
Digital myopia. Journal of Interactive Learning Research. Retrieved from http://ro.uow.edu.au/edupapers/32/
Johnson, Scott, and Steven Aragon. 2003. An instructional strategy framework for online
learning environments. New Directions for Adult and Continuing Education, 100: 31-43.
Jona, Kemi. 2000. Rethinking the design of online courses. ASCILITE (Australasian Society for Computers in Learning in Tertiary Education). Available at http://www.kemijona.com/papers/Rethinking%20Online%20Courses.pdf
Lamb, Annette, Larry Johnson, and William Smith. 2008. The basics: Eight elements of
effective online courses. Teaching and learning at a distance. Retrieved from http://eduscapes.com/distance/the_basics/elements.htm
Palloff, Rena, and Keith Pratt. 2007. Building Online Learning Communities. San Francisco: Jossey-Bass.
VARK: A guide to learning styles. 2010. Available at http://www.vark-learn.com
Vierheller, Tim. 2005. VARK learning styles. PowerPoint presentation provided by AuthorStream. Available at http://www.authorstream.com/Presentation/aSGuest16320-172599-vark-learning-styles-education-ppt-powerpoint/
Zapalska, Alina, and Dallas Brozik. 2006. Learning styles and online education. Campus-Wide Information Systems, 23, no. 5: 325-335. Available at Emerald Management Xtra fulltext.