Literature Evaluation via Online Group Quiz

Sharon K. Park, Pharm.D., BCPS

Notre Dame of Maryland University

Courses designed to teach students how to evaluate biomedical literature are essential in pharmacy education. However, one of the most challenging parts of teaching and assessing literature evaluation in a large classroom setting is determining how much the students have actually learned versus how much they merely seem to know. The true level of students’ knowledge and skills often does not become apparent until they are on clinical rotations, attempting their first journal club on their own.
As with many other skill sets, an effective way to teach literature evaluation is to increase the amount of time students spend actively verbalizing their thought processes. However, unless the course is tied to a skills lab, running a journal-club-like activity in the didactic setting is time-consuming and resource-intensive. One way to solve this problem is to have students talk to one another and share their knowledge in small groups, working against the clock and in competition with the other groups.
In a literature evaluation course with about 54 students, 13 groups were formed based on the students’ preferences. Each week for 5 consecutive weeks, the students were assigned a clinical trial to read in preparation for a group quiz consisting of 10 four-option multiple-choice questions delivered via Quizizz (quizizz.com). Several free online quiz platforms are available, including Kahoot! (getkahoot.com), Quizlet Live (quizlet.com/live), and Quizalize (quizalize.com), to name a few. Several reviews comparing these platforms exist on the Internet, and each platform has its own advantages and unique features. Instructors now have a variety of platforms to choose from, whether for a small survey or a full-scale assessment. An example screenshot of one of the quizzes, via Quizizz, is shown here.
[Screenshot: an example group quiz in Quizizz]
Student groups were asked to sit together and register with the quiz number to begin the quiz on one student’s computer. As the students entered their answers, the Quizizz platform tracked their progress, including correct and incorrect responses, ranking, and speed. The time allowed can be controlled for each question based on its difficulty. However, the groups are also ranked by the time spent answering the questions, and a majority of them spent less than 10 minutes answering the 10 questions, even with the study articles available to them during the quiz.
A short survey was administered to determine whether the students perceived the group quiz as helpful for improving their skills and whether the group quiz should continue to be offered; an overwhelming majority agreed that the activity was helpful and should continue in the future.
When using these online platforms for either group or individual assessment, it is important to ensure that the questions are simple enough to be answered within a reasonable amount of time. In addition, the amount of time given to each group and each quiz should be consistent to deliver a fair assessment; some of the platforms (including Quizizz) do not limit or cut off the activity when the allotted time has elapsed. The number of students in each group should also be no more than three in order to increase meaningful participation among all group members.

Assessment-Related Tools and Technologies Webinar

As educators, we must always remember that the objective of education is learning, not teaching, and that teaching and learning are not synonymous. Simply because information was taught does not guarantee that students learned it, or that they learned what was intended. So how do we ensure that learning happened?

To ensure learning occurred, defining learning outcomes for what students are intended to learn is the first step. Learning outcomes are not simply a list of the topics to be covered in the course. They are statements of measurable and observable knowledge, attitudes, and skills learners will achieve. At each institution, learning outcomes are defined for the entire curriculum, for a course, and for each lecture or class session. These learning outcomes are followed by assessment of learning. There is no true way to know if students achieved the intended learning outcomes without assessment.

ACPE recognizes this in Standards 2016, with an entire section devoted to assessment and a standard pertaining to assessment of educational outcomes. In Standard 24, ACPE requires implementation of a plan to assess achievement of educational outcomes through systematic, valid, and reliable knowledge-based and performance-based formative and summative assessments. This means assessment of the measurable and observable learning objectives for the curriculum, course, and lecture; course grades alone do not satisfy the requirement. ACPE specifies the inclusion of outcome data from assessments summarizing overall achievement of learning objectives by individual students and by students in aggregate, along with tools used to capture students’ reflections on personal and professional growth and development. Luckily, a growing number of educational technologies make this task easier.

E-Testing

Wallace Marsh, MBA, PhD

Electronic testing has several advantages over paper-and-pencil exams. First, it allows an instructor to ‘assess’ the assessment, determining the overall reliability and validity of the examination and of the individual questions within it. Second, where tagging of questions to content is supported, it allows for the assessment of programmatic, curricular, and individual course objectives, and the partial mapping of a curriculum. There are many electronic testing products, such as ExamSoft®, Questionmark Perception®, and an assortment of exam/quiz features within learning management systems. Two of the main challenges with tagging question content are securing faculty participation and ensuring the reliability of what is tagged. Options to address these issues include faculty training, periodic reminders to tag content, providing a guidebook for tagging content, and instituting a coding-reliability check process.
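To make “assessing the assessment” concrete, the sketch below computes two classical item statistics, difficulty (proportion correct) and point-biserial discrimination, from a scored response matrix. The data layout is an assumption for illustration; it is not ExamSoft’s or any other vendor’s actual export format or API.

    # Minimal item-analysis sketch for an electronic exam export.
    # Assumes a 0/1 scored response matrix: rows = students, columns = items.
    # (Illustrative only -- not any vendor's actual API or data format.)
    from statistics import mean, pstdev

    responses = [  # hypothetical scored data: 1 = correct, 0 = incorrect
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 0],
    ]

    totals = [sum(row) for row in responses]  # each student's total score

    for item in range(len(responses[0])):
        scores = [row[item] for row in responses]
        difficulty = mean(scores)  # proportion of students answering correctly
        # Point-biserial: correlation between item score and total score;
        # a low or negative value flags a question worth reviewing.
        sd_item, sd_total = pstdev(scores), pstdev(totals)
        if sd_item and sd_total:
            cov = mean(s * t for s, t in zip(scores, totals)) - difficulty * mean(totals)
            discrimination = cov / (sd_item * sd_total)
        else:
            discrimination = 0.0
        print(f"Item {item + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")

(In practice the discrimination index is often computed against the total score excluding the item in question; the idea is the same.)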

E-Rubrics

Margarita DiVall, PharmD, MEd

Rubrics improve the validity and reliability of ratings during performance-based assessments and evaluations of writing and project work. Best practices for rubric development should be followed to ensure evaluation criteria are tied to the learning outcomes for a given activity. E-rubrics enable reporting not only of grades but of achievement of specific competencies, and allow faculty and students to easily identify areas of strength and opportunities for improvement. Multiple platforms exist for e-rubrics, including learning management systems (e.g., Blackboard), experiential management systems (e.g., E-Value), and ExamSoft. When selecting a platform, consideration should be given to the ability to map rubrics to learning outcomes of interest, reporting options, and the ability to aggregate data across multiple assignments in a course and across many courses in the curriculum. Additional considerations include grading options such as multiple graders, self-assessment, and peer assessment.
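As a rough sketch of what such outcome mapping and aggregation look like in data terms, the snippet below rolls rubric scores up by mapped outcome. The criteria, outcome names, and scores are invented for illustration; no specific platform’s schema is implied.

    # Sketch: aggregating e-rubric scores by mapped learning outcome.
    # Rubric criteria, outcome names, and scores are hypothetical.
    from collections import defaultdict

    # Each rubric criterion carries a tag for a curricular learning outcome.
    criterion_to_outcome = {
        "identifies drug interactions": "Patient Safety",
        "communicates recommendation": "Communication",
        "cites primary literature": "Drug Information",
    }

    # Scores from one assignment: (student, criterion, points out of 4).
    scores = [
        ("student_a", "identifies drug interactions", 4),
        ("student_a", "communicates recommendation", 3),
        ("student_b", "identifies drug interactions", 2),
        ("student_b", "cites primary literature", 4),
    ]

    # Aggregating across students reports achievement per outcome rather
    # than per assignment, which is what supports curricular mapping.
    by_outcome = defaultdict(list)
    for _student, criterion, points in scores:
        by_outcome[criterion_to_outcome[criterion]].append(points)

    for outcome, pts in by_outcome.items():
        print(f"{outcome}: mean {sum(pts) / len(pts):.2f} / 4 (n={len(pts)})")

Whatever the platform, the key design decision is the same: every rubric criterion carries an outcome tag, so scores can be aggregated by outcome across assignments and courses.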

An example of e-rubric implementation in a skills lab course at Northeastern University using the ExamSoft platform was discussed. The importance of validating mapping, using assessment data to improve the curriculum, and closing the assessment loop was emphasized.

Audience Response Software (i.e., clickers)

All presenters

Audience response software allows faculty to poll students during synchronous or asynchronous instruction and can be used as a formative assessment and active learning strategy. The literature has shown that polling technologies improve student engagement, motivation, and learning outcomes. Various products are available, offering web- and cloud-based interfaces for instructors and students. When selecting a specific product, consider ease of use, cost, the types of questions available, and integration with learning management systems. The presenters discussed their experience with several polling software products.

To watch the recorded webinar, follow this link!

 

Creating an Active Learning Space

Elaine L. Demps, PhD

Director of Instructional Design and Support Services

Texas A&M University Irma Lerma Rangel College of Pharmacy  

The Texas A&M University Irma Lerma Rangel College of Pharmacy is 10 years young. Even so, from early on, we began yearning for a space more conducive to teaching small groups. That space would nicely complement our three large lecture halls and would be used for our weekly integrated pharmacotherapy rounds and recitations. This desire became a reality in 2013 for our second campus in College Station, where we planned to admit a first cohort of 30+ students in fall 2014.

We began the needs assessment by asking the faculty and students who were experiencing the integrated pharmacotherapy rounds and recitations in a large lecture hall at our first campus in Kingsville: What works? What doesn’t work? We also looked for exemplars and came across Indiana University’s collaborative learning studio, the SCALE-UP Project, and TILE. We even stumbled upon a then newly published special issue of New Directions for Teaching and Learning that was dedicated to active learning spaces.1 And we came across furniture makers who specialized in products that facilitated collaboration, e.g., Steelcase, Spectrum, and Teknion.

The pictures below show the before and after of the renovation. Each small-group station is a D-shaped table that seats seven students. We installed two 32” displays at the end of the table, four HDMI inputs for laptops, eight power plug outlets, seven data jacks, one camera, one gooseneck microphone controlled by four push-to-talk buttons, one ceiling speaker above each station, and seven headphone jacks with individual volume controls. Because our curriculum would be delivered to both campuses synchronously, each station had to be integrated into the videoconferencing system. But, by installing an audiovisual bridge, we planned to use the same equipment, e.g., the videoconference camera and microphone at a station, for web conferencing. That way, clinical faculty located anywhere in Texas could join a small group using Zoom or WebEx and facilitate that group’s work while the entire class was connected to our first campus by the Cisco TelePresence videoconferencing system. By the way, the headphone jacks were installed for that reason—to prevent cross talk when a small group was communicating with a distance facilitator.

The empty space in the center of the room was intentional. We needed this classroom to be dual purpose: to serve as a lecture-style room and as an active learning space. To serve that need, we purchased chairs equipped with work surface tablets that could be moved to the back when the chairs were placed at the stations or brought to the front when the chairs were placed in the center of the room.

We completed the renovation in fall 2015. During the 2015-16 academic year, we used the room as an active learning space only. This year, now with three cohorts at our second campus, the room is also being used for lectures. So, what has been the feedback? As an active learning space, it indeed fulfills all the items on our initial wish list. The key point I took away from that special journal issue on active learning spaces was that spaces don’t dictate behavior but do influence it. I think that’s true. This room is indeed conducive to small-group teaching and learning. However, when configured for lectures, the room is less optimal. The primary reason seems to be that the equipment and electronics were designed around the small groups located at the stations. For example, for safety reasons, we chose not to install power outlets in floor cores in the middle of the room, so students don’t have a way to charge their laptops when the room is configured in lecture-style mode. Also, using the ceiling microphones, rather than the push-to-talk microphones at the stations, yields less effective communication.

We currently don’t have an active learning space at our first campus. I hope we get to add one. That would be awesome.

Reference

1. Baepler P, Brooks DC, Walker JD, editors. Active learning spaces. New Directions for Teaching and Learning. 2014 Spring; 137:1-98. doi: 10.1002/tl.20080.

[Photo: the classroom before renovation]

[Photo: the classroom after renovation]

An electronic immediate feedback assessment technique for use in Team-Based Learning

Gary D. Theilman, Pharm.D.

University of Mississippi School of Pharmacy


A “Preventive Medicine and Public Health” course in the spring of the third professional year has been taught for the past 5 years using the Team-Based Learning (TBL) process.

[Figure: an overview of the Team-Based Learning process. The project was developed for use during the “Group Test” portion. Source: http://www.med.wright.edu/aa/facdev/tbl/Introduction]

The students begin the session by completing a short (10-question) multiple-choice Readiness Assurance test on the pre-class reading. Immediately after taking the Individual Test, students break into teams and re-take the same quiz as the Group Test. While they are still not allowed to use books or notes, they may now work together to come to a “consensus” answer for each question.

Traditionally, the students have used Immediate Feedback Assessment Technique (IF-AT) scratch-off cards during the Group Test. These cards are similar to instant lottery tickets. The test is set up so that the correct answer for each question corresponds to the location of a printed star under one of the scratch-off squares.

[Photo: an IF-AT card showing an incorrect guess for question 2]

If the star is not under the first box scratched off, the team makes a second choice. They continue to make guesses until they discover the correct answer. Each incorrect choice costs them points. Naturally, the test must be designed so that the correct answer corresponds to the pre-printed location of the star. Vendors provide different versions of the scratch-off cards so that students can’t just memorize the correct answers from previous weeks. (http://www.epsteineducation.com/home/about/)

By the end of the test, the students have found the correct answers to all the questions. The instructor collects the cards and assigns a test score based on how many boxes were scratched off.

We had used the IF-AT cards for several years.  However, we had reached a point where we had run out and would need to obtain more.

At the time, the minimum order for the cards was a pack of 500 (about $85). These 500 cards are all identical, with the stars in the same locations on each card. To prevent the students from memorizing the locations of the stars from week to week, a different card layout must be used each week. This may necessitate purchasing 4-5 different packs of 500 cards and rotating which card set is used each week. Our class had only 10 teams. For the 9 weeks of team tests, we would use only 90 cards, yet we might need to purchase 2,000-2,500 cards in order to have 4-5 different layouts available.

Another problem is trying to force the test choices to conform to the pre-printed locations of the stars on the card. For example, consider a question with a logical order of answer choices:

What is the usual value of a year of life used for insurance company estimates?

A.  $25,000

B.  $50,000

C.  $75,000

D.  $100,000

With B as the keyed answer, one must either find an item number on the card where the pre-printed star falls under “B” or rearrange the choices to match where the star is printed. A similar situation occurs with questions where the correct answer is

D.  All of the above

One must find an item on the card where the star is under “D”.   If there are no more “D is correct” items on the preprinted card, the question must be modified.

The TBL model calls for the instructor to assess which questions were giving the class trouble and spend a few minutes reviewing the concepts associated with the question (the Instructor Feedback stage).   The simplest solution would be to collect the cards from the teams and glance over them quickly to see which questions had multiple scratches.   This is not difficult with small classes, but one year we had a class with 42 teams.   “Glancing quickly” through 42 cards to try to determine which questions gave students problems takes a bit more effort.

The objectives of this project were to develop a process that would

  • Allow the instructor to design the test without having to force the answers to conform to the pre-printed IF-AT cards.
  • Avoid the need to purchase hundreds of cards that might never be used.
  • Better assess how much difficulty students had with each question in order to tailor the “Instructor Feedback” portion to student needs.

A web-based application was created that allowed students to electronically “scratch off” choices in the same way they would with a physical IF-AT card.

Before the class, the instructor would create a test using whatever questions and whatever item order was desired.   The correct answers to each question were entered into an online database.
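The logic behind the cards is simple to model. Below is a minimal sketch of how an answer key and the IF-AT-style scoring might be represented; the data structures and point values are assumptions for illustration, not the actual application.

    # Sketch of the IF-AT-style logic behind an electronic scratch-off card.
    # The answer key, point values, and data structures are illustrative
    # assumptions, not the production application's schema.

    answer_key = {1: "B", 2: "D", 3: "A"}  # question number -> correct choice

    # Points awarded by the number of "scratches" needed to find the answer,
    # mirroring the paper cards, where each incorrect choice costs points.
    points_by_attempt = {1: 4, 2: 2, 3: 1, 4: 0}

    def score_team(team_guesses: dict[int, list[str]]) -> int:
        """Score a team's test from its ordered guesses on each question."""
        total = 0
        for question, guesses in team_guesses.items():
            correct = answer_key[question]
            if correct in guesses:
                attempt = guesses.index(correct) + 1  # 1 = first scratch
                total += points_by_attempt[attempt]
        return total

    # A team that got Q1 and Q3 on the first try but needed three
    # scratches on Q2 earns 4 + 1 + 4 = 9 points.
    print(score_team({1: ["B"], 2: ["A", "C", "D"], 3: ["A"]}))

Because every guess is recorded, the same data answers the Instructor Feedback question directly: counting the teams that needed more than one scratch on a question replaces flipping through stacks of cards.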


Another webpage was written to generate slips of paper with a shortened URL and a QR code.   These slips of paper were distributed to the student teams at the beginning of the team test.
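Generating such slips takes only a few lines; here is a sketch using the open-source Python qrcode package. The base URL and per-team paths are made-up placeholders, not the course’s actual shortened URLs.

    # Sketch: generate a QR-code slip image for each team's quiz URL.
    # Requires the open-source "qrcode" package (pip install qrcode[pil]).
    # The base URL and team count below are hypothetical placeholders.
    import qrcode

    BASE_URL = "https://example.edu/ifat"  # stand-in for the shortened URL

    for team in range(1, 11):  # e.g., 10 teams
        url = f"{BASE_URL}/{team}"
        img = qrcode.make(url)              # render the QR code
        img.save(f"team_{team:02d}.png")    # one image per paper slip
        print(f"Team {team}: {url}")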

The students could either type the shortened URL or scan the QR code to open a webpage that displayed the scratch-off card.

The page was designed to display correctly on laptop screens, tablets, and smartphones. Tapping (or clicking) on a choice would display an animation of the choice “turning over” to reveal either an “X” or a checkmark on the other side. Swiping (or clicking) the edge of the screen would move from question to question.

[Photo: students discussing which answer to choose during the “team test”]

The team-based learning model lends itself well to the use of lecture halls. Students are assigned to sit with their teammates; during the team portions of the class, the students merely turn in their seats to interact with each other. The animation of the choice “turning over” was set to be slow enough that there is a bit of suspense as the student waited to see if they got a “check” or an “X”.

[Photo: another team accessing the scratch-off card using a cell phone. The quiz questions are on the pink pages.]