Maritime Jobs

Training Tips for Ships: Tip #5

October 18, 2019

Credits: Photo by J. Kelly Brito on Unsplash


The Simple Secret to Making Randomized Exams Fair

In last month’s Training Tips for Ships, we made the important point that we must never give different people the same exam. If we use an exam over and over, our trainees will very quickly learn what questions are on the exam and share the answers with their friends. Suddenly exam scores begin going up, and time spent learning goes down – both for the wrong reasons. There are few better ways to destroy a training program.

The remedy is to always provide different exams. This would be a lot of work, were it not for the exam randomization function found in most web-based Learning Management Systems (LMSs). Randomization in an LMS ensures that no two people get the same exam. But as we wrote last month, this may raise the concern that we lose consistency in our assessment practices. After all, it is critical that we measure everyone by the same standard. Changing the exams creates inconsistency, correct? Well, no – not if it is done correctly. Here’s how.
Building Consistent Randomized Exams

The two primary exam variables that we want to deliver consistently are 1) the difficulty of the questions, and 2) the competencies or knowledge covered in the questions. Thus, even when exams differ, we want to ensure that they all cover the same material to the same degree, at the same level of difficulty.

In an LMS, all questions are organized into “question categories”. The key to ensuring consistency across these two variables is careful organization of those categories. Each question category should be set up to contain a pool of questions that all cover the same competency at the same level of difficulty. Once this is in place, the exam can be defined to always select a specific number of questions from each category, making every instance of the exam consistent.

A Simple Example

Let’s consider a PPE exam. For simplicity, let’s say the exam covers three competencies: the protective equipment needed for firefighting, the equipment needed for welding, and how to don basic protective equipment (clearly this is an incomplete exam, but it is useful for illustration). Let’s also say that we want to ask questions at two levels of difficulty to separate those who understand at a basic level from those who have mastery of the subjects.

Per our rules above, we will now create six question categories – each covering one competency at one level of difficulty. Thus we will have one category with questions all covering firefighting PPE at a basic level, one covering firefighting PPE at an advanced level, one covering welding PPE at a basic level, one covering it at an advanced level, and so on.

Now, we configure the LMS so that each time it generates an exam, it draws (for example) 5 welding PPE questions at a basic level, 3 at an advanced level, 4 firefighting PPE questions at a basic level, and so on – depending on the emphasis we want to place on the exam topics and how difficult we want the exam to be. From this point on, each time the LMS delivers our PPE exam, it will contain a different set of questions, but will always have the same number of questions for each tested competency and be of roughly the same overall difficulty. Now we have randomized exams to help solve the cheating problem but have not sacrificed fair and consistent assessment.
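To make the selection rule concrete, here is a minimal Python sketch of the technique described above. The pool names, question identifiers, and draw counts are invented for illustration (a real LMS performs this internally via its randomization settings). It draws a fixed number of questions from each category pool, so every generated exam has the same per-competency coverage and difficulty mix even though the individual questions differ:

```python
import random

# Hypothetical question pools: one per (competency, difficulty) category.
# Every question in a pool covers the same competency at the same difficulty,
# so any question drawn from a pool is interchangeable with its pool-mates.
question_categories = {
    ("welding_ppe", "basic"):         [f"W-B{i}" for i in range(1, 9)],
    ("welding_ppe", "advanced"):      [f"W-A{i}" for i in range(1, 6)],
    ("firefighting_ppe", "basic"):    [f"FF-B{i}" for i in range(1, 8)],
    ("firefighting_ppe", "advanced"): [f"FF-A{i}" for i in range(1, 5)],
    ("donning_ppe", "basic"):         [f"D-B{i}" for i in range(1, 6)],
    ("donning_ppe", "advanced"):      [f"D-A{i}" for i in range(1, 4)],
}

# Exam blueprint: how many questions to draw from each category.
# Mirrors the example in the text: 5 basic + 3 advanced welding questions,
# 4 basic firefighting questions, and so on.
blueprint = {
    ("welding_ppe", "basic"): 5,
    ("welding_ppe", "advanced"): 3,
    ("firefighting_ppe", "basic"): 4,
    ("firefighting_ppe", "advanced"): 2,
    ("donning_ppe", "basic"): 3,
    ("donning_ppe", "advanced"): 1,
}

def generate_exam(pools, blueprint, rng=random):
    """Randomly draw the blueprinted number of questions from each pool."""
    exam = []
    for category, count in blueprint.items():
        exam.extend(rng.sample(pools[category], count))
    rng.shuffle(exam)  # mix categories so question order gives nothing away
    return exam

exam_a = generate_exam(question_categories, blueprint)
exam_b = generate_exam(question_categories, blueprint)
# exam_a and exam_b usually contain different questions, but both always have
# the same length and the same per-category coverage and difficulty mix.
```

The blueprint, not the randomness, is what carries the assessment standard: two exam instances are “the same exam” in every way that matters for fairness, differing only in which interchangeable questions were drawn.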

Murray Goldberg is CEO of Marine Learning Systems, which provides software and services to optimize knowledge, skills and behavior in maritime operators. In his former life he was a computer science faculty member at the University of BC, researching online learning and assessment delivery models and their effectiveness. This led him to develop WebCT, a learning management system that was used by 14 million students in 80 countries. He contributes a regular column to the pages of Maritime Reporter & Engineering News. Contact him by email at Murray@MarineLS.com.

MarTID 2019 Report
The 2nd annual report from the MarTID survey initiative, which studies global maritime training practices, investment and thought, is now available. The surveys draw on insights from shipowners and operators, maritime education and training institutions, and seafarers. The report is available free for global distribution at http://digitalmagazines.marinelink.com/NWM/Others/MarTID2019a/html5forpc.html

