Research: Crowd-sourcing and crowd-vetting learning content

Our Chief Learning Officer, Dr. Irene T. Boland, was the primary author of this research paper. Dr. Boland designed the research, informed and advised the software development, conducted the data collection, analyzed the results, and presented the findings through research papers and presentations. Here is the content as published in the Proceedings of the 4th Annual GIFT Users Symposium (GIFTSym4).

Introduction

The US Army trains and educates over a half million individuals per year in a course-based, throughput-oriented system. Much of the Army’s web-based instruction is in the form of static PowerPoint presentations, with little tailoring to individual soldier needs. With the ever-changing landscape of full spectrum operations, today’s soldiers are facing ill-structured problems and have little time for the ideal levels of reflection and repetition needed to promote critical thinking, adaptability, and mastery of complex skills. Additionally, the current time frame for updating courses (3 to 5 years) does not support the modern Army’s fast-paced learning needs.


In pursuit of more powerful training tools, the US Army Research Laboratory (ARL) has sponsored research resulting in the Generalized Intelligent Framework for Tutoring (GIFT; Sottilare, Brawner, Goldberg & Holden, 2012; Sottilare, Holden, Goldberg & Brawner, 2013), an open-source architecture to lower the skills and time needed to author, deliver, and evaluate adaptive instruction. To enhance the content authoring and management capabilities of GIFT and other instructional frameworks, ARL has sponsored research into a Social Media Framework (SMF) that enables organizations to crowd-source and crowd-vet new learning content and improvements to existing courses. The research questions we seek to answer concern the extent to which the SMF and GIFT can: (a) promote critical thinking, collaboration, adaptability, effective communication, and problem solving; (b) help close the gap between formal training and operational application of the training to missions in the field; (c) reduce the time required to locate and use learning resources; (d) reduce the time required to incorporate feedback from the field into formal instruction; and (e) reduce instructor workload, while maximizing the efficacy of the instructor’s time.

Background: Social Media Framework

Previously, we investigated a research-based suite of affordances that support the sharing and vetting of information amongst peers. The objectives of the project were to: identify lessons learned from commercial, academic, and US Government applications of social media to knowledge management and learning; and consider the unique requirements and constraints of the military learning environment and how successful commercial and academic models for learning can be adapted to military applications.

Current Research

Research Objectives

At a high level, our research aims to investigate the extent to which SMF integrated with GIFT can do the following:

  • Promote critical thinking, collaboration, adaptability, effective communication, and problem solving within adaptive instruction
  • Help close the gap between formal training and operational application of the training to missions in the field
  • Reduce the time required to locate and use learning content and resources
  • Reduce the time required to incorporate feedback from the field into formal instruction
  • Reduce instructor workload, while maximizing the efficacy of the instructor’s time

Experimental Methodology

This research project has followed a sequence of overlapping/spiral events, including a literature review (ensuring that our proposed research furthers the body of knowledge), an experiential review (hands-on examination of existing tools to ensure that the affordances we test are extending the state of the art), test bed development (creating the suite of affordances to enable testing of our research hypotheses), and quantitative and qualitative research (testing our hypotheses and soliciting feedback from participants).

Test Bed Architecture

Prior to the creation of GIFT Cloud, we expanded the SMF to provide a cloud-based, “headless” instance of GIFT, allowing multiple users to connect to GIFT across the internet (Figure 1). In this configuration, we run server-only instances of GIFT, the Nuxeo content management system (CMS), and ActiveMQ, allowing us to provide an entire GIFT instance to multiple users without the need for dedicated desktop systems.
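
In this configuration, GIFT's distributed modules exchange messages through the server-hosted ActiveMQ broker. As a minimal sketch of what connecting to such a broker can look like, the Python snippet below publishes a message over STOMP (a wire protocol ActiveMQ supports alongside JMS) using the stomp.py library; the host name, credentials, and queue name are placeholders, not GIFT's actual message destinations.

    import json
    import stomp  # pip install stomp.py

    # Connect to the server-hosted ActiveMQ broker; 61613 is ActiveMQ's
    # default STOMP port. Host and credentials are placeholders.
    conn = stomp.Connection([("gift-server.example.org", 61613)])
    conn.connect("guest", "guest", wait=True)

    # Publish a message to a hypothetical queue, as a remote client might
    # when interacting with the headless GIFT instance.
    conn.send(
        destination="/queue/gift.example",
        body=json.dumps({"user": "learner01", "event": "course_started"}),
    )
    conn.disconnect()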

GIFT was also extended to include a gateway interoperability module that allows connection to a web-based course player. The course player, suitable for expansion to mobile devices, plays course content that automatically generates experience application programming interface (xAPI) statements for tracking the learner’s interactions (Advanced Distributed Learning, 2013). The course content is stored in the Nuxeo CMS, which provides revision control mechanisms. An SMF-based front-end allows for simplified course creation and management, adding the ability to author an entire web-based course. Using Nuxeo in this way allows us to leverage the GIFT toolset, which also uses Nuxeo, to tie the two systems together so that they can share learning assets and access controls. Through the gateway interoperability module, the course player communicates with the GIFT Engine for Management of Adaptive Pedagogy (eMAP), allowing adaptivity within the course driven by GIFT’s advanced adaptive capabilities. The web-based course player includes the ability for courses to collect social media feedback on granular aspects of the course (e.g., paragraphs of text, images, videos).
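
To make the tracking concrete, the sketch below shows the shape of a minimal xAPI statement (actor, verb, object) of the kind the course player generates, posted to a Learning Record Store over HTTP. The verb IRI and the X-Experience-API-Version header come from the ADL specification; the learner, lesson IRI, endpoint, and credentials are hypothetical.

    import json
    import urllib.request

    # Minimal xAPI statement: who (actor) did what (verb) to which content (object).
    statement = {
        "actor": {"objectType": "Agent", "name": "Jane Learner",
                  "mbox": "mailto:jane.learner@example.mil"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        "object": {
            "id": "http://example.org/courses/land-nav/lesson-2",  # hypothetical lesson IRI
            "definition": {"name": {"en-US": "Land Navigation, Lesson 2"}},
        },
    }

    # POST the statement to a Learning Record Store (endpoint and auth are placeholders).
    req = urllib.request.Request(
        "https://lrs.example.org/xapi/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": "Basic dXNlcjpwYXNz",  # placeholder credentials
        },
    )
    urllib.request.urlopen(req)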

Using annotation-style commenting, the feedback is collected and stored within the SMF for crowd comment and review after the course is completed. In addition, the GIFT user interface (UI) has been modified to allow other GIFT transitions (surveys, learning materials, after action reviews) to collect feedback in a similar manner. This feedback is also made available within the SMF for crowd comment and interaction.
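
As an illustration of the kind of record this annotation-style feedback implies, the sketch below models one comment anchored to a single content element and groups comments into the per-element threads reviewed after course completion; the field names are hypothetical, not the SMF's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ContentAnnotation:
        """One piece of granular learner feedback, anchored to a course element."""
        course_id: str   # course stored in the Nuxeo CMS
        element_id: str  # the paragraph, image, or video the comment is attached to
        author: str
        comment: str
        created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def group_by_element(annotations):
        """Group annotations into per-element discussion threads for crowd review."""
        threads = {}
        for a in annotations:
            threads.setdefault(a.element_id, []).append(a)
        return threads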

Experimental Research

Our research in social media-enabled learning and knowledge management includes three major phases, each with a data collection. In 2015, Data Collection 1 focused on instructional systems designers (ISDs) and subject-matter experts (SMEs) using a learning content management system (LCMS) to enter content and build a course. Data Collection 2, conducted in summer 2015, involved learners taking the course and providing granular feedback about how they think the course can be improved as well as using social media tools to discuss the feedback of others. In Data Collection 3 (Spring/Summer 2016), the ISDs and SMEs will review the feedback from learners and decide what improvements they will make to the course. They will then be able to use the SMF to update and republish the course based on the learner feedback.

This three-part research demonstrates the speed with which experts in the field and fleet could provide real-world feedback, and the speed with which the schoolhouse could then incorporate that feedback into the official course. This addresses key goals within the Army Learning Model (ALM; TRADOC, 2011), which seeks, among many other goals, to incorporate the ever-evolving knowledge from the field into official training as quickly as possible.

Data Collection 1 Procedure

At the time of this data collection, GIFT ran as a desktop application. Expanding on the existing SMF, a cloud-based, “headless” instance of ARL's GIFT platform was created, which allowed GIFT to run independently of a specific workstation. Using this configuration, we deployed the GIFT Survey Authoring System (SAS) and GIFT Course Authoring Tools (CAT) through our Apache Tomcat web application server. Using nginx to serve the existing SMF and to proxy the GIFT instance on the same server gave the participants the experience of a seamless, consolidated system with Single Sign-On (SSO) for each subsystem. The experimental test bed was hosted on a dedicated server off site from the research location. Each participant received login credentials and used a separate workstation in their lab to access the test bed through the internet from a standard browser.

The researchers guided participants through standard tasks involved in creating learning content. The participants were encouraged to comment on the experience and to compare and contrast it with the tools and processes they typically use as ISDs and SMEs. The session was videotaped to allow for detailed analysis afterward. The researchers described the system to the participants as an experimental learning content authoring system for the Army, with the long-term goal of growing it into a powerful tool useful to them (and other users) in creating adaptive learning experiences that are easy to update. The researchers also noted that formative feedback at this early stage would help guide development in the direction most useful to users. The data collection experience was designed to simulate collaborative course creation: each participant was asked to create a different scenario, and the participants then worked together to combine their scenarios into a complete course.

Data Collection 1 Results

Each of our recommendations is grounded in the time-tested, research-proven principles of the UI and user experience (UX) professions. Our recommendations are meant to help move GIFT closer to its goal of being useful to SMEs who want to author effective courses on their own. The Nielsen Norman Group of UI/UX professionals defines useful as the result of utility and usability (Nielsen, 2016). Utility speaks to the extent to which the system has the features the user wants and needs. Usability can be described by five criteria: (1) the system is easy to learn to use; (2) the user can complete tasks quickly; (3) the user can remember how to use it after being away from it for a while; (4) the errors the user makes are few and easily rectified; and (5) the system is enjoyable to use.

Recommendation 1: Sell the utility, immediately.

Users found that the system contained a large number of steps compared to other systems they had used to build adaptive training or surveys. Some of those steps were unclear in meaning or purpose. The naming conventions used are not consistent with what SMEs would name the features, buttons, and other controls. As a result, users expended a great deal of mental effort (a cognitive toll) to work in the system. Although the researchers explained the long-term purpose of the system (to create adaptive training suited to each individual), the perceived benefits of the system were not sufficient to motivate the users to want to continue using the system in its current state. For all of these reasons, we recommend an early intervention of “selling the utility” – making the benefits of the system so clear that new users will be motivated to expend the needed effort to understand and master the system.

We recommend the system provide a short but impactful explainer video that helps users understand the system and what’s in it for them. Specific questions that should be answered include: (a) What is Adaptive Learning?; (b) Why should I use Adaptive Learning with my learners?; (c) What is GIFT, and why is it better than my other options?; (d) How have others similar to me used it (compelling real success stories/visuals)?; and (e) How do I use GIFT to create Adaptive Learning?

The military has a long-standing tradition of rigorous ISD, which follows the standard ADDIE model (analysis, design, development, implementation, evaluation) of activities. We can reasonably expect an SME to have extensive knowledge of the content being taught. Based on their experience, they may also bring knowledge of the audience (having been a trainer) and of the related organizational goals that led to the SME being asked to share their knowledge. However, most SMEs have significant knowledge gaps in ISD. To achieve the long-term goal of an independent SME creating effective training, the system must provide the education and support the SME needs.

Recommendation 2: Use the process and vocabulary native to the SME.

The current process flow and vocabulary used in the system are not reflective of how most SMEs think or work. As a result, SMEs expend significant mental effort simply trying to understand the system rather than feeling the reinforcement of accomplishing their goals. To illustrate both of these concepts, we examined a short process – adding a question to an assessment – comparing how SMEs typically do it with how SMEs attempt to do it in GIFT.

For this very short sub-process of the larger course creation process, we can compare the GIFT experience versus the typical SME experience using the scorecard shown in the Table below. 

Recommendation 3: Incorporate extensive yet lean, on-demand contextual support for SMEs.

We recommend two approaches to providing support. First, provide SMEs some fast and simple support when they first arrive. This help should display automatically the first time the user encounters a screen; afterward, it should be available for the user to display on demand. Second, offer mouseover-based help for each control, vocabulary term, or other element that the SME might not be familiar with. The example in Figure 3 shows a vocabulary improvement – changing the word “Transition” to “Content” – along with a mouseover that explains what the particular types of content are and alerts the user if they will need to use another part of the system to create that content before trying to use it here.

Data Collection 2 Procedure

For the second data collection, the SMF was expanded to include course topics and actual course materials, accessible from the “training” tab. Once launched, the course was played through the GIFT framework. In GIFT, a course is a series of transitions, which might include surveys, learning materials, and training applications. To enable a training application to play lessons composed of web-based content, we implemented a new gateway interoperability module. Unlike standard web-based lessons, however, any element of the content can be selected and commented upon.

Showing those comments in close proximity to the lesson content could negatively impact the flow of the course for future learners; so instead, the comments automatically appear as a new conversation thread under the feedback tab (Figure 5) of the surrounding topic page for this course. We added similar social media commenting capability to other GIFT transitions including surveys and learning materials.


The course material was developed by Vcom3D for specific use in the experimental research and reviewed by ISDs for relevance to the target student participants. The content was then prepared for playback with the web-based training application and other GIFT transitions. As part of course development, we created two paths through the course – one for novice learners and one for people more familiar with the material. Based on pre-test scores (using GIFT’s survey engine), learners were presented with content matching their level of knowledge. This allowed us to make use of GIFT’s adaptivity in a simple way while highlighting the potential of the tools. In addition, a pre-test survey was used to collect demographic data, which was used to present a different look and feel based on the learner’s branch of military service.
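
GIFT's eMAP drives the actual branching; the sketch below only illustrates the underlying decision logic, with a hypothetical 70% pre-test cutoff and hypothetical theme names for the service-branch look and feel.

    NOVICE_PATH = "novice"            # full treatment of the material
    EXPERIENCED_PATH = "experienced"  # abbreviated path for familiar learners

    def select_path(pretest_score: float, cutoff: float = 0.70) -> str:
        """Route the learner to the content path matching their prior knowledge."""
        return EXPERIENCED_PATH if pretest_score >= cutoff else NOVICE_PATH

    def select_theme(branch_of_service: str) -> str:
        """Pick a course look and feel from the demographic survey response."""
        themes = {"Army": "army-theme", "Navy": "navy-theme",
                  "Air Force": "af-theme", "Marine Corps": "usmc-theme"}
        return themes.get(branch_of_service, "default-theme")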

Data Collection 2 Results

The data collection involved 73 students taking an online course through our modified GIFT instance, providing granular feedback on the course content, and commenting on the feedback of other students through the SMF. During the data collection event, multiple sessions of approximately 20 student participants accessed the experimental test bed from workstations in their lab through the internet, using a standard browser and credentials provided by the researchers. Participants were asked to navigate to a particular topic and take the course associated with that topic. Participants were encouraged to generate questions or feedback on any content they encountered. After completing the course, participants reviewed their comments on the topic page and also saw the comments of other participants. They were able to upvote and downvote the questions, answers, and feedback generated by others, as well as contribute to the discussions. Participants in subsequent sessions reviewed the accumulated contributions of all preceding participants. At the end of each session, the participants completed a survey to provide feedback on their experience.

During the sessions, we received hundreds of original and follow-on comments from participants. Analysis of results showed that learners do experience problems with learning content in general; they liked the personalized look of their course (based on their service branch); and they felt the challenge level of the adaptive content was appropriate. They found the ability to comment within the course and within the SMF to be intuitive and easy.

Data Collection 3 Procedure

The third phase of research will explore techniques and algorithms for analyzing the user-generated content, surfacing the most relevant comments and activity, and relaying them to the most relevant stakeholder. For this data collection with content authors and content owners, the user management section of the SMF will evolve to display a "User Digest" specific to each user and their role in the system. An Activity section will highlight the latest contributions by the user. Back-end data analytics will look at factors such as upvotes, downvotes, and general activity to prioritize the feedback most relevant to each user, with the goal of highlighting trending and actionable issues pertaining to course content owned by that user. Participants will then evaluate the efficacy of the system in surfacing errors, identifying gaps, suggesting content, and reducing ISD workload.

After reviewing student feedback, participants will be encouraged to use the updated SMF-based tools to update and republish the course content, with a goal of determining the effectiveness of rapidly turning learner feedback into actionable content updates and making those course improvements immediately available to new learners.
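
As a sketch of the kind of prioritization such back-end analytics could perform, the snippet below scores feedback by net votes plus weighted discussion activity and selects the top items for one content owner's digest; the scoring weights and field names are hypothetical, not the algorithm under evaluation in this phase.

    from dataclasses import dataclass

    @dataclass
    class FeedbackItem:
        comment_id: str
        owner: str      # content owner responsible for the commented course element
        upvotes: int
        downvotes: int
        replies: int    # follow-on discussion activity

    def priority(item: FeedbackItem, reply_weight: float = 0.5) -> float:
        """Score feedback: net votes capture crowd agreement, replies capture
        how much discussion the item generated; the weight is a tuning knob."""
        return (item.upvotes - item.downvotes) + reply_weight * item.replies

    def user_digest(items, user, limit=10):
        """Top feedback for one content owner, as a 'User Digest' might list it."""
        mine = [i for i in items if i.owner == user]
        return sorted(mine, key=priority, reverse=True)[:limit]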

Conclusion and Recommendations for Future Research

At the end of the third phase of the current research, we will have investigated the efficacy of crowd-sourced and crowd-vetted content for applying field knowledge to improve learning content, while reducing instructor workload and turn-around time. We have also identified the need to make the user experience more intuitive to its intended end-users (SMEs). Beyond this, we believe that social media can provide additional benefits to the learning environment, and to GIFT in particular, and at the end of the current research we will make recommendations for these additional means of applying social media to the integrated learning environment. Areas of research that could be explored include: (1) harnessing crowd inputs into the creation and refinement of a domain model, or the body of knowledge for a topic; (2) mining social media data to enhance an individual’s learner profile (or personal history of learning, demographics, and achievements); and (3) developing the user experience to be immediately intuitive to its intended end-users (subject-matter experts in the field).

References

Advanced Distributed Learning (2013). Experience API: Research Summary. Retrieved May 13, 2013 from http://www.adlnet.gov/tla/experience-api

Nielsen, J. (2016). Usability 101: Introduction to Usability. Retrieved March 15, 2016 from http://www.nngroup.com/articles/usability-101-introduction-to-usability/

Sottilare, R.A., Brawner, K.W., Goldberg, B.S. & Holden, H.K. (2012). The Generalized Intelligent Framework for Tutoring (GIFT). Concept paper released as part of GIFT software documentation. Orlando, FL: U.S. Army Research Laboratory – Human Research & Engineering Directorate (ARL-HRED). Retrieved from: https://gifttutoring.org/attachments/152/GIFTDescription_0.pdf

Sottilare, R., Holden, H., Goldberg, B., & Brawner, K. (2013). The Generalized Intelligent Framework for Tutoring (GIFT). In Best, C., Galanis, G., Kerry, J. and Sottilare, R. (Eds.) Fundamental Issues in Defense Simulation & Training. Ashgate Publishing.

US Army TRADOC (2011). The US Army Learning Concept for 2015. Retrieved November 15, 2012 from http://www.tradoc.army.mil/tpubs/pams/tp525-8-2.pdf.

About the Authors

Mr. Rodney Long is a science and technology manager at ARL, Human Research and Engineering Directorate, Advanced Training and Simulation Division in Orlando, FL. He has a wide range of simulation and training experience that spans 28 years in the Department of Defense (DOD) and has a Bachelor’s degree in computer engineering from the University of South Carolina and a Master’s degree in industrial engineering from the University of Central Florida.

Dr. Irene Boland is the Chief Learning Officer of the Learning Development Institute, Inc. Previously, she was the Learning Development Director at Vcom3D. She has over 12 years of hands-on experience in applying emerging education science to real-world issues faced by enterprise organizations. She leverages her PhD in education in combination with proven social media technology to create innovative methods for accelerated learning and knowledge transfer. Her expertise enables clients to solve performance issues, maximize use of resources, and improve profitability.

Mr. Doug Raum is a software developer at Vcom3D, an Orlando-based company that develops innovative mobile, Web, and game-based learning. He has over 19 years in the information technology field in a variety of positions, with 12 years of software development and a focus on enterprise web application development, as well as a further background in systems, security, and infrastructure design/administration.

Mr. Dan Silverglate is Director of Software and Graphics at Vcom3D. He has over 15 years of experience in digital media production and software development. At Vcom3D, he leads the development of serious games for education and training featuring lifelike virtual humans. He holds a Bachelor of Arts in film studies from the University of Florida, where he graduated with high honors and was inducted into the Phi Beta Kappa honor society, and a Bachelor of Science in computer science from the University of Central Florida, where he graduated magna cum laude.

Dr. Ed Sims is Chief Technology Officer of Vcom3D. Since 1997, his work has focused on the application of web and game-based technology for simulation, training, and education. Prior to co-founding Vcom3D, he was Technical Director for Lockheed Martin Information Systems Company. He holds a Bachelor’s degree in mathematics from the College of William and Mary and Master of Science and PhD degrees in systems engineering from Rensselaer Polytechnic Institute. He has been awarded five patents in the areas of real-time modeling and simulation of human behavior.