Monday, January 14, 2013

Adventures in HPT: Project 1: Phase 1: Part 2: Data Collection


In a previous post I introduced the Front-End Analysis project I'm currently working on for The KeelWorks Foundation. Once our project alignment was complete, it was time to start looking into the problem. Recall that our problem statement was that volunteers were not producing e-learning courses, a critical success factor for the organization.

Our next task was to get a better understanding of the problem. To do this we created a Data Collection plan:

1) Interview current volunteers (as many as needed, until we stop hearing anything new)
2) Interview Directors (to get an idea of potential issues from their perspective)
3) Survey current and former volunteers (to validate our hypotheses)

Constructing Hypotheses

We created a set of open-ended questions for our interviews, with the intent of asking probing follow-up questions when necessary. We started with background-type questions to open communication. We asked questions like:

"Tell me about your background with Instructional Design."

"What volunteer work have you done in the past?"

The first question was also asked to get a better idea of their knowledge and skills. The second question was asked to get an idea of their expectations of a volunteer organization. Was KeelWorks asking them to do too much? Too little? What were their other volunteer experiences like?

Next, we started asking about KeelWorks:

"What attracted you to KeelWorks?

"What were your expectations when you started volunteering with KeelWorks?"

We wanted to know why they were here and what they wanted to get out of the experience. Then we began asking what success and failure would look like to them, and what they would consider a successful experience.

We asked them about their experience with virtual work. We wanted to determine if working completely virtually was acting as a barrier to performance.

Finally, we asked what they were actually experiencing. We wanted to know what day-to-day life was like for them. Were their experiences meeting their idea of success? Were there any issues/problems? Was anything getting in their way, or making work difficult?

Over several interviews we began to get a clearer picture of what was happening. We then began interviewing Directors (we did it in this order because of availability; it would have been easier to start from the top). We wanted to know whether they were aware of any performance issues. We took information from those interviews and compared it with what we learned from the volunteers (data triangulation).

We then used Gilbert's Behavioral Engineering Model (BEM) as a frame, and used our survey to test our hypotheses.
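For anyone unfamiliar with the BEM, it sorts performance factors into six cells: three on the environment side (data/information, instruments/resources, incentives) and three on the individual side (knowledge/skills, capacity, motives). The little Python sketch below is just an illustration of how findings and hypotheses can be kept organized under that frame; the cell names are Gilbert's, but the code itself is mine and not a KeelWorks deliverable.

# Gilbert's Behavioral Engineering Model: six cells of performance factors.
# The cell names are standard; tagging findings this way is just an
# illustrative bookkeeping device.
BEM_CELLS = {
    "environment": [
        "data_information",       # expectations, guidelines, feedback
        "instruments_resources",  # tools, templates, processes
        "incentives",             # recognition, consequences
    ],
    "individual": [
        "knowledge_skills",       # training, experience
        "capacity",               # ability, time, fit with the work
        "motives",                # willingness, alignment with goals
    ],
}

def tag_finding(cell, note):
    """Attach an interview finding or hypothesis to one BEM cell."""
    valid = [c for group in BEM_CELLS.values() for c in group]
    if cell not in valid:
        raise ValueError("unknown BEM cell: " + cell)
    return {"cell": cell, "note": note}

# Hypothetical example of use:
# tag_finding("data_information", "Volunteers report unclear performance expectations.")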


It's easy for managers to blame performance problems on their workers, claim the workers don't know what they are doing... and then request training to solve the problem. Thankfully, KeelWorks is not taking that approach, recognizing that performance problems are usually more systemic in nature. As you can see from our hypotheses, this problem may require an ongoing performance improvement solution rather than a one-time performance improvement event.

Our last step was to draft a survey. Our target audience was current and former Instructional Designers and Project Managers. Because this was a one-time shot, we asked questions related to every hypothesis listed above. We also asked questions to determine whether we had missed anything, especially since we were tapping into former volunteers as well.

Essentially we wanted to know:

Are the performance criteria, guidelines, feedback, and processes clear?

Are the tools easy to use?

Are the benefits of performance satisfactory?

Are there any benefits or consequences for non-performance?

Do volunteers have the knowledge and skills necessary to perform this job?

What's going well and what needs to be improved?

We used five-point Likert scales for these questions, as well as an open-ended comment box for each area of the BEM.
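To make that structure concrete, here is a rough sketch of how a survey blueprint like ours could be organized: a few five-point Likert items per BEM area, plus an open comment box at the end of each area. The item wording below is paraphrased from the questions listed above, not the exact survey text.

# Illustrative survey blueprint: Likert items grouped by BEM area, plus an
# open-ended comment box per area. Wording is paraphrased, not the real survey.
LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

SURVEY_BLUEPRINT = {
    "data_information": [
        "Performance criteria, guidelines, and feedback are clear.",
    ],
    "instruments_resources": [
        "The tools I am asked to use are easy to use.",
        "The processes I am asked to follow are clear.",
    ],
    "incentives": [
        "The benefits of performing well are satisfactory.",
        "There are consequences for non-performance.",
    ],
    "knowledge_skills": [
        "I have the knowledge and skills necessary to do this job.",
    ],
}

def render_area(area):
    """Return the prompts for one BEM area: Likert items, then a comment box."""
    prompts = ["%s (%s)" % (item, " / ".join(LIKERT_5)) for item in SURVEY_BLUEPRINT[area]]
    prompts.append("What's going well, and what needs to be improved? (open comment)")
    return prompts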

Final Thoughts

Our survey ran for two weeks. We marketed it by sending reminder emails on days 3, 5, 8, and 11, and by posting messages on the organization's LinkedIn page. Our target population totaled 100: approximately 30 current and 70 former volunteers. We received 34 responses, which was about what we expected considering that the majority were no longer with the organization.

When the population is 100 or fewer, you need a high proportion of responses to reach the usual 95% confidence level with a margin of error of plus or minus 5%. What this means is that our results carry a wider margin of error. Ideally we would extend the survey and try to push our numbers up. However, our time is up, and we can't keep sending reminder emails to people who are no longer with the organization. We have to move on and explain our data to stakeholders so they can make an informed decision.
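For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope calculation. It uses the standard margin-of-error formula for a proportion with a finite population correction, assuming the most conservative p = 0.5; the N = 100 and n = 34 figures are ours, but the exact percentages are simply the output of this textbook formula, not something from our report.

import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    fpc = math.sqrt((N - n) / (N - 1.0))   # finite population correction
    return z * se * fpc

def required_sample(N, moe=0.05, p=0.5, z=1.96):
    """Responses needed for a given margin of error, adjusted for a small population."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)   # infinite-population sample size (~384)
    return int(math.ceil(n0 / (1 + (n0 - 1) / N)))

print("MOE with 34 of 100 responding: +/- %.1f%%" % (100 * margin_of_error(34, 100)))  # ~13.7%
print("Responses needed for +/- 5%%: %d of 100" % required_sample(100))                # ~80

In other words, even though 34 of 100 sounds reasonable, the effective margin of error works out closer to plus or minus 14% than the conventional 5%, and we would have needed roughly 80 responses to hit that conventional target.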

The results of our survey validated nearly every hypothesis, meaning most current and former volunteers had experiences similar to those of the people we interviewed. Because we triangulated this data across current and former Project Managers, Instructional Designers, Directors, and the Chief Executive Officer, and because, to our benefit, a large number of current volunteers responded (n = 25), we feel confident moving forward with the project. Our next step will be to make recommendations to improve performance.
