Monday, January 14, 2013

Adventures in HPT: Project 1: Phase 1: Part 2: Data Collection


In a previous post I introduced a Front-End Analysis project I'm working on for The KeelWorks Foundation. Once our project alignment was complete, it was time to begin looking into the problem. Recall that our problem statement was that volunteers were not producing eLearning courses, a critical success factor for the organization.

Our next task was to get a better understanding of the problem. To do this we created a Data Collection plan:

1) Interview current volunteers (As many as we need until we've heard it all)
2) Interview Directors (To get an idea of potential issues from their perspective)
3) Survey current and former volunteers (To validate our hypotheses)

Constructing Hypotheses

We created a set of open-ended questions for our interviews with the intent that we would ask probing questions when necessary. We started with background-type questions to open communication. We asked questions like:

"Tell me about your background with Instructional Design."

"What volunteer work have you done in the past?"

The first question was also asked to get a better idea of their knowledge and skills. The second question was asked to get an idea of their expectations of a volunteer organization. Was KeelWorks asking them to do too much? Too little? What were their other volunteer experiences like?

Next, we started asking about KeelWorks:

"What attracted you to KeelWorks?

"What were your expectations when you started volunteering with KeelWorks?"

We wanted to know why they were here and what they wanted to get out of the experience. Then we began asking about their idea of what success and failure would look like. We wanted to know what they would consider a successful experience.

We asked them about their experience with virtual work. We wanted to determine if working completely virtually was acting as a barrier to performance.

Finally, we asked what they were actually experiencing. We wanted to know what day-to-day life was like for them. Were their experiences meeting their idea of success? Were there any issues/problems? Was anything getting in their way, or making work difficult?

Over several interviews we began to get a clearer picture of what was happening. We then began to interview Directors (we did it this way because of availability; it would have been easier to start from the top). We wanted to know if they were aware of any performance issues. We took information from those interviews and compared it with what we learned from the volunteers (data triangulation).
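As a simple illustration of what that comparison can look like in practice, here is a minimal sketch (the theme labels and coded notes are hypothetical, not our actual findings) that tallies which source groups raised which themes, so issues corroborated by more than one group stand out:

```python
from collections import defaultdict

# Hypothetical coded interview notes: (source group, theme) pairs.
# These themes are placeholders, not our actual findings.
coded_notes = [
    ("volunteer", "unclear expectations"),
    ("volunteer", "tool difficulties"),
    ("volunteer", "unclear expectations"),
    ("director", "unclear expectations"),
    ("director", "turnover"),
]

# Group the sources that mentioned each theme.
sources_by_theme = defaultdict(set)
for source, theme in coded_notes:
    sources_by_theme[theme].add(source)

# Themes mentioned by more than one group are corroborated (triangulated).
for theme, sources in sources_by_theme.items():
    status = "corroborated" if len(sources) > 1 else "single source"
    print(f"{theme}: {sorted(sources)} -> {status}")
```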

We then used Gilbert's Behavioral Engineering Model (BEM) as a frame, and used our survey to test our hypotheses.


It's easy for managers to blame performance problems on their workers, claim the workers don't know what they are doing... and then request training to solve the problem. Thankfully, KeelWorks is not taking that approach, recognizing that performance problems are likely more systemic in nature. Based on our hypotheses, this problem may require an ongoing performance improvement solution rather than a one-time performance improvement event.

Our last step was to draft a survey. Our target audience was current and former Instructional Designers and Project Managers. Because the survey was a one-shot deal, we asked questions related to every hypothesis we had developed. We also asked questions to determine if we had missed anything - especially since we were tapping into former volunteers as well.

Essentially we wanted to know:

Are the performance criteria, guidelines, feedback, and processes clear?

Are the tools easy to use?

Are the benefits of performance satisfactory?

Are there any benefits or consequences for non-performance?

Do volunteers have the knowledge and skills necessary to perform this job?

What's going well and what needs to be improved?

We used five-point Likert scales for the questions, as well as an open-ended comment box for each area of the BEM.
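As a rough illustration of how those ratings can be rolled up, here is a minimal sketch (the BEM area labels are paraphrased and the responses are made up, not our survey data) that averages the 1-5 ratings per area so the weakest areas surface first:

```python
from statistics import mean

# Hypothetical 1-5 Likert responses grouped by BEM area (illustrative only).
responses = {
    "Information (expectations & feedback)": [4, 3, 5, 2, 4],
    "Resources (tools & processes)":         [2, 3, 2, 3, 2],
    "Incentives (consequences & rewards)":   [3, 3, 4, 3, 3],
    "Knowledge & skills":                    [4, 4, 5, 4, 3],
}

# Average rating per BEM area; lower averages flag areas worth a closer look.
for area, scores in sorted(responses.items(), key=lambda kv: mean(kv[1])):
    print(f"{area}: {mean(scores):.1f} / 5")
```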

Final Thoughts

Our survey ran for two weeks. We marketed it by sending reminder emails on days 3, 5, 8, and 11, as well as posting messages on the organization's LinkedIn page. Our target population totaled 100: approximately 30 current and 70 former volunteers. We received 34 responses, which was in line with our expectations considering a majority of the population was no longer with the organization. When the population is 100 or fewer, you need a high number of responses to reach the usual 95% confidence level with a margin of error of plus or minus 5%, which means our information has a higher margin of error. Ideally we would extend the survey and try to push our numbers up. However, our time is up and we can't continue to send reminder emails to people who are no longer with the organization. We have to move on and explain our data to stakeholders so they can make an informed decision.
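For anyone curious about the arithmetic, here is a quick back-of-the-envelope sketch (assuming the standard margin-of-error formula for a proportion with a finite population correction and a worst-case p = 0.5) showing why 34 responses out of 100 falls well short of plus or minus 5%:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Margin of error for a proportion, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)            # standard error, worst case p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))         # finite population correction
    return z * se * fpc

def required_sample(N, e=0.05, p=0.5, z=1.96):
    """Responses needed to reach margin of error e for a population of size N."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)     # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # adjust for the small population

print(f"Margin of error with 34 of 100: ±{margin_of_error(34, 100):.1%}")  # ≈ ±13.7%
print(f"Responses needed for ±5%: {required_sample(100)}")                  # ≈ 80
```

Under those assumptions, 34 of 100 works out to roughly plus or minus 14%, and hitting plus or minus 5% would have required about 80 responses.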

The results of our survey validated nearly every hypothesis, meaning most current and former volunteers had experiences similar to those we interviewed. Because we triangulated this data across current and former Project Managers, Instructional Designers, Directors, and the Chief Executive Officer - and because, to our benefit, a large number of current volunteers responded (n=25) - we feel confident moving forward with our project. Our next step will be to make recommendations to improve performance.

Saturday, January 5, 2013

Collecting Data for Analysis

Introduction

Collecting data for analysis is extremely important to our efforts as performance improvement professionals. Our goal is to understand the problem as it exists, and to understand where we want to go so that we can complete a Gap Analysis. This requires facts... not opinions, facts. Usually when we begin a project we are confronted with opinions about what the problem is, and what the solution should be. We can choose to use these opinions as a hypothesis - at most.

Once my Alignment meeting is complete and the stakeholders and I agree on the problem, my next step is to collect data for the analysis.

Data Collection Methods

To conduct data collection I typically need access to the following items/people:

1) Extant data (Organizational/Industry Processes, Procedures, Policies, and Guidelines, Org Charts, Personnel data, etc)
2) Subject Matter Expert (SME) (Someone who is considered an expert in the field)
3) Accomplished Performer (AP) (Someone who excels and is performing the task right now)
4) Stakeholders (People who can make decisions regarding the topic)

Of course, there are many ways to collect data. Below are five methods I use depending on the situation I'm in.

METHOD 1: IF I can get SMEs, APs, and Stakeholders in the same room at the same time for a length of time, I conduct a Focus Group. My data collection would go like this:

1) Collect Extant Data
2) Conduct Focus Group

This is the quickest way to collect data and reach conclusions, but I've found it to also be the most difficult to set up and run. To begin, I ALWAYS collect Extant Data, ALWAYS. I want to walk into that Focus Group ready to speak intelligently about the subject. I need to understand the situation and the politics, as well as the processes and procedures. Focus Groups are difficult because it's tough to get all of these key people in the same room at the same time for an extended period. If I manage to get a focus group together, the next difficult part is facilitating the group.

METHOD 2: IF I can't get those individuals together for a focus group, my data collection would go like this:

1) Collect Extant Data
2) Interview SME
3) Interview AP
4) Conduct Observation
5) Verify with SME/AP

As I said, if I can't get a focus group, the next best thing is to personally interview the SMEs and APs (after collecting Extant Data, of course). Once those interviews are complete, it's helpful if I can follow up with an observation of the performance. I find that SMEs and APs are usually not able to fully articulate a process. Once I've documented the process through interviews, I use observation to fill in the details. Once my observation is complete, I meet again with the SMEs and APs to discuss what I learned and/or deconflict anything that differed from what they told me.

METHOD 3: IF there are no Accomplished Performers and observation is not possible, my data collection would go like this:

1) Collect Extant Data
2) Interview SME
3) Survey employee group
4) Verify with SME

As with Method 2, I begin by collecting Extant Data and interviewing SMEs. As I mentioned before, I've found that most SMEs are not able to fully articulate the process for one reason or another. Because SMEs are not performing the task right now, I want to verify what they know against what is currently happening. I take the information learned from the SME interview(s) and create a survey, which I launch to as many employees in the target audience as possible. Information from the SMEs will either support or contradict the original hypothesis, and the employee survey is used to test it. Once again, after the survey is complete I meet with the SMEs to discuss what I learned and/or deconflict anything that differed from what they told me.

METHOD 4: Sometimes I am not able to interview SMEs OR APs or conduct observation. In that case, my data collection would look like this:

1) Collect Extant Data
2) Survey employee group

If I'm not able to conduct interviews or observation, I will collect Extant Data and use that to create an employee survey. The survey is used to validate the hypotheses I created from analyzing the Extant Data.

METHOD 5: Sometimes I am not able to survey employees OR conduct an observation. In that case, my data collection would look like this:

1) Collect Extant Data

This is a worst-case scenario. If I only have access to Extant Data, I will make assumptions from that data - and make recommendations based on those assumptions. When this happens, we usually end up piloting the intervention and doing the data collection there. Intervention development becomes an iterative process, which can be time consuming.

Final Thoughts

Coming up with a data collection plan is important to do early on in any project. In my experience, as soon as you've agreed on the problem with the stakeholder, you should immediately begin thinking about how to collect data - and start reserving the needed resources.

You may have hypotheses from the alignment meeting. If you do not already have a hypothesis and  you're doing a focus group, it may be best to guide the key players to an understanding of the problem - this will create a higher level of buy-in on the solution. If you're not doing a focus group, your hypotheses will be developed between the Extant Data collection and your interviews, and tested during observation and surveys.

A common question is "how long do I keep collecting data?" My Needs Assessment professor from graduate school, Dr. Don Winiecki, always told us to "keep collecting data until you stop hearing new things." In other words, keep interviewing your AP(s) until you've heard all of the variations at least twice (time permitting, of course). Once your interviews stop uncovering new details, you are ready to analyze the data and form your hypotheses. Ideally, the survey should be used to test or validate those hypotheses. If you're using a survey to uncover new things, it's because either your interviews were incomplete or you were not allowed to do interviews in the first place. Surveys are typically a one-shot deal, so the questions need to be specific and targeted, and ideally quantitative so it's easier to draw conclusions from the results.

Tuesday, January 1, 2013

Comparing Analysis Models

A recent LinkedIn discussion got me thinking about the differences between Front-End Analysis, Needs Assessment, and Performance Analysis. It seems like those terms are thrown around interchangeably in our field, while some insist there are differences. I decided to do some investigation.

For my investigation I've read through various articles, my collection of books on Needs Assessment, my old ABCD workbooks, and various websites. I've come to the conclusion that all three models have three tools in common: Gap Analysis, Root Cause Analysis, and Intervention Selection. This is supported by ISPI's definition:

"Front-end Analysis (FEA), Needs Assessment, Performance Analysis - in most contexts, these mean the same thing. Their goal is to identify “performance gaps” which can be “closed” with “interventions.” To find these gaps, these analysis processes identify the current and the desired performance state, or what exists and what should exist, or actuals and optimals. The optimal set of conditions is best found by identifying Accomplished Performers (or the “Exemplar”) and observing their performance" (Unknown, ISPI).

There's also this:

"Gap analysis, needs analysis, and performance analysis are occasionally
used as synonyms for needs assessment, yet they are more frequently (and
more accurately) defined as needs assessment tools
" (Watkins, 2012, p. 16).


And this:

"Performance analysis (PA) is partnering with clients and customers to help them define and achieve their goals. PA involves reaching out for several perspectives on a problem or opportunity; determining any and all drivers toward or barriers to successful performance; and proposing a solution system based on what is learned, not on what is typically done" (Rossett, 2009, p. 20).


Allison Rossett speaks of Needs Assessments as "Training Needs Assessments" (TNA). She defines TNA as what is done AFTER a Performance Analysis to "design and develop instructional and informational programs and materials" (Rossett, 2009, p. 31).

And this:

Needs Assessment is "a diagnostic process that relies on data collection, collaboration, and negotiation to identify and understand gaps in learning and performance and to determine future actions" (Gupta, 2007, p. 310).

And this:


"Assessments are used to identify strategic priorities, define results to be accomplished, guide decisions related to appropriate actions to be taken, establish evaluation criteria for making judgments of success, and inform the continual improvement of activities within organizations" (Watkins, needsassessment.org).

Comparisons

I believe Ryan Watkins and Roger Kaufman would say that Needs Assessments are used to identify gaps and prioritize interventions.

I do not have a definition of Front-end Analysis from Joe Harless - and admittedly do not have his book on the subject. What I do have is his ABCD Method, in which he offers two types of Front-End Analysis: New Performance Planning (NPP) and Diagnostic. An NPP FEA is used to determine what is needed for optimal performance of any new intervention within the organization. A Diagnostic FEA is used when there is an organizational goal that is not being met. A Gap Analysis is completed using Accomplished Performers, which leads to a Root Cause Analysis and Intervention Selection.

All this being said, there are several analysis tools we use as Performance Improvement professionals that have not been mentioned, but could certainly be used when determining interventions. In Ryan Watkins' book "A Guide to Assessing Needs" (2012), he groups tools into two categories: Data Collection and Decision Making.
Here's how I believe the following methods would handle an organizational problem:

Diagnostic FEA:
1. Clarify Organizational Goal not meeting standards
2. Conduct Extant Data research
3. Interview Accomplished Performer (Job Analysis)
4. Observe Accomplished Performer (Task Analysis)
5. Conduct Gap Analysis
6. Conduct Root Cause Analysis
7. Identify Interventions
(When you are done, if the intervention is Training, you already have a completed Job and Task Analysis!)


Performance Analysis:
1. Clarify problem
2. Collect Organization and Environmental Information (interviews, observation, extant data)
3. Conduct Gap Analysis
4. Conduct Root Cause Analysis
5. Identify Interventions

Needs Assessment:
1. Clarify problem
2. Collect information on the problem
3. Conduct Gap Analysis
4. Conduct Root Cause Analysis
5. Identify Interventions
6. Prioritize Interventions

(Very flexible in the use of tools, processes, and procedures)

Enterprise Process Performance Improvement:
1. Clarify the problem
2. Create Performance Model (Job/Task Analysis, Gap Analysis, Root Cause Analysis) through a focus group (Accomplished Performers + stakeholders)
3. Determine if the Process is the problem
4. If not, or if necessary, conduct a Human and Environmental Asset Assessment
5. Identify Interventions

(This process is quick because most of the analysis happens in one sitting. The use of Focus Groups is key. Like a Diagnostic FEA, if training is the intervention, you already have a Job and Task Analysis - plus you have a Knowledge and Skills Analysis)

Final Thoughts

When can we say an industry is mature, and how does having an agreed-upon set of terminology play into that? It seems like we have an agreed-upon set of tools, but we can't agree on what to call them; I recall my early exposure to these processes being confusing for that reason. If we did an analysis of an organization and found it had four or more names for the same process, what would we say about the effect that has on employee performance? Until we can agree as a Performance Improvement industry, the models we choose will come down to personal preference. In my experience, a Diagnostic FEA and EPPI are better approaches if you are working within a training function because they both produce analysis you will need for training development. If you are working in the field of Performance Improvement (in the broad sense), Performance Analysis and Needs Assessment leave open the possibility of using multiple tools and processes depending on the situation (not that you couldn't use multiple tools with FEA or EPPI).

I would be interested to hear any additional thoughts and opinions on the subject.

References:

Gupta, Kavita. (2007). A practical guide to needs assessment (2nd ed.). San Francisco, CA: Pfeiffer.

Rossett, Allison. (2009). First things fast: A handbook for performance analysis (2nd ed.). San Francisco, CA: Pfeiffer.

Unknown. (Unknown). Human performance technology (HPT) primer. Retrieved from http://www.afc-ispi.org/Repository/hptprimer.html.

Watkins, Ryan. (2012). A guide to assessing needs: Essential tools for collecting information, making decisions, and achieving developmental results. Washington, DC: The World Bank.

Watkins, Ryan. (2012). Your complete resource site on needs and needs assessments. Retrieved from http://www.needsassessment.org.