Saturday, March 19, 2016

New Drum Lessons

In a former life I was a musician and studied Music Education at the University of Missouri. Around 2002 I burned out, left the program, and switched to business. Although I haven't played much in the past decade, I still remember enough to be dangerous. Now that my son is coming of age, I've decided to teach him to drum.

A little background:
My son seems to have some natural musical ability; even at a very young age he picked up on tones and rhythms. So about a year and a half ago my wife and I put him in piano lessons. We pushed him to practice hard for the first year, but for the past six months we've left it more up to him to practice and be prepared for his lesson every week. Now he wants to learn to drum in addition to piano.


3/19/16 Piano Homework
My approach:
I began a few weeks ago by setting him up with a Real Feel drum pad and a basic snare book, "A Fresh Approach to the Snare Drum" by Mark Wessels. I've established three rules: 1) I will only teach him as long as he can control his attitude, 2) I'm not going to tell him to practice (like piano), 3) if he's not ready for a lesson, I'm not going to move him on to the next one. After about three weeks we finally made it past the first lesson. I've spent quite a bit of time with him on grip, using his wrist and fingers, and his stroke in general.
Real Feel pad and Lesson book (lesson 1)
I've considered that this approach will allow him to develop individual technique, but it will leave gaps in his overall ability because he is only playing solo. To give him an opportunity to get some ensemble experience, I am working to put together a non-profit youth percussion ensemble similar to the Louisville Leopards. In the meantime I've decided to try a different approach by incorporating the PlayStation 4 game Rock Band.
Rock Band 4 Drum Pad
Why I think this will work:
I am betting on adult learning theory with this one. Although the approach is unorthodox, it is supported by Edward Thorndike's Theory of Identical Elements, which holds that learning transfers best when practice is as close to the actual performance as possible. Second, Social Learning Theory states that transfer occurs better when the desired performance is modeled (me playing too). Third, immediate and specific feedback: if he's out of time, the system tells him immediately. Fourth, gamification: he can start on easy mode and work his way up to hard, and he can learn the songs he wants to learn. Oh, and I can play the guitar too, so it'll be fun at the same time.

Sunday, November 10, 2013

New Ventures into Pedagogy

     Most people who have met me within the past ten years do not know that I once studied Music Education, with a focus on percussion. I first started playing back in fifth grade. The first time I picked up a set of drum sticks I was hooked, and I annoyed my family all the way through high school. Rudimental percussion was my thing, but over four years of college I was exposed to a wide variety of percussion instruments. After all that time and energy I burned out. I boxed up my drum pad and left the world of music far behind.

     Fast forward twelve years: I am now an Instructional Designer at Boeing, and my music career, or lack thereof, is all but a distant memory. That was until I started noticing what I will refer to as a "natural rhythm" developing in my son. That's right, without any prompting on my part I began to notice him drumming to songs - and he stays on beat. Over the last few months I've begun to regain that passion for playing that I once felt. I pulled out my drum pad and started playing, and it still felt natural, like I had never quit.

     I began wondering, "What could I do with my combined percussion and instructional design experience?" Could I use my knowledge to develop lesson plans to inspire my son and teach him the world of percussion? I thought hard about it, but I worried that my son would see it as a chore and not an exciting activity. And then a funny thing happened: I ended up coaching my son's soccer team. I had never played soccer before, but that didn't really matter. The goal of U6 soccer is really to inspire kids to enjoy the sport. Hmm, I wondered, is there anything like this in the area for percussion? There's not. In fact, there's not much like it anywhere in the United States.

     Over the past few weeks I've begun forming a vision: create a kids' percussion ensemble that would generate a love for playing percussion, teach them to play in groups, and give them the chance to perform in front of crowds. There's a lot of work to do, and a lot of questions. For example, where would we practice? How will I recruit enough kids? How will I get instruments?

So many things to do before I even design my first lesson.

Monday, January 14, 2013

Adventures in HPT: Project 1: Phase 1: Part 2 Data Collection

Adventures in HPT

In a previous post I introduced a current Front-End Analysis project I'm working on for The KeelWorks Foundation. Once our project alignment was complete, it was time to begin looking into the problem. Recall that our problem statement was that volunteers were not producing elearning courses, a critical success factor for the organization.

Our next task was to get a better understanding of the problem. To do this we created a Data Collection plan:

1) Interview current volunteers (As many as we need to until we've heard it all)
2) Interview Directors (To get an idea of potential issues from their perspective)
3) Survey current and former volunteers (To validate our hypotheses)

Constructing Hypotheses

We created a set of open-ended questions for our interviews with the intent that we would ask probing questions when necessary. We started with background-type questions to open communication. We asked questions like:

"Tell me about your background with Instructional Design."

"What volunteer work have you done in the past?"

The first question was also asked to get a better idea of their knowledge and skills. The second question was asked to get an idea of their expectations of a volunteer organization. Was KeelWorks asking them to do too much? Too little? What were their other volunteer experiences like?

Next, we started asking about KeelWorks:

"What attracted you to KeelWorks?

"What were your expectations when you started volunteering with KeelWorks?"

We wanted to know why they were here and what they wanted to get out of the experience. Then we began asking about their idea of what success and failure would look like - what would they consider a successful experience?

We asked them about their experience with virtual work. We wanted to determine if working completely virtually was acting as a barrier to performance.

Finally, we asked what they were actually experiencing. We wanted to know what day-to-day life was like for them. Were their experiences meeting their idea of success? Were there any issues/problems? Was anything getting in their way, or making work difficult?

Over several interviews we began to get a clearer picture of what was happening. We then began to interview Directors. (We did it in this order because of availability; it would have been easier to start from the top.) We wanted to know if they were aware of any performance issues. We took information from those interviews and compared it with what we learned from the volunteers (data triangulation).

We then used Gilbert's Behavioral Engineering Model (BEM) as a frame, and used our survey to test our hypotheses.


It's easy for managers to blame performance problems on their workers, claim the workers don't know what they are doing... and then request training to solve the problem. Thankfully, KeelWorks is not taking that approach - they realize performance problems are likely more systemic in nature. As you can see from our hypotheses, this problem may require a performance improvement solution rather than a performance improvement event.

Our last step was to draft a survey. Our target audience was current and former Instructional Designers and Project Managers. Because this was a one-shot effort, we asked questions related to every hypothesis. We also asked questions to determine whether we had missed anything - especially since we were tapping into former volunteers as well.

Essentially we wanted to know:

Are the performance criteria, guidelines, feedback, and processes clear?

Are the tools easy to use?

Are the benefits of performance satisfactory?

Are there any benefits or consequences for non-performance?

Do volunteers have the knowledge and skills necessary to perform this job?

What's going well and what needs to be improved?

We used five-point Likert scales for the questions, as well as an open-ended comment box for each area of the BEM.
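As an aside, here is a minimal sketch (my own illustration, not the actual KeelWorks instrument) of how a survey like this could be laid out in code: one group of five-point Likert items plus a comment box per BEM area. The area names follow Gilbert's model, but the item wording is hypothetical and only a subset of the six cells is shown.

# A minimal sketch, not the actual KeelWorks survey: five-point Likert items
# plus an open comment box for each area of Gilbert's BEM. Item wording is
# hypothetical.

LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

survey = {
    "Information": [
        "The performance criteria and guidelines for my role are clear.",
        "I receive regular, useful feedback on my work.",
    ],
    "Resources": [
        "The tools I am given are easy to use.",
    ],
    "Incentives": [
        "The benefits of performing well are satisfactory.",
        "There are consequences for non-performance.",
    ],
    "Knowledge and Skills": [
        "I have the knowledge and skills necessary to do this job.",
    ],
}

# Render a plain-text version of the instrument.
for area, items in survey.items():
    print(f"== {area} ==")
    for item in items:
        print(f"  {item}  [{' / '.join(LIKERT_5)}]")
    print("  Comments on this area: ______________________\n")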

Final Thoughts

Our survey ran for two weeks. We marketed it by sending reminder emails on days 3, 5, 8, and 11, as well as posting messages on the organization's LinkedIn page. Our target population totaled 100, with approximately 30 current and 70 former volunteers. We received 34 responses, which was in line with our expectations considering the majority were no longer with the organization. When you have a population of 100 or fewer, you need a high number of responses to reach the usual 95% confidence level with a margin of error of plus or minus 5%. What this means is that our data carries a higher margin of error. Ideally we would extend the survey and try to push our numbers up. However, our time is up, and we can't keep sending reminder emails to people who are no longer with the organization. We have to move on and explain our data to stakeholders so they can make an informed decision.
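To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch (my own addition, using the standard finite-population sample size and margin-of-error formulas, not anything from the survey tool we used). It shows how many responses a population of 100 would need for plus or minus 5% at 95% confidence, and the approximate margin of error our 34 responses actually give us.

import math

Z_95 = 1.96   # z-score for a 95% confidence level
P = 0.5       # most conservative assumed proportion

def required_sample(population, margin=0.05, z=Z_95, p=P):
    """Responses needed for a given margin of error, with finite-population correction."""
    n0 = (z**2 * p * (1 - p)) / margin**2              # infinite-population estimate (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def margin_of_error(population, responses, z=Z_95, p=P):
    """Approximate margin of error for the responses actually received."""
    se = math.sqrt(p * (1 - p) / responses)
    fpc = math.sqrt((population - responses) / (population - 1))
    return z * se * fpc

print(required_sample(100))                 # ~80 responses needed for +/- 5%
print(round(margin_of_error(100, 34), 3))   # ~0.137, i.e. roughly +/- 14%

In other words, even a population this small would have needed around 80 responses to hit the usual target, so our 34 leave us closer to a 14-point margin of error.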

The results of our survey validated nearly every hypothesis, meaning most current and former volunteers had experiences similar to those we interviewed. Because we triangulated this data across current and former Project Managers, Instructional Designers, Directors, and the Chief Executive Officer - and because, to our benefit, a large number of current volunteers responded (n=25) - we feel confident in moving forward with our project. Our next step will be to make recommendations to improve performance.

Saturday, January 5, 2013

Collecting Data for Analysis

Introduction

Collecting data for analysis is extremely important to our efforts as performance improvement professionals. Our goal is to understand the problem as it exists, and to understand where we want to go so that we can complete a Gap Analysis. This requires facts... not opinions, facts. Usually when we begin a project we are confronted with opinions about what the problem is, and what the solution should be. We can choose to use these opinions as a hypothesis - at most.

Once my Alignment meeting is complete and the stakeholders and I agree on the problem, my next step is to collect data for the analysis.

Data Collection Methods

To conduct data collection I typically need access to the following items/people:

1) Extant data (Organizational/Industry Processes, Procedures, Policies, and Guidelines, Org Charts, Personnel data, etc.)
2) Subject Matter Expert (SME) (Someone who is considered an expert in the field)
3) Accomplished Performer (AP) (Someone who excels and is performing the task right now)
4) Stakeholders (People who can make decisions regarding the topic)

Of course, there are many ways to collect data. Below are five methods I use depending on the situation I'm in.

METHOD 1: IF I can get SMEs, APs, and Stakeholders in the same room at the same time for a length of time, I conduct a Focus Group. My data collection would go like this:

1) Collect Extant Data
2) Conduct Focus Group

This is the quickest way to collect data and reach conclusions. I've also found it to be the most difficult to set up and run. To begin, I ALWAYS collect Extant Data, ALWAYS. I want to walk into that Focus Group ready to speak intelligently about the subject. I need to understand the situation and the politics, as well as the processes and procedures. Focus Groups are difficult because it's tough to get all of these key people in the same room at the same time for a sufficient period of time. If I do manage to get a focus group together, the next difficult part is facilitating it.

METHOD 2: IF I can't get those individuals together for a focus group, my data collection would go like this:

1) Collect Extant Data
2) Interview SME
3) Interview AP
4) Conduct Observation
5) Verify with SME/AP

As I said, if I can't get a focus group, the next best thing is to personally interview the SMEs and APs (after collecting Extant Data, of course). Once those interviews are complete, it's helpful if I can follow up with an observation of the performance. I find that SMEs and APs are usually not able to fully articulate a process. Once I've documented the process through interviews, I use observation to fill in the details. Once my observation is complete, I meet again with the SMEs and APs to discuss what I learned and/or deconflict anything that differed from what they told me.

METHOD 3: IF there are no Accomplished Performers and observation is not possible, my data collection would go like this:

1) Collect Extant Data
2) Interview SME
3) Survey employee group
4) Verify with SME

As with Method 2, I begin by collecting Extant Data and interviewing SMEs. As I mentioned before, I've found that most SMEs are not able to fully articulate the process for one reason or another. Because SMEs are not performing the task right now, I want to verify what they know against what is currently happening. I take the information learned from the SME interview(s) and create a survey, which I launch to as many employees in the target audience as possible. Information from the SMEs will either support or detract from the original hypothesis, and the employee survey is used to test that hypothesis. Once again, after the survey is complete I meet with the SMEs to discuss what I learned and/or deconflict anything that differed from what they told me.

METHOD 4: Sometimes I am not able to interview SMEs OR APs or conduct observation. In that case, my data collection would look like this:

1) Collect Extant Data
2) Survey employee group

If I'm not able to conduct interviews or observation, I will collect Extant Data and use it to create an employee survey. The survey is used to validate the hypotheses I created from analyzing the Extant Data.

METHOD 5: Sometimes I am not able to survey employees OR conduct an observation. In that case, my data collection would look like this:

1) Collect Extant Data

This is a worst-case scenario. If I only have access to Extant Data, I will make assumptions from that data - and make recommendations based on those assumptions. When this happens we usually end up piloting the intervention and doing the data collection there. Intervention development becomes an iterative process, which can be time consuming.
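Since the choice between these five methods comes down to what access I can get, here is a small sketch that encodes the selection logic as a function returning the data collection steps. This is my own summary of the methods above, not a formal procedure from any published model; the flags simply describe what a given project makes available.

# A sketch of the method-selection logic described above. The flags describe
# what access is available on a given project; this is my own summary, not a
# formal rule set.

def data_collection_plan(focus_group_possible: bool,
                         sme_available: bool,
                         ap_available: bool,
                         observation_possible: bool,
                         survey_possible: bool) -> list:
    steps = ["Collect Extant Data"]  # always, no exceptions
    if focus_group_possible:                                        # Method 1
        steps += ["Conduct Focus Group"]
    elif sme_available and ap_available and observation_possible:   # Method 2
        steps += ["Interview SME", "Interview AP",
                  "Conduct Observation", "Verify with SME/AP"]
    elif sme_available and survey_possible:                         # Method 3
        steps += ["Interview SME", "Survey employee group", "Verify with SME"]
    elif survey_possible:                                           # Method 4
        steps += ["Survey employee group"]
    # Method 5: Extant Data only - make assumptions and pilot the intervention
    return steps

print(data_collection_plan(False, True, False, False, True))
# ['Collect Extant Data', 'Interview SME', 'Survey employee group', 'Verify with SME']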

Final Thoughts

Coming up with a data collection plan is important to do early on in any project. In my experience, as soon as you've agreed on the problem with the stakeholder, you should immediately begin thinking about how to collect data - and start reserving the needed resources.

You may have hypotheses from the alignment meeting. If you do not already have a hypothesis and  you're doing a focus group, it may be best to guide the key players to an understanding of the problem - this will create a higher level of buy-in on the solution. If you're not doing a focus group, your hypotheses will be developed between the Extant Data collection and your interviews, and tested during observation and surveys.

A common question is, "How long do I keep collecting data?" My Needs Assessment professor in graduate school, Dr. Don Winiecki, always told us to "keep collecting data until you stop hearing new things." In other words, keep interviewing your AP(s) until you've heard all of the variations at least twice (time permitting, of course). Once your interviews stop uncovering new details, you are ready to analyze the data and form your hypotheses. Ideally, the survey should be used to test or validate those hypotheses. If you're using a survey to uncover new things, it's because either your interviews were incomplete or you were not allowed to do interviews in the first place. Surveys are typically a one-shot deal, so the questions need to be specific and targeted - ideally quantitative, so it's easier to draw conclusions from the results.

Tuesday, January 1, 2013

Comparing Analysis Models

A recent LinkedIn discussion got me thinking about the differences between Front-End Analysis, Needs Assessment, and Performance Analysis. It seems like those terms are thrown around interchangeably in our field, while some insist there are differences. I decided to do some investigation.

For my investigation I've read through various articles, my collection of books on Needs Assessment, my old ABCD workbooks, and various websites. I've come to the conclusion that all three models have three tools in common: Gap Analysis, Root Cause Analysis, and Intervention Selection. This is supported by ISPI's definition:

"Front-end Analysis (FEA), Needs Assessment, Performance Analysis - in most contexts, these mean the same thing. Their goal is to identify “performance gaps” which can be “closed” with “interventions.” To find these gaps, these analysis processes identify the current and the desired performance state, or what exists and what should exist, or actuals and optimals. The optimal set of conditions is best found by identifying Accomplished Performers (or the “Exemplar”) and observing their performance" (Unknown, ISPI).

There's also this:

"Gap analysis, needs analysis, and performance analysis are occasionally
used as synonyms for needs assessment, yet they are more frequently (and
more accurately) defined as needs assessment tools
" (Watkins, 2012, p. 16).


And this:

"Performance analysis (PA) is partnering with clients and customers to help them define and achieve their goals. PA involves reaching out for several perspectives on a problem or opportunity; determining any and all drivers toward or barriers to successful performance; and proposing a solution system based on what is learned, not on what is typically done" (Rossett, 2009, p. 20).


Allison Rossett speaks of Needs Assessments as "Training Needs Assessments" (TNA). She defines TNA as what is done AFTER a Performance Analysis to "design and develop instructional and informational programs and materials" (Rossett, 2009, p. 31).

And this:

Needs Assessment is "a diagnostic process that relies on data collection, collaboration, and negotiation to identify and understand gaps in learning and performance and to determine future actions" (Gupta, 2007, p. 310).

And this:


"Assessments are used to identify strategic priorities, define results to be accomplished, guide decisions related to appropriate actions to be taken, establish evaluation criteria for making judgments of success, and inform the continual improvement of activities within organizations" (Watkins, needsassessment.org).

Comparisons

I believe Ryan Watkins and Roger Kaufman would say that Needs Assessments are used to identify gaps and prioritize interventions.

I do not have a definition of Front-end Analysis from Joe Harless - and admittedly do not have his book on the subject. What I do have is his ABCD Method, in which he offers two types of Front-End Analysis: New Performance Planning (NPP) and Diagnostic. An NPP FEA is used to determine what is needed for optimal performance for any new intervention within the organization. A Diagnostic FEA is used when an organizational goal is not being met. A Gap Analysis is completed using Accomplished Performers, which leads to a Root Cause Analysis and Intervention Selection.

All that being said, there are several tools used in analysis by Performance Improvement professionals that have not been mentioned here but could certainly be used when determining interventions. In Ryan Watkins' book "A Guide to Assessing Needs" (2012), he sorts tools into two categories: Data Collection and Decision Making.
Here's how I believe the following methods would handle an organizational problem:

Diagnostic FEA:
1. Clarify Organizational Goal not meeting standards
2. Conduct Extant Data research
3. Interview Accomplished Performer (Job Analysis)
4. Observe Accomplished Performer (Task Analysis)
5. Conduct Gap Analysis
6. Conduct Root Cause Analysis
7. Identify Interventions
(When you are done, if the intervention is Training, you already have a completed Job and Task Analysis!)


Performance Analysis
1. Clarify problem
2. Collect Organization and Environmental Information (interviews, observation, extant data)
3. Conduct Gap Analysis
4. Conduct Root Cause Analysis
5. Identify Interventions

Needs Assessment
1. Clarify problem
2. Collect information on the problem
3. Conduct Gap Analysis
4. Conduct Root Cause Analysis
5. Identify Interventions
6. Prioritize Interventions

(Very flexible in the use of tools, processes, and procedures)

Enterprise Process Performance Improvement
1. Clarify the problem
2. Create Performance Model (Job/Task Analysis, Gap Analysis, Root Cause Analysis) through a focus group (Accomplished Performers + stakeholders)
3. Determine if the Process is the problem
4. If not, or if necessary, Conduct Human and Environmental Asset Assessment
5. Identify Interventions

(This process is quick because most of the analysis happens in one sitting. The use of Focus Groups is key. Like a Diagnostic FEA, if training is the intervention, you already have a Job and Task Analysis - plus you have a Knowledge and Skills Analysis)

Final Thoughts

When can we say an industry is mature, and how does having an agreed-upon set of terminology play into that? It seems like we have an agreed-upon set of tools, but we can't agree on what to call them; I recall my early exposure to these processes being confusing for exactly that reason. If we did an analysis of an organization and found it had four or more names for the same process, what would we say about the effect that has on employee performance? Until we can agree as a Performance Improvement industry, the model we choose will come down to personal preference. In my experience, a Diagnostic FEA and EPPI are better approaches if you are working within a training function because they both produce analysis you will need for training development. If you are working in the field of Performance Improvement (in the broad sense), Performance Analysis and Needs Assessment leave open the possibility of using multiple tools and processes depending on the situation (not that you couldn't use multiple tools with FEA or EPPI).

I would be interested to hear any additional thoughts and opinions on the subject.

References:

Gupta, Kavita. (2007). A practical guide to needs assessment (2nd ed.). San Francisco, CA: Pfeiffer.

Rossett, Allison. (2009). First things fast: A handbook for performance analysis (2nd ed.). San Francisco, CA: Pfeiffer.

Unknown. (n.d.). Human performance technology (HPT) primer. Retrieved from http://www.afc-ispi.org/Repository/hptprimer.html.

Watkins, Ryan. (2012). A guide to assessing needs: essential tools for collecting information, making decisions, and achieving developmental results. Washington DC: The World Bank.

Watkins, Ryan. (2012). Your complete resource site on needs and needs assessments. Retrieved from http://www.needsassessment.org.

Wednesday, December 26, 2012

Three Process Mapping Tools

Process Mapping is a method of recording the elements of a process in order to gain a better understanding of the steps, people, and variance involved. In this blog I will look at three types of process maps:

1) Swim Lane flowchart
2) Conversion mapping
3) PACT Performance Modeling

Swim Lane Flowchart

A Swim Lane Flowchart breaks a process down into sub-processes and sorts each sub-process into categories. Each sub-process is placed in a horizontal or vertical lane according to the group or person responsible for its performance. To create a Swim Lane Flowchart, first identify the people or groups responsible for the process and set up horizontal or vertical lanes. Then identify the outcome of the process... what is being produced. Identify the stimulus that begins the process, and fill in the process from the stimulus to the outcome. See the example below:

Swim Lane Flowchart
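For anyone who prefers to keep these maps in plain text or version control, here is a minimal sketch (my own addition, not part of the original example) that stores a swim lane flowchart as data and emits Graphviz DOT text. The lanes and steps are hypothetical course-production examples.

# A minimal sketch: a swim lane flowchart as plain data, emitted as Graphviz
# DOT text (one cluster per lane, edges along the flow). Lanes and steps are
# hypothetical examples.

lanes = {  # responsible role -> ordered sub-steps (dicts preserve insertion order)
    "Instructional Designer": ["Draft storyboard", "Revise storyboard"],
    "Project Manager": ["Review draft", "Schedule SME review"],
    "Director": ["Approve course design"],
}

flow = [  # the overall path from stimulus to outcome, crossing lanes
    "Draft storyboard", "Review draft", "Revise storyboard",
    "Schedule SME review", "Approve course design",
]

def to_dot(lanes, flow):
    """Build DOT text: one cluster per lane, arrows along the flow."""
    lines = ["digraph swimlane {", "  rankdir=LR;"]
    for i, (role, steps) in enumerate(lanes.items()):
        lines.append(f'  subgraph cluster_{i} {{ label="{role}";')
        for step in steps:
            lines.append(f'    "{step}";')
        lines.append("  }")
    for a, b in zip(flow, flow[1:]):
        lines.append(f'  "{a}" -> "{b}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(lanes, flow))  # paste the output into any Graphviz renderer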


Conversion Mapping

Another way to map a process is with Conversion Mapping. Conversion Mapping is less detailed than a Swim Lane Flowchart, but it gives you the opportunity to identify variance at each sub-step. The process is similar to creating a Swim Lane Flowchart except there is no space to identify who is responsible for each sub-step. After identifying each sub-step, identify the inputs and outputs. In the example below, the outputs from many of the sub-steps are also inputs for the next step. One benefit a Conversion Map has over a Swim Lane Flowchart is that you can identify variance for each input and output. Identifying variance is helpful if you are looking to improve the process.

Conversion Map
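Along the same lines, here is a short sketch (again my own addition) that models a conversion map as data so that variance can be recorded for each input and output. The step names and variances are hypothetical.

# A sketch of a conversion map: each sub-step records its inputs, outputs,
# and observed variance. Step names and variances are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SubStep:
    name: str
    inputs: list
    outputs: list
    variance: list = field(default_factory=list)  # where results tend to drift

steps = [
    SubStep("Write storyboard",
            inputs=["Course outline"],
            outputs=["Storyboard draft"],
            variance=["Level of detail differs by designer"]),
    SubStep("Build prototype",
            inputs=["Storyboard draft"],   # the prior step's output feeds in
            outputs=["Course prototype"],
            variance=["Authoring-tool versions produce inconsistent layouts"]),
]

for step in steps:
    print(f"{step.name}: {', '.join(step.inputs)} -> {', '.join(step.outputs)}")
    for v in step.variance:
        print(f"  variance: {v}")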


PACT Performance Modeling

PACT stands for "Performance-based Accelerated Customer/Stakeholder-driven Training and Development of any blend." PACT is Guy Wallace's performance improvement system, of which his Performance Model is a part. Guy outlines the Performance Model in detail on his blog.

PACT Performance Model


Final Thoughts

The Swim Lane Flowchart is a great way to visualize a process and the people/groups involved. The problem I see with the Swim Lane Flowchart is that it does not provide a way to identify variance within the process, so it is limited in application.

Conversion Mapping is a great way to visually display the entire process and variance for each input and output within the process. However, unlike the Swim Lane Flowchart, Conversion Mapping does not identify the people or groups involved in each sub-step.

I see the Performance Model as a combination of the Swim Lane Flowchart and the Conversion Map. You identify the output and the sub-steps within the process, and then under the roles and responsibilities columns you identify the groups or people involved. Then, under the "Typical Performance Gaps" column, you identify the variance. And then, as Billy Mays would say, "But wait, there's more!" In addition to identifying the process, sub-processes, people, groups, and variance, the PACT Performance Model also has space for a Root Cause Analysis.

Thursday, December 20, 2012

Adventures in HPT: Project 1: Phase 1 FEA: Part 1 Project Alignment

Adventures in HPT

This post marks the beginning of a series I'd like to call "Adventures in HPT." Five months ago I accepted a volunteer position as the Director of Human Performance Technology (HPT) for the KeelWorks Foundation. KeelWorks (founded in 2008) is a non-profit organization aimed at developing business competencies in anyone in need.

From the Executive Director:

"The actual purpose of Keelworks is to help bring more individuals in to the contribution and "have" zone. KeelWorks aims to inculcate core competencies like problem-solving, overt personal identity development, assertiveness, goal and project management, and teambuilding. They hope to help pampered rich kids, struggling urban youth, and unsocialized youth develop the foundational emotional intelligence that supports success. 

This non-profit expects to deliver this boon to the poor and the disenfranchised, as well as the enabled. This will require significant support resources. In some cases, they'll be bringing services to communities without internet, electricity, or computers. Their product will be virtual because many in need can't come to them. 


This non-profit doesn't have a fund-raising department, instead, they leverage internships designed to help individuals with learning gain practical experience to support their career ambitions."

Adventures in HPT will chronicle projects I complete while serving in this position. All information shared in this blog is shared with the consent of the KeelWorks Foundation, which is dedicated to 100% transparency in its actions and operations.

The HPT department at KeelWorks is new. It has been given full latitude to operate within KeelWorks and has been offered complete cooperation in support of our data collection processes. My team consists of myself and Perri Kennedy, M.Sc. Perri and I worked on several projects together in graduate school. She has an excellent mind for analysis and evaluation, which is why I asked her to join my team.

Project 1: Our first project with the KeelWorks Foundation


Our first order of business as an HPT department was to speak with the stakeholders and get an idea of how the organization is operating. (Full disclosure: I had been working with KeelWorks for over a year and a half as an Instructional Designer, Project Manager, and Director of Project Management before becoming the Director of HPT.)


Phase 1: Front End Analysis

During our initial conversation we discussed several areas of concern. Because the organization has no financial budget, we asked the stakeholders to identify which areas of concern were their biggest pain points. Out of that conversation we had two points of investigation:
1. Volunteers are not producing.
2. There is a high level of volunteer turnover.

We were not ready to say how confident we were that these problems were factual (vs opinion) because the stakeholders were not able to provide us with quantifiable data during the call.

Part 1: Project Alignment Meeting


With an official project request, we were ready to begin project alignment (as Joe Harless would say). Our key point of contact was the primary stakeholder of the organization, who had already promised full resource support for our project. Our next task was to gain a better understanding of the problem so that it could be accurately defined. We looked for answers to the following questions:

Q: What prompted the request?

A: Both pain points were visibly noticeable to the stakeholders. After four years of existence the intern teams are still progressing slowly; despite some progress, none of the six teams have completed a course. Additionally, most interns and volunteers stay an average of three months and put in only 2.5 hours per week of the promised four.

Q: What is the organization's basic business goal?

A: Courses available for potential customers to access.

Q: What job(s) are impacted by this project?

A: All team volunteer positions at KeelWorks (Instructional Designers and Project Managers)

Q: What Outcomes are deficient?

A: Course is ready for development

Q: What is the project priority?

A: Top Priority

We determined that this project would most likely be a "Diagnostic Front End Analysis." For the record, a Diagnostic Front End Analysis is one that looks to improve existing performance (vs develop new performance).

Q: What is the goal of this project?

A: To conduct a Diagnostic FEA on the deficient outcome: Course is ready for development.

Q: What are the project constraints?

A: It may be difficult to conduct interviews or get responses via survey. All volunteers work virtually and are geographically dispersed. Additionally, busy schedules and/or past negative experiences with the organization may limit participation.

Q: What are the project parameters?

A: Interviews will need to be conducted via phone or Skype. Data sources include organization file archives in Google Drive.

Our project plan is to conduct phone-based interviews with several current and former interns (as available). Information collected from those interviews will be analyzed for possible root causes. That information will be converted to an online survey to reach the entire population. We will use that information to identify root causes and make recommendations to the stakeholders.