5 Needs Assessment and Data Analytics: Understanding Your Constituencies
Neal Legler
Abstract
Needs assessment is an important early step in the development of a mentoring program because it helps ensure that program resources go toward improving prioritized institutional results. Needs assessment should involve key stakeholders, organized into a needs assessment committee, and then follow a systematic process to collect and analyze quantitative and qualitative data and identify existing organizational needs. Needs are defined as the gap between desired organizational results and current results. They should be considered holistically and at all levels of the organization. As needs are identified, the needs assessment committee works with stakeholders through a combination of group management techniques to prioritize needs and identify solutions. The results are shared in a needs assessment report. Big data can provide additional insight into needs assessment by focusing on actual behavior and allowing for greater audience segmentation.
Correspondence and questions about this chapter should be sent to the author – neal.legler@usu.edu
Needs Assessment is Worth the Effort
Let us begin this chapter with some assumptions. First, we will assume that you want to operate a mentoring program that will be successful and cost-effective. You want to identify a demographic of individuals who will benefit from mentoring and then get them to participate and achieve results you can show for your efforts—results that will help you grow your program and reach more people. Finally, you would like to minimize your risks in the process.
If these assumptions are correct, then you need to take the time to perform a proper needs assessment. This recommendation likely comes as a statement of the obvious. Performance-improvement specialists and the like usually think of needs assessment as a useful step in planning. Still, they often skip it (Kaufman & Christensen, 2019) or fail to do it right (Watkins & Kavale, 2014).
The reasons are common. It may seem that needs assessment will take too long—that results are needed sooner than an assessment would allow. Needs assessment may seem redundant. After all, what brought you to this point in the first place? Was it not a recognition by you and/or your administrators that students, faculty, or staff need mentoring? Why put time and effort toward formally identifying something you already seem to know (or have been mandated to do)? Finally, professionals who plan mentoring programs rarely have much formal training or experience in needs assessment, so determining where to start, whom to involve, how and where to find information, and when to stop can all prove challenging.
In this chapter I attempt to remove barriers to doing needs assessment by providing basic information about how to conduct one and what the results ought to be. We begin with definitions, basic steps, and leading models and then move on to methods. We will draw heavily from two sources that go into great depth on the subject. The most comprehensive is the five-volume Needs Assessment Kit produced in 2010 by James Altschuld in collaboration with David Kumar, Nick Eastmond, Jeffrey White, and Jean King. The second is A Guide to Assessing Needs: Essential Tools for Collecting Information, Making Decisions, and Achieving Development Results, written for the World Bank in 2012 by Ryan Watkins, Maurya Meiers, and Yusra Visser. This chapter compiles and summarizes the advice of these two excellent works; readers who finish it and want more detail are encouraged to seek them out. To this summary I will add some ideas from the literature on needs assessment for training programs, which has applications to mentoring.
Outcomes of Needs Assessment
In the context of a needs assessment, a need is defined as a gap between current results (what is) and desired results (what ought to be). A needs assessment focuses on the noun form of the word need rather than the verb form, which is common to statements like “they need” and “we need” (Altschuld & Kumar, 2010; Kaufman & Guerra-Lopez, 2013). At its conclusion, a successful needs assessment should accomplish the following:
- a clarification of the desired results of the organization (more on this later) and its efforts
- an identification of needs, or in other words, a definition of the relevant gaps between current results and desired results
- a prioritization of needs to identify which should receive attention
- an analysis of the highest priority needs to identify relationships, causal factors, and what is working and not working
- an identification of possible solutions that could achieve desired results
- an evaluation of possible solutions to assess their value and feasibility
- a needs assessment report that summarizes findings and provides recommendations
- buy-in by key stakeholders on the findings and conclusions of the needs assessment (Watkins & Kavale, 2014; Watkins et al., 2012)
Rarely can stakeholder buy-in be achieved when a needs assessment is conducted as an individual exercise (Watkins et al., 2012). Needs assessment often results in resource allocation decisions and can therefore have political ramifications (Altschuld & Kumar, 2010). Various stakeholders may also experience the same needs in different ways (Barbazette, 2006). For these reasons and more, needs assessments should be collaborative exercises that seek to involve representatives from each stakeholder group. This may include students, faculty, staff, administrators, industry representatives, or others who would contribute to or benefit from a mentoring program.
Preconditions for Successful Needs Assessment
To achieve the outcomes listed above, several conditions should be in place before embarking on a needs assessment. These include, but are not limited to, administrative support (see Chapter 6), a budget, a reasonable (but relatively close-range) time frame, a needs assessment coordinator, who may or may not be the program coordinator (see Chapter 7), a preliminary plan of action, an awareness of existing data sources, and access to individuals who can help collect and analyze data.
Needs assessments typically do not require approval from an institutional review board unless the results will be published to the larger academic community or public. However, data security and privacy remain of utmost importance. Needs assessors should be aware of their institution’s data management and security policies. They should have systems and processes in place to ensure that sensitive data is kept and shared securely and that participants in the process are properly trained in handling sensitive data. Of particular importance in educational mentoring is an understanding of the implications and guidelines set by the Family Educational Rights and Privacy Act (1974).
Defining “Desired Results”
In this chapter, I define results as the outcomes (see Chapter 8) of what the organization and its program seek to achieve. In the context of higher education and mentoring, results entail things like academic or professional performance, persistence and graduation (for students), or retention and promotion (for faculty and staff). It may also entail other priority outcomes, such as participation in research, service activity, internships, and other high-impact practices (Kuh, 2008).
As noted above, a need is a discrepancy between current results and desired results. However, “desired results” can mean different things to different people. They could mean the most ideal results—even if they are unattainable. They could also be interpreted as the most likely results, which can be reasonably accomplished but may not be perfectly ideal. In fact, there is value in identifying both the ideal and most likely results in a needs assessment process. It then becomes the job of the needs assessment team to decide upon the desired results against which needs will be assessed (Altschuld & Kumar, 2010).
Needs assessors should also be careful to separate needs from wants, in some cases categorized as felt needs (Bradshaw, 1972). A felt need may seem important to stakeholders but does not meet the definition of a need as the gap between current and desired results. An example of a felt need would be a department head saying she needs more resources. Her department may very well benefit from more resources, and more resources may later be identified as one solution to a need, but they do not constitute a need in the sense of needs assessment. Instead, it is better to focus on where the department may be falling short of its desired results. In so doing, keep in mind that desired results are not only achieved at the level of the recipients of service but also at the community and employee level. Hypothetically, a department could be doing a good job taking care of its students, but employee morale may suffer while additional important projects go untended.
Bradshaw (1972) also categorized some types of needs as normative or compared against some kind of standard of what is to be expected or normal. Thinking of desired results in a normative sense can help a needs assessment committee define what those desired results ought to be. Scriven and Roth (1990) suggest a reasonable person rule, which implies asking what a reasonable person would say is a need. Hopefully, the needs assessment committee is composed of mostly reasonable people.
Before we move on, let us delve a bit further into the notion of levels at which desired results can be achieved. One of the leading proponents of needs assessment, Roger Kaufman, developed the organizational elements model (OEM) to serve as a hierarchy of planning for needs assessment (Kaufman, 1987; Kaufman, 1992; Kaufman & Christensen, 2019). The OEM states that organizational results occur at the following levels of a hierarchy:
- Mega/Outcomes (societal contributions)
- Macro/Outputs (organizational contributions)
- Micro/Products (individual contributions)
- Processes
- Inputs
Keeping our focus on higher education, the Mega/Outcomes level is the level at which the educational institution impacts society—through the types of graduates it produces, the research it produces, its impact on the economy, and so on. The Macro/Outputs level entails the metrics most university administrators are comfortable thinking about, such as persistence, graduation, GPA, admissions, funding, research, teaching, service, morale, and so on. At the Micro/Products level, we start to focus on the job performance of individual faculty and staff and the quality of specific courses and programs. Then at the Processes level, we evaluate the impact of policy, practices, timelines, information systems, and the like. Inputs, then, include the funding, tools, materials, human resources, and other ingredients that go into the institution’s outcomes and outputs.
Altschuld and Kumar (2010) provide a similar categorization of needs. They define three levels:
- Level 1: Recipients of services (students, if they are the target audience)
- Level 2: Deliverers of services (teachers, advisors, and mentors)
- Level 3: The system supporting Levels 1 and 2 (support staff, buildings, software, policies, and procedures)
Both the Kaufman approach and the Altschuld and Kumar approach to categorizing needs suggest the same thing—the level at which the needs assessment committee begins can make a significant difference in the quality and nature of the needs assessment recommendations. Kaufman and Christensen (2019) suggest entering the hierarchy at the highest level possible. Kaufman asserts that “by starting where your external clients are, at the Mega level, and linking to all other organizational elements, you may be better assured that you will be considering root issues and not just the presenting symptoms” (2019).
Focusing on external clients sounds like an obvious recommendation at the start, but Altschuld and Kumar (2010) caution that the focus can easily shift:
Since needs assessments are often conducted by Level 2 personnel, it is no great surprise that many stress, overtly or implicitly, the concerns of Levels 2 and 3 over those of Level 1, with lip service given to the latter.
Losing sight of Level 1 needs can come easily because data is often most available for Levels 2 and 3.
In short, needs assessment should be driven by a careful clarification of desired results and considered at a more holistic level than members of the committee may initially want to focus on. It behooves the coordinator of the needs assessment process to make sure this happens.
Process of a Needs Assessment
Let us briefly recap what we have discussed so far. Needs assessment is an important part of program planning because it helps ensure that time and resources spent on solutions will go toward improving results in critical areas. Needs assessment should be a collaborative activity that involves multiple key stakeholders. It should focus holistically on the results of the organization across multiple levels and succeed in identifying relevant needs, prioritizing those needs, analyzing their nature and causes, and then producing recommendations for activities and solutions. Next, we will discuss the process of conducting a needs assessment.
Minimum Steps
According to Watkins et al. (2012) and Stefaniak (2021), a needs assessment should, at a minimum, involve three steps, listed below and summarized in Table 5.1:
- Identify: Collect data and build the initial list of needs.
- Analyze: Prioritize needs and analyze their causes.
- Decide: Consider possible solutions and make recommendations.
Table 5.1
Minimum Needs Assessment Steps and Activities
Step 1: Identify | Step 2: Analyze | Step 3: Decide
---|---|---
Collect data and build the initial list of needs. | Prioritize needs and analyze their causes. | Consider possible solutions and make recommendations.
Note. From Watkins et al. (2012).
The various models that exist for conducting needs assessment all build in these three minimum steps; individual models then add recommendations suited to particular contexts.
Altschuld Three-Phase Model for Larger-Scale Needs Assessments
One of the most well-known needs assessment models is the three-phase model developed by Altschuld and Kumar (2010). This model roughly aligns with the minimum steps above, but its focus is on committee management, so it is useful for larger-scale needs assessments. The three phases are:
- Phase I: Pre-Assessment: Organizing the committee and reviewing existing data.
- Phase II: Assessment: Organizing the committee to collect and analyze new data
- Phase III: Post-Assessment: Organizing the committee to select, plan, and implement solutions.
Table 5.2 lists some of the activities associated with the three-phase model. For the purposes of this chapter, the steps of Identify, Analyze, and Decide are also included, placed roughly where they first occur within each phase.
Table 5.2
Altschuld and Kumar’s Three-Phase Model for Needs Assessment
Phase I: Pre-Assessment | Phase II: Assessment | Phase III: Post-Assessment
---|---|---
—Identify— Organize the committee and review existing data. | —Analyze— Organize the committee to collect and analyze new data. | —Decide— Organize the committee to select, plan, and implement solutions.
Note. From Altschuld and Kumar (2010).
Phase I
We will now review some recommendations relative to each phase of the Altschuld model (mostly according to Altschuld and Kumar unless otherwise noted). Phase I is all about organization. It involves getting the right team together, making sure they know the purpose and targets of the needs assessment, and then identifying existing data to review and new data to be collected.
The needs assessment committee facilitator performs three roles—planner, maintainer, and coach. It is important for the facilitator to stay on top of tasks and maintain contact with subcommittees as they do their work. Organizers should try to find the optimal size for the committee—one that includes an appropriate cross-section of stakeholders but does not become unwieldy. A small committee may have 5–10 people. Larger ones may have 12–25. Pros and cons exist for both large and small committees, so the decision on size depends on feasibility, what the group will be doing, and the criteria for participants (Altschuld & Eastmond, 2010). When inviting stakeholders to join the committee, it helps to leverage existing campus partnerships for support. Organizers should seek to build a “coalition of the willing” to avoid internal friction (Educational Advisory Board, 2021).
Phase I begins with an orientation meeting in which committee members learn how the assessment got started and what its focus, general process, and budget will be. Everyone should be clear about the current phase of the assessment and the organizational levels it is targeting.
Phase I is the time to appropriately define the scope of the assessment—one that is not so broad that it becomes overwhelming or so narrow that it loses efficacy. Where possible, the focus should be on short-term needs that can be quickly resolved and needs that are of high priority to all involved stakeholders (Witkin, 1984).
Success relies on open communication with stakeholders, budget managers, and decision-makers throughout Phase I (Witkin, 1984). Organizers should plan and budget for ongoing communication with the larger institution and with stakeholders—especially if organizational change is a possibility.
The data-collection process should always start by identifying and evaluating data that already exists in the organization. Altschuld (2010) writes:
Too often it is assumed that it will take a good deal of time and resources to study needs and to draw action-based conclusions from their results. The assumption is not quite correct. Organizations are frequently awash in information, or external groups can supply much that is relevant.
However, the committee should use existing data carefully. It probably was not collected for their exact purposes, and it may reflect perception rather than verified fact. Existing data should be checked against other sources, and any shortcomings should be noted in later recommendations (McGoldrick & Tobey, 2016).
Altschuld and Eastmond (2010) suggest the generic timeline shown in Table 5.3 for Phase I committee meetings.
Table 5.3
Generic Timeline for Needs Assessment Committee
Session | Description of typical activities
---|---
First session |
Second session (1–2 weeks later) |
Third session |
Fourth session and/or others |
Fifth session | Decide whether to stop the needs assessment, proceed to Phase II, or skip to Phase III.
Note. From Altschuld and Eastmond (2010).
As noted in Table 5.3, at the conclusion of Phase I, the committee should decide whether to stop the needs assessment process altogether, proceed to Phase II if more data collection is deemed necessary, or skip to Phase III if there is enough data to allow for analysis and decision-making.
Phase II
If the committee determines that more data is advisable, then Phase II is where the collection and analysis of new data occurs. The committee should be informed when they have moved on to Phase II. The next section of this chapter will focus more on various data collection methods. Meanwhile, we will review some general considerations for Phase II, again attributable to Altschuld and Kumar (2010), unless otherwise noted.
The needs assessment committee should feel that new information is warranted before it is collected because it is usually expensive to obtain. When collecting new information, quantitative and qualitative methods should be used together and budgeted for. It may be useful to recruit the assistance of a statistician or statistics graduate for population sampling and statistical analysis (McGoldrick & Tobey, 2016).
The goal of data collection should be to collect enough data to make decisions (Watkins et al., 2012), but only data that will actually be used. Data collection should stop when the data starts to get repetitive (McGoldrick and Tobey, 2016). The main objective is “action, not understanding” (Block, 2000). When selecting collection methods, the time, resources, and other costs needed for each method should be considered. Could cheaper methods yield the same data? What are the logistics? (McGoldrick & Tobey, 2016).
To control for bias, more than one person should be involved in data collection, and where possible, data should be shared with the people from whom it was collected (McGoldrick & Tobey, 2016). It is important to document the data collection process to show how priorities were determined and help with continued data interpretation later. Documentation also helps for later program evaluation. Data tables should be dated in case they are used for later projects.
One of the last steps of Phase II is to spend time determining the activities to occur in Phase III.
Phase III
Phase III is where data is analyzed; needs are identified, refined, and prioritized; and potential solutions are identified and recommended. It is a highly collaborative phase that requires various group management techniques, conflict resolution tactics, and negotiation skills for evaluating data and reaching agreement. Some of these methods will be listed in a later section of this chapter. In Phase III, it might make sense to adjust the needs assessment committee as the focus shifts to action plans (Altschuld & Kumar, 2010).
Needs assessors should not shy away from needs that are not related to mentoring or training. Although the needs assessment may have been initiated with a specific focus on mentoring, the final report should include recommendations and solutions for any relevant needs, regardless of whether they relate to mentoring. Why? Because resolving needs and achieving desired results is the goal of the needs assessment process—not just implementing a specific preidentified program. After all, if nonmentoring recommendations are made but not implemented, and then desired results are not achieved, the mentoring program itself is less likely to be used as a scapegoat. Either way, there is documentation (McGoldrick & Tobey, 2016). A question to continually keep in mind, according to McGoldrick and Tobey, should be “If the ultimate [mentoring] program is perfect, what else is going on in the organization that will result in the [organizational] needs not being met?”
The committee should take time for benchmarking. According to Altschuld and Kumar (2010), “There is no substitute for seeing first-hand how similar needs are being handled by other organizations.” How long have solutions been in operation? What changes were needed and how were they dealt with?
As solutions are identified, plans should be made to pilot test recommended solutions before full implementation. Time should also be taken to debrief and evaluate the needs assessment process and record lessons learned.
The Needs Assessment Report
The final product of needs assessment will be the creation of a needs assessment report. McGoldrick and Tobey (2016) recommend separating analysis from recommendations in reports and presentations—presenting analysis first and then moving on to recommendations while focusing on the items that are within the institution’s power to address. Watkins et al. (2012) suggest the following typical contents of a needs assessment report:
- executive summary
- introduction
- purpose, goals, and objectives
- needs
- methods for identifying needs
- data identifying needs
- actions considered
- methods for identifying alternatives
- data on alternatives
- criteria for comparing
- conclusion
- decisions or recommendations
- acknowledgments
- appendices: supporting data, tools, and instruments
Data Collection and Analysis Methods
The heaviest proverbial lift of needs assessment is the collection and analysis of data. We now move from a high-level overview of needs assessment outcomes, processes, and models to focus on the nitty-gritty details of obtaining and sorting through data and information. The general order of steps to follow when collecting and analyzing data is:
- Determine what information and data are needed.
- Extract information from existing data sources.
- Collect additional data as needed, using a mix of quantitative and qualitative methods.
- Analyze data to use for decision-making.
Using Existing Data: The Document or Data Review
As mentioned earlier, data that is valuable for a needs assessment may have already been collected and made available in the form of documents, data files, published data sets, reports, and more. Working with existing data is the best starting point because it is usually inexpensive and doesn’t rely on significant input from other sources. When working with existing data, needs assessors should keep in mind that the data probably was not collected for the same purposes as the needs assessment, and the level of quality control used in the data collection process may have been uncertain. Needs assessors should take time to determine the quality of existing data and verify it against other resources (Watkins et al., 2012). Some techniques for systematically reviewing existing data are as follows (Watkins et al., 2012):
- List characteristics to look for when selecting existing data resources. Data does not necessarily have to be organizational. It can come from external sources as long as it can be applied to your population of students and staff.
- Assign at least two people to review existing data resources for the sake of obtaining multiple viewpoints.
- Develop a document review form or checklist to guide document reviewers and help them record their findings in a standardized way that can be compared and coded (a small code sketch follows this list).
- Collect, consolidate, and code the observations gathered from various reviewers and document sources.
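To make the idea of a standardized review form concrete, here is a minimal sketch in Python. The field names, rating scales, and the example entry are hypothetical placeholders, not a form prescribed by Watkins et al. (2012).

```python
# Minimal sketch of a standardized document review form, so that observations
# from multiple reviewers can be consolidated, compared, and coded.
# All field names and values below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class DocumentReview:
    reviewer: str
    source: str
    relevance: int           # e.g., 1-5 rating of relevance to the assessment
    collection_quality: int  # e.g., 1-5 rating of how carefully the data was collected
    levels_addressed: list[str] = field(default_factory=list)  # e.g., ["Level 1"]
    notes: str = ""

review = DocumentReview(
    reviewer="Reviewer A",
    source="First-year retention report",
    relevance=4,
    collection_quality=3,
    levels_addressed=["Level 1", "Level 3"],
    notes="Retention dips for transfer students; no mentoring data included.",
)
print(review)
```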
Collecting and Analyzing Additional Data: Methods
Outside of existing archival data, additional methods of collecting data will likely be needed. These can be both qualitative and quantitative in nature, and Altschuld and Kumar (2010) recommend a mix of both. Assessors should not rely on just one data source or measurement method to identify needs. Table 5.4 provides a compilation of various data collection methods and their typical nature and purposes. References to hard versus soft data are indications of whether the data can be verified with another source (hard) or not (soft) (Altschuld & Kumar, 2010; Watkins et al., 2012; Altschuld & Watkins, 2014; McGoldrick & Tobey, 2016).
Table 5.4
Various Data Collection Methods
Method | Strengths | Weaknesses | Nature
---|---|---|---
Surveys | Scaled questions address desired and current results. Can be administered to many people. Allows for multiple perspectives. | It is easy to conflate perception and performance data. Follow-up questions are difficult. | Quantitative, but based upon the values, judgments, and opinions of respondents. Soft data (not externally verifiable).
Performance assessments/tests | Collects current performance, knowledge, skill, and mastery. Identifies gaps in outcomes. | Does not provide much opportunity for unforeseen insights. Dependent on the quality and validity of the assessment. | Quantitative. Hard data (externally verifiable).
Performance observations | Identifies current performance and can identify desired performance. | The presence of an observer can change the performance of the observed. | Qualitative. Hard data, if multiple sources are used.
Job/task analysis | Defines the desired level of performance and learning. | Can be subjective and idealized. More complex than the actual task. | Qualitative and quantitative. Hard data, if multiple sources are used.
Individual interviews | Identifies current performance, values, judgments, and opinions. Allows for discovery and follow-up. | Takes time and can be difficult to analyze. | Qualitative. Soft data.
Critical incident interviews | Identifies current performance and can identify desired performance. | Recollections of past behavior can be subjective, idealized, and modified. | Qualitative. Hard data, if multiple sources are used.
Guided expert reviews | Defines desired performance. Provides perspectives on needs and decisions. Can increase buy-in. | Experts’ lack of internal knowledge can detract from recommendations. Personal agendas may interfere. | Qualitative. Hard data, if multiple sources are used.
Focus group interviews | Identifies current performance, values, judgments, and opinions. Allows for discovery and follow-up. Collects data from multiple people at once. Allows comments to build on each other. Can form consensus. | Unequal participation, groupthink, tangents, and concealment of sensitive information can skew results. | Qualitative. Hard data (with caveats relative to group dynamics).
SWOT analysis | Group identification of strengths, weaknesses, opportunities, and threats can assist with prioritization. | Does not directly address the identification of needs; supplemental. Subject to group dynamics. | Qualitative. Hard data, if multiple sources are used.
Delphi technique | Achieves consensus from a distributed group of experts on needs, desired results, probable causes, and/or solutions. | Outcomes could be compromised if the facilitator fails to select a representative panel, choose good initial questions, or follow an effective implementation strategy. Participant dropout is also a risk. | Quantitative and qualitative. Hard data.
Tutorials and literature abound on these techniques, and higher-education professionals have experienced many of them to some degree. However, we shall attempt a brief, nondefinitive overview of each technique with some additional focus on tactics relative to needs assessment.
Surveys
Using Multiple Response Scales. Surveys are common to needs assessment, and they provide a unique opportunity to present respondents with a question that can be answered from multiple perspectives using two or more scales. The use of multiple scales is helpful in identifying the discrepancy between “what is” and “what should be.” Consider the following simple example:
Figure 5.1
Simple Example of a Double-Scaled Needs Assessment Item
Academic support services | To what extent is this service important to you? (1 = Not important at all, 5 = Extremely important) | To what extent do you use this service? (1 = Never, 5 = Very frequently)
---|---|---
Tutoring center | NA 1 2 3 4 5 | 1 2 3 4 5
Academic advising | NA 1 2 3 4 5 | 1 2 3 4 5
Online study skills courses | NA 1 2 3 4 5 | 1 2 3 4 5
Note. Adapted from Altschuld and Kumar’s (2010, p. 87) Needs Assessment: An Overview and Altschuld’s (2010, p. 50) Needs Assessment: Phase II: Collecting Data.
In Figure 5.1, responses can help the researcher identify the perceived value of a service compared to the actual use of the service and therefore begin to quantify gaps between desired results and actual results. An additional scale could be added, perhaps asking respondents to rate their satisfaction with a service. Multiple scales also allow the needs assessor to ask, with a single question, “Are we accomplishing x?” and “Should we be accomplishing x?” where x represents a given result. They are a good idea for needs assessment surveys.
Additional Survey Design Recommendations. However, designing an effective needs assessment survey is not quite as simple as that. More considerations go into effective survey design. Altschuld (2010), Barbazette (2006), and Watkins et al. (2012) suggest several principles.
Researchers should gather preliminary information from preexisting sources before writing the survey. This will help them understand what information gaps to fill and tailor the survey accordingly. They should know ahead of time what will be done with the results of the survey and write survey objectives. Keeping the desired end results in mind will help them avoid asking extraneous or poorly worded questions.
The content of the survey should be carefully selected by referring to early committee decisions on what information is most needed. Those who will interpret and report the data should also be included in the survey design.
Researchers should design and deliver separate surveys for different targeted levels of the organization, such as service recipients, service providers, and members of the support system. Surveys should be tailored to each level, paying attention to the wording and order of questions. If in doubt, survey designers should err on the side of oversampling the most important groups to ensure compiling enough information.
Needs assessment surveys should use at least two scales for survey questions, as noted above. The audience should be considered in order to make sure the scales are easy to understand, and someone should proofread the survey for double negatives, big words, typos, and the like. A good survey usually includes some open-ended questions to collect additional clarifying information, but too many of these should be avoided because they can take a long time to analyze.
Researchers may consider asking a few additional questions about obstacles and solutions. Although the needs assessment survey should be primarily focused on needs and not solutions, there might not be a second survey, so it may not hurt to add a few questions that will prove informative to later phases of the needs assessment process.
The survey should be pilot tested and checked for reliability by administering it at different times under the same conditions and seeing if the results match. Pilot testing can help ensure respondents share a common understanding of the survey questions. Pilot testing is also a way to check the validity of the content to see if the questions measure what they were intended to measure.
Surveys have some limitations. They should not be used alone in needs assessment but should be combined with other data collection methodologies. Watkins et al. (2012) caution specifically against confusing the perception data that comes from surveys with performance data. Survey responses are subjective and only represent the respondents’ perceptions of how they, their peers, and others are performing. The data from surveys is quantitative and quantifiable, but it is not hard data. Surveys also do not provide the opportunity to ask follow-up questions, so additional qualitative methods can add significant insights to survey results. As a simple rule, focus groups (or some variation of group feedback sessions) should be accompanied by surveys and vice versa. Focus group participants should be invited to fill out a survey as a way of collecting individual perspectives to compare against group perspectives.
Survey Analysis. Altschuld and White (2010) suggest the steps shown in Table 5.5 for analyzing quantitative needs assessment data.
Table 5.5
Steps for Analyzing Quantitative Data
Step | Notes
---|---
Data collection | Compile the data into a spreadsheet, database, or computer program.
Data quality | Visually inspect the data to ensure quality and integrity before analysis begins.
Data manipulation | Make backup copies of the data and then divide them into subsets. Restructure and transform them as needed for the analysis.
Data analysis | Analyze the data using descriptive statistics and inferential statistics to test hypotheses. Inspect the reliability and validity of the survey.
Data summary | Summarize the findings in a way that stakeholders and the needs assessment committee can understand and use.
Note. From Altschuld and White’s (2010, p. 58) Needs Assessment: Analysis and Prioritization.
An initial step when inspecting survey data is to look for anomalies, outliers, and missing data and determine the cause—including whether survey design may have led to unexpected results. Out-of-range and invalid responses should be removed (Wulder, 2005; Altschuld & White, 2010; DiLalla & Dollinger, 2006).
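As a minimal illustration of this screening step, the sketch below flags out-of-range and missing responses on hypothetical 1–5 scale items; the records and field names are placeholders for whatever the survey tool exports.

```python
# Minimal sketch of an out-of-range/missing-data screen for 1-5 scale items.
# The response records below are hypothetical placeholders.
responses = [
    {"id": 1, "importance": 4, "use": 2},
    {"id": 2, "importance": 6, "use": 3},     # out of range
    {"id": 3, "importance": None, "use": 5},  # missing
]

VALID = set(range(1, 6))  # legal responses on a 5-point scale
clean, flagged = [], []
for record in responses:
    if all(record[key] in VALID for key in ("importance", "use")):
        clean.append(record)
    else:
        flagged.append(record)  # determine the cause before discarding

print(f"{len(clean)} clean, {len(flagged)} flagged for review")
```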
Multi-scale questions should be analyzed for the following (a short code sketch follows this list):
- Discrepancy: The desired-state value minus the current-state value, in the form of a number.
- Direction: Whether the discrepancy value is positive or negative. If positive, this is a possible need. If negative, this is a possible opportunity.
- Position: The order of discrepancy values relative to each other, e.g., -3, -2, -2, 0, 0, 1, 3.
- Demographic Differences: Nuances in the data that appear when filtering along demographic categories (Watkins et al., 2012).
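Here is a minimal sketch of that discrepancy arithmetic; the items and mean ratings are hypothetical placeholders.

```python
# Minimal sketch of a discrepancy analysis for double-scaled survey items.
# Items and mean ratings are hypothetical placeholders.
items = {
    # item: (mean desired-state rating, mean current-state rating)
    "Tutoring center": (4.6, 2.1),
    "Academic advising": (4.2, 3.9),
    "Online study skills courses": (2.8, 3.5),
}

results = []
for item, (desired, current) in items.items():
    discrepancy = desired - current  # desired-state minus current-state
    direction = "possible need" if discrepancy > 0 else "possible opportunity"
    results.append((discrepancy, item, direction))

# Position: order the discrepancy values relative to each other.
for discrepancy, item, direction in sorted(results, reverse=True):
    print(f"{item}: {discrepancy:+.1f} ({direction})")
```

In practice, the same computation would be repeated within each demographic subgroup to surface demographic differences.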
Where possible, multi-scale questions can be organized into matrices using Altschuld’s (2010) quadrant method, shown in Figure 5.2, with findings sorted accordingly. In Figure 5.2, we assume that a survey question asks respondents to rate, on two 5-point Likert scales, how well a certain goal is being met and how important it is perceived to be, with 5 being high attainment/high importance and 1 being low attainment/low importance. The two scales are placed in a matrix fashion along the horizontal and vertical axes, and then the mean response values for each question are plotted into each quadrant. Each quadrant suggests a certain meaning to be derived from the results. For example, the high importance/low attainment quadrant is the one where the largest needs will land. The high importance/high attainment quadrant is the one where the organization’s most successful outputs ought to land. A short classification sketch follows the figure.
Figure 5.2
Quadrant Matrix for Sorting Survey Responses
Mean Goal Attainment \ Mean Goal Importance | Below-average importance | Above-average importance
---|---|---
Above-average attainment | No need. “Candidate for further study or cut in resources” | No need. “Keep up the good work”
Below-average attainment | No need | Need
Note. From Altschuld and White’s (2010, p. 44) Needs Assessment: Analysis and Prioritization.
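The quadrant logic is easy to express in code. In the minimal sketch below, the scale midpoint of 3 stands in for “average,” and the items and mean ratings are hypothetical placeholders.

```python
# Minimal sketch of the quadrant method: classify each survey item by its
# mean importance and mean attainment. The midpoint and items are assumptions.
MIDPOINT = 3.0  # stand-in for "average" on a 1-5 scale

def quadrant(importance: float, attainment: float) -> str:
    if importance >= MIDPOINT and attainment < MIDPOINT:
        return "Need"
    if importance >= MIDPOINT:
        return "No need: keep up the good work"
    if attainment >= MIDPOINT:
        return "No need: candidate for further study or cut in resources"
    return "No need: low importance, low attainment"

items = {"Tutoring center": (4.6, 2.1), "Academic advising": (4.2, 3.9)}
for name, (importance, attainment) in items.items():
    print(f"{name}: {quadrant(importance, attainment)}")
```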
In scoring survey results, the primary focus should be on trends and patterns in values, attitudes, and behaviors rather than exact numbers and percentages. Quantitative results should be reviewed in tandem with qualitative data to help interpret the results or identify potential problems with the data. As survey findings are considered, five considerations proposed by Patten (2009) are useful, as cited by Altschuld and White (2010):
- What is the cost-effectiveness of the findings?
- Do the implications of the findings make a crucial difference?
- Are the implications acceptable to the stakeholders?
- Are the implications acceptable publicly and politically?
- Are there ethical and legal concerns with the findings?
Reviewing Open-Ended, Qualitative Survey Responses. Most surveys include some open-ended questions for collecting qualitative data. Altschuld and White (2010) provide an overview of steps for analyzing qualitative data, which applies to open-ended survey questions as well as other qualitative methods. A small tallying sketch follows the list.
- Review the general structure of the qualitative method.
- Skim a sample of the qualitative data. Get a sense of how the data is structured and its nuances. Look for patterns.
- Begin reviewing responses and list main ideas that seem to emerge as variables. Give these variables a preliminary name.
- Narrow the variables list down to initial data categories (IDCs). These are the variables that seem to keep emerging. Start tagging each response with these categories and keep a tally per question. (Or use a qualitative analysis program to do the grunt work.)
- Move to a higher-level analysis and identify themes per question. Think of a theme as the underlying meaning or explanation for the IDCs.
- Analyze at a higher level still and identify linking/over-arching themes across all questions.
- Review the overarching themes. Do they help you understand needs? Do they fit other data? Do they suggest the need for more information?
- Verify/confirm the quality of the data. Do data from other independent sources concur?
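As a minimal illustration of the tagging-and-tallying step above, the sketch below counts hypothetical IDC tags across responses; in practice, the tagging itself is done by human reviewers or qualitative analysis software.

```python
# Minimal sketch of tallying initial data categories (IDCs) across open-ended
# responses. Category names and tags are hypothetical placeholders.
from collections import Counter

# Each response has been tagged with one or more IDCs by a reviewer.
tagged_responses = [
    {"advising access", "scheduling"},
    {"advising access"},
    {"mentor availability", "scheduling"},
    {"mentor availability", "advising access"},
]

tally = Counter()
for tags in tagged_responses:
    tally.update(tags)

for category, count in tally.most_common():
    print(f"{category}: {count}")
```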
Integrate Quantitative and Qualitative Survey Data. The variables, categories, and themes that emerge while reviewing open-ended questions should be compared against the quantitative survey data, with consideration of how the qualitative findings influence the interpretation of the quantitative findings. Note, of course, that the data and themes collected from other data collection methods should be compared to the survey findings as well.
We will now briefly address some of the characteristics and techniques associated with other data collection methods.
Performance Assessments/Tests
Sometimes the desired level of performance expected of employees, mentors, or students is well defined, so a straightforward performance assessment can identify where results are falling short. By this we mean some sort of test, such as a written test that prompts a recall or synthesis of facts, or an observation of a task completion.
Testing can be challenging to get right. Good tests are valid, meaning they measure what they are intended to measure; reliable, meaning they measure the same thing consistently; fair, meaning they are not biased against particular subpopulations; and secure, meaning they are difficult to cheat (AERA et al., 2014). If available, a test that has already been created through a rigorous process will save time. If that is not available, the following concepts apply to assessment development (Dick et al., 2015).
- Identify or create performance objectives for the skills to be tested and determine a mastery level—sometimes referred to as a cut score—that represents acceptable performance.
- Determine the best way to observe the test taker’s performance. Can a multiple-choice question do the trick? Is a free-form written response needed? Or does the test taker need to be observed doing something? Does it need to be in an authentic environment, or can it be in a more controlled environment?
- Write questions or rating rubrics that are easy to understand and unambiguous. Get feedback and try to test them out on multiple test takers or raters.
- Find a way to offer the test in a way that is monitored to avoid cheating.
When administering tests, needs assessors should gather qualitative feedback from test takers on what they thought or felt or might have needed as they completed the test.
Performance Observations
At first glance, a performance observation may look a lot like a performance assessment. However, there is one key difference. A performance assessment begins with a desired performance in mind and simply assesses whether students or employees meet the criteria or how far they fall short. Performance observations do not begin with a desired level of performance in mind. They are more exploratory. They occur when the needs assessor needs to better understand how students, employees, or customers perform a task—particularly one that isn’t well defined. Performance observations may be conducted with experts, novices, and those in-between as a means of defining what performance at varying skill levels looks like. They can be focused on defining both “what is” and “what should be.”
Performance observations rely on watching an individual perform a task as unobtrusively as possible to avoid changing the performance of the person being observed. For example, experienced graduate student researchers may be observed in a lab setting to identify the best practices and efficiencies they use. Less experienced students may also be observed in the same setting to identify differences. Wherever possible, observations should be conducted with a checklist or protocol in hand to achieve consistent observations. They should always be concluded with a debrief between the observer and the observed to better understand what was going through the mind of the observed person while performing the task (Watkins et al., 2012).
Job/Task Analyses
A job or task analysis seeks to accomplish the same thing as a performance observation—defining how a task is done and how it ought to be done. It differs from a performance observation in that it is conducted like an interview. The needs assessor does not actually watch the job or task being performed. For this reason, it tends to focus more on ideal performance—defining what should be. The needs assessor usually diagrams a task from start to finish using flow-charting techniques to create a type of if/then decision tree representing the task (Dick et al., 2015).
A task analysis becomes useful in several situations. It creates a useful framework for developing training and assessment. It also works well when observation is difficult for safety or privacy reasons. For example, a needs assessor focusing on student mental health counseling might choose to interview counselors about their processes rather than observe the process due to confidentiality concerns. Helpfully, a task analysis produces a document that other experts can review and provide feedback on. However, task analysis usually takes more time to develop than observing the task itself (Watkins et al., 2012).
Individual Interviews, Critical Incident Reviews, and Expert Reviews
Conducting interviews with students, employees, and other stakeholders allows the needs assessor to have an in-depth and focused discussion on specific topics. Interviews are excellent for documenting stories and context. They also take a lot of time and can be difficult to analyze when there are contradictory views. Some tips for successful interviews include having a protocol, some predetermined (nonleading) questions, a release form, and a comfortable location (Watkins et al., 2012).
A method that can be used during an interview (or a focus group, for that matter) is a critical incident review. This involves the interviewer asking the interviewee to recall past events as examples and explore the conditions, context, activities, and results associated with the incident.
A needs assessor may also invite outside experts to be interviewed. This is a useful way to get informed perspectives on institutional needs from someone who is not so close to the problem as to have a tainted perspective. Information from outside experts has a way of increasing buy-in from stakeholders. Conversely, outside experts’ lack of internal knowledge may limit the credibility of their recommendations. If using this technique, assessors should try to follow a standard protocol and recruit experts who do not have personal agendas (Watkins et al., 2012).
Focus Group Interviews
One of the most common qualitative needs assessment techniques is the focus group interview, in which a small group is brought together in a facilitated format to discuss a series of questions. It has the advantage of interviewing multiple people simultaneously, thus saving time while allowing participants to build on each other’s ideas and reactions and perhaps even reach a consensus. However, focus groups can present challenges. Strong-minded individuals can skew the apparent group outcomes in directions that deviate from what individuals may actually think and feel. Groupthink is always a risk. There may be unequal participation, and some people may not share sensitive information or views. Group tangents can frequently prevent all questions from being addressed. In short, a good focus group requires a skilled facilitator (Watkins et al., 2012).
Focus groups are more successful when they have an identified purpose, agenda, guide, and/or protocol. Information requirements should be prioritized, so the facilitator starts the discussion with the highest priority items. Likewise, the highest-priority participants should be identified, with the meeting scheduled to fit their availability.
Some general recommendations for facilitation are to select a decision-making technique for the group, enforce confidentiality, allow group members time to think, and regularly report back what is heard for clarification. If focus group members are not letting others participate, they can be asked to leave.
To aid in later focus group analysis, one of the researchers should write down observations of group dynamics in addition to the actual content of what is said. The session should also be recorded with permission in the form of a consent or release form.
A focus group should be paired with a survey so that participants can answer questions individually, away from group influence. A survey before the focus group can prime thought processes in advance, while a survey after can prompt discussion-informed reflection. Assessors may consider applying a pre- and post-focus group survey, if feasible (Watkins et al., 2012).
SWOT Analysis
The SWOT analysis is a popular focus-group method. SWOT is an acronym for strengths, weaknesses, opportunities, and threats—categories for describing an organization’s context, operations, and future. Focus group participants are asked to brainstorm together what the institution’s strengths and weaknesses are and then identify potential opportunities for growth and success as well as existential threats. These can be mapped on a matrix to show their relationship one to another. A SWOT analysis is not as directly related to needs identification as the other techniques discussed in this chapter, but it can provide a useful supplement to a needs analysis—particularly when it comes to prioritizing needs. When used as a technique, it should be coupled with other methods (Watkins et al., 2012).
Delphi Technique
The Delphi technique has advantages in achieving consensus among experts who are separated by time and distance. It is similar to the nominal group technique but can be done over email. The technique works as follows (Watkins et al., 2012; Altschuld & King, 2010). An expert panel of 30–50 participants is selected and invited (with clear expectations for time and commitment), and a short questionnaire is administered to all participants. It can ask just a single question. Responses are received and coded into a single list. The compiled list is sent out as a second questionnaire, in which participants are asked to rate each listed element in terms of importance or relevance. Results are tabulated again, and the mean, median, mode, and interquartile range are calculated to determine consensus. A third questionnaire is then put together with the items of most importance from the second questionnaire. Again, expert participants are asked to rate each element. This process repeats until a consensus is reached, usually after about four rounds. Of course, the final consensus is shared with the panelists, who will likely be curious.
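A minimal sketch of the between-round tabulation follows, computing the consensus statistics named above; the item names and panelists’ ratings are hypothetical placeholders.

```python
# Minimal sketch of tabulating Delphi ratings between rounds: for each item,
# compute the mean, median, mode, and interquartile range (IQR) of the
# panel's importance ratings. Items and ratings are hypothetical.
import statistics

round_two_ratings = {
    "First-year mentoring": [5, 4, 5, 4, 4, 5, 3, 5],
    "Grant-writing support": [3, 2, 4, 3, 5, 2, 3, 3],
}

for item, ratings in round_two_ratings.items():
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    print(
        f"{item}: mean={statistics.mean(ratings):.2f}, "
        f"median={statistics.median(ratings)}, "
        f"mode={statistics.mode(ratings)}, IQR={q3 - q1:.2f}"
    )
```

A small IQR signals convergence; items with a wide spread go back to the panel for another round.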
Success with the Delphi technique involves providing incentives, ensuring commitment from expert panelists, getting an endorsement from an influential person, and staying closely involved with participants throughout the process.
Integrating Your Findings
By now, it is hopefully clear that good data collection and analysis includes information gathered from preexisting sources plus information gathered through multiple appropriate techniques—ideally combining quantitative and qualitative data. Major trends, themes, and needs should have emerged from the analysis of each of these data sources, all of which should be compared and combined to facilitate a unified understanding of major institutional needs.
Prioritizing, Deciding, and Recommending
It may be tempting to call the needs assessment done at this point. But there is one more important step to complete. It is now time to make decisions. These decisions include prioritizing needs, identifying potential solutions, and determining recommendations. This step should also be collaborative, involving key stakeholders. It is, in fact, the part of needs assessment in which the coordinator will most often exercise skills in negotiation, presentation, and agreement arbitration.
In the Altschuld three-phase model, the decision step entails much of Phase III. You may recall from our earlier overview of the three-phase model that membership on the needs assessment committee may change at this phase. Some researchers may leave, and some decision-makers may enter. However, replacing nearly everybody on the committee is not a good idea. The committee should keep a critical mass of people familiar with the process and decisions made to this point to avoid a complete change of direction.
Altschuld (2010) notes that the needs assessment coordinator may need to put forth a concerted effort in Phase III to reach out to committee members and keep them engaged—especially if the needs assessment has taken some time or there have been organizational changes. The history of needs assessment is littered with examples of efforts that failed due to lost energy, funding, or organizational support. Momentum is key. It is maintained through regular stakeholder contact, regular meetings, and processes that do not require any more time and effort than what is needed to make a decision.
Decision-Making Methods
Ultimately, the trick to prioritizing needs, identifying potential solutions, and making recommendations is to bring a group of decision-makers and experts together to discuss options and reach a consensus. Any number of methods can be used to achieve this goal, some of which I will discuss. The key is to find the methods that match the circumstance.
Nominal Group Technique
The nominal group technique can work with groups of about 3–30 people. It typically occurs in person. The group facilitator gives everybody a single topic and asks individuals to write down their thoughts. Each group member then shares a single response, which is written on a board or flip chart; the facilitator goes around the group again for second and third responses. Group members are not allowed to challenge or disagree with responses at this point. Each contribution is assigned a letter. Finally, group members are asked to identify the top five or so items that seem most important and rank them in order of priority. This is usually done using index cards. The facilitator reads through the list of ideas, and group members share the rank they gave each idea. The facilitator then adds up the rankings to identify the top priorities. Ranking and scoring can occur for two or three more rounds for long lists (Watkins et al., 2012; Altschuld & King, 2010).
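The facilitator’s final tally is simple arithmetic. Here is a minimal sketch with hypothetical ballots, assuming each member awards 5 points to their top-ranked idea down to 1 point for the fifth.

```python
# Minimal sketch of the nominal group tally: sum the rank points each member
# awards to their top five ideas. Ideas and ballots are hypothetical.
from collections import defaultdict

# One ballot per member: {idea letter: rank points awarded}
ballots = [
    {"A": 5, "C": 4, "D": 3, "B": 2, "F": 1},
    {"C": 5, "A": 4, "B": 3, "E": 2, "D": 1},
    {"A": 5, "B": 4, "C": 3, "F": 2, "E": 1},
]

totals = defaultdict(int)
for ballot in ballots:
    for idea, points in ballot.items():
        totals[idea] += points

for idea, points in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"Idea {idea}: {points} points")
```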
Multi-Criteria Analysis
Higher-education professionals may recognize the multi-criteria analysis approach as a common method for hiring or vendor selection committees. For this technique, group members are given a rubric that lists various selection criteria along the column headers. Each row lists an alternative need, solution, or idea. Group members are asked to score each item for each of the criteria. Scores are added up and sometimes averaged, first for individuals and then as a group, to select the item of highest priority or interest. This technique allows the assessors to add extra weight to certain important variables such as cost. As a drawback, this technique can also be easily manipulated. A key to success is to make sure rating criteria are clear and to avoid getting carried away with too many criteria (Watkins et al., 2012).
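A minimal sketch of the scoring arithmetic, including an extra-weighted cost criterion, follows; the criteria, weights, alternatives, and scores are hypothetical placeholders.

```python
# Minimal sketch of a multi-criteria analysis. Criteria, weights, and scores
# are hypothetical; cost is scored so that 5 = least expensive.
criteria_weights = {"impact": 1.0, "feasibility": 1.0, "cost": 2.0}

# alternative -> group-averaged score (1-5) per criterion
scores = {
    "Peer mentoring program":    {"impact": 4, "feasibility": 4, "cost": 3},
    "Faculty mentoring program": {"impact": 5, "feasibility": 3, "cost": 2},
    "Online mentoring platform": {"impact": 3, "feasibility": 5, "cost": 4},
}

for alternative, ratings in scores.items():
    total = sum(criteria_weights[c] * value for c, value in ratings.items())
    print(f"{alternative}: weighted score = {total}")
```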
Pairwise Comparison
A pairwise comparison is a simple way to narrow the options to be considered to a set of agreed-upon criteria. It involves, essentially, looking at two competing options and choosing the preferable option, then repeating this process until every option has been compared with every other option and a winner has been selected in each case. The group facilitator then tallies up the number of times each option was selected and lists them in order of most selected to least selected. The group then discusses whether this is an acceptable prioritization (Watkins et al., 2012).
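The bookkeeping behind a pairwise comparison is equally simple. In the sketch below, the options are hypothetical and choose() is a stand-in for the group’s vote on each pair.

```python
# Minimal sketch of a pairwise-comparison tally: compare every option with
# every other option and count how often each one wins.
from itertools import combinations
from collections import Counter

options = ["Mentor training", "Advising redesign", "Tutoring expansion"]

def choose(a: str, b: str) -> str:
    """Stand-in for the group's vote on one pair of options."""
    return min(a, b)  # placeholder rule; replace with the actual group choice

wins = Counter()
for a, b in combinations(options, 2):
    wins[choose(a, b)] += 1

for option, count in wins.most_common():
    print(f"{option}: preferred {count} time(s)")
```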
2×2 Matrix
The 2×2 matrix approach is a way of evaluating priorities across different populations. It involves labeling two columns as “High” and “Low” priorities for one audience and then labeling two rows as “High” and “Low” priorities for another audience, as shown in Figure 5.3 (Watkins et al., 2012).
Figure 5.3
2×2 Matrix Example (Hypothetically Crafted on Stereotypes)
 | High priorities for students | Low priorities for students
---|---|---
High priorities for instructors | Good student grades | Reading the syllabus
Low priorities for instructors | Fast turnaround for grades | Reading emails
This method can be a useful way to check that a needs analysis has not lost its focus on an important audience.
Scenario Analysis
Scenario analysis involves discussing as a group the benefits and risks of alternative (ideally prewritten) scenarios. Facilitators can spend time with the group exploring both optimistic and pessimistic outlooks. Competing scenarios should assume the same time frame and should factor in existing information and trends as well as uncertainties and the possibility of unexpected events. Group members should be asked to rank scenarios and provide alternatives. Prewritten scenarios do not have to be selected as-is. Combinations and alternatives should be welcome (Watkins et al., 2012).
Fault Tree Analysis
A fault tree analysis is a group diagramming exercise meant to get at the root cause of results gaps. The facilitator sits with a group in front of a rather large writing space and writes down a problem or event at the top. The group then helps break down the causes of that problem or event. These are written on the chart. Then each contributing factor is broken down further until the root causes are identified. The shape of the chart will probably resemble something like a pine tree (Watkins et al., 2012).
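Although fault tree analysis is a whiteboard exercise, the finished chart is simply a tree whose leaves are candidate root causes. A minimal sketch of that structure, with a hypothetical problem and hypothetical causes, follows.

```python
# A fault tree captured as nested dicts: each key is a problem or
# contributing factor, each value maps to its causes. Leaves (empty
# dicts) are candidate root causes.
fault_tree = {
    "low mentee retention": {
        "poor mentor match": {
            "no intake survey": {},
            "rushed matching timeline": {},
        },
        "unclear expectations": {
            "no program orientation": {},
        },
    }
}

def root_causes(tree):
    """Walk the tree and collect nodes with no further causes."""
    causes = []
    for node, children in tree.items():
        if children:
            causes.extend(root_causes(children))
        else:
            causes.append(node)
    return causes

print(root_causes(fault_tree))
# -> ['no intake survey', 'rushed matching timeline', 'no program orientation']
```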
Concept Mapping
Whereas fault tree analysis is a diagramming exercise for identifying root causes of problems, concept mapping is a diagramming exercise that can be used for defining processes or discussing potential solutions and their feasibility. The process usually begins with brainstorming. Ideas are then placed into clusters, which are labeled and related to each other. Each cluster is rated based on its feasibility or importance. The group can then identify patterns and choose which clusters to prioritize (Watkins et al., 2012).
Mixing Methods
As with data collection and analysis methods, a combination of group decision-making methods will usually be beneficial for prioritizing needs, identifying solutions, and making recommendations. Groups can be expected to meet at least three times through this process. There will be an initial working session, one or more sessions for follow-up work, and a session for finalizing decisions and recommendations. Once the group decision-making process is complete, it is up to the core assessment team to produce the needs assessment report and presentation.
The Brave New World of Big Data
The pioneering work of Altschuld, Kaufman, Watkins, and others, which I have discussed to this point, has focused largely on conventional data collection techniques such as surveys, focus groups, and interviews. However, higher-education institutions increasingly operate on large electronic databases and information systems and, as a result, have a growing amount of data at their disposal. A recent study of college students by the Education Advisory Board (2021) concluded that students increasingly expect their institutions to leverage this data to create better educational experiences. Institutional data includes learning behavior data, financial data, campus engagement data, and disaggregated demographic data (EAB, 2021). The value of this and other big data is that it provides information that was never collected before and, in fact, could not be collected through a survey: actual behavior data, such as LMS logins and page views, facilities usage, timeliness with assignments, and time spent with course content, rather than reported behavior data.
Institutional data isn’t the only big data option. Researchers can also access a growing set of generalized demographic data about specific target audiences. This can include Google search data, data on social media patterns, consumer behavior, and much more. According to Stephens-Davidowitz (2017), big data presents the following four advantages:
- It offers up new types of data, such as search terms and granular behavior patterns.
- It provides more honest data—allowing us to better see what people actually do and want and not just what they say they do and want.
- It allows us to zoom in on smaller subsets of people thanks to larger amounts of more granular data.
- It allows us to perform many causal experiments, such as A/B testing on websites, where users are presented with one of two experiences and researchers can examine which one gets more interaction (a minimal worked example follows this list).
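To make that last point concrete before returning to the third, here is a minimal sketch of how the results of a hypothetical A/B test might be compared using a standard two-proportion z-test. The page versions, visitor counts, and click numbers are invented for illustration.

```python
# Two-proportion z-test for a hypothetical A/B test: did version B of a
# mentoring sign-up page get a higher click-through rate than version A?
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 120, 2000   # version A: 6.0% click-through
clicks_b, visitors_b = 165, 2000   # version B: 8.25% click-through

p_a = clicks_a / visitors_a
p_b = clicks_b / visitors_b
p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)

# Standard error of the difference under the pooled proportion.
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these invented numbers, the difference is statistically significant, which is the kind of evidence a needs assessor could never get from self-report alone.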
Let us elaborate on the third advantage: narrowing a researcher's focus to smaller subsets of people. A particular strength of big data is the ability to identify and compare data doppelgängers, individuals who are closely matched in terms of the datasets they produce. Computer algorithms are designed to identify data doppelgängers, categorize them, and find common outcomes for each category. Algorithms can then, with a fair degree of accuracy, predict outcomes for individuals who fall into these doppelgänger categories (Stephens-Davidowitz, 2017). Many institutions now license software that compiles university data and does this algorithmic work on student performance and persistence. The results are better lists of at-risk students, barrier courses, and activities that increase students' likelihood of persistence. Similar solutions for faculty and staff are not yet as prevalent, although big data analytics platforms for human resources do exist.
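The matching idea behind data doppelgängers can be illustrated with a toy nearest-neighbor comparison. The sketch below uses invented feature vectors and plain Euclidean distance; licensed analytics platforms use far more sophisticated models and far more data.

```python
# Find a student's closest "data doppelgänger" by Euclidean distance
# over a few behavior features (all values hypothetical):
# [LMS logins/week, on-time assignment rate, hours with course content].
from math import dist

past_students = {
    "student_001": ([12.0, 0.95, 6.5], "persisted"),
    "student_002": ([3.0, 0.40, 1.0], "withdrew"),
    "student_003": ([8.0, 0.75, 4.0], "persisted"),
}

new_student = [4.0, 0.50, 1.5]

# Closest match by feature distance; its outcome is the naive prediction.
match = min(past_students.items(), key=lambda kv: dist(new_student, kv[1][0]))
name, (features, outcome) = match
print(f"Closest doppelgänger: {name} (outcome: {outcome})")
```

In practice the features would be normalized and the prediction would draw on many matched individuals rather than a single nearest neighbor, but the underlying logic is the same.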
Big data is the stuff of science fiction and the cause of either terror or elation on the part of university personnel. Institutions must commit to transparency and ethics in their use of institutional data, and needs assessors should be aware of their institutions' data security and privacy policies and get to know their institutional data analysts. The odds are good that big, aggregated, segmented, and perhaps even predictive datasets are already available without too much overhead.
Wrapping Up: A Word on Evaluations and Standards
Describing a systematic process in detail, with its many options and caveats, can make the process look daunting. To conclude, then, we need to pull back from our 10x-magnification view of needs assessment and see it in proportion, as one part of a larger whole. To begin, remember that you will never use all of the processes described in this chapter in a single needs assessment; you should do only what is necessary to make informed decisions. Next, note that an effectively run needs assessment will likely reduce the amount of work that comes later in setting up a mentoring program. You should come away from it with a clear direction for program design and implementation and a reduced likelihood of missteps and backtracking. Lastly, needs assessment feeds directly into program evaluation (see Chapter 13). Once you have taken the time to define and prioritize needs and plan solutions accordingly, evaluation becomes a matter of seeing whether the chosen solutions closed the gaps you identified in the first place. Some of the same instruments used for needs assessment can even be repurposed for evaluation. Needs assessment fits the notion of building a mentoring program as a systematic rather than an ad hoc effort, and systematic approaches are nearly always more efficient, successful, and defensible.
As a final note, needs assessment supports nearly every element of existing mentoring program standards. Chapter 13 refers to the European Mentoring and Coaching Council and its International Standards for Mentoring and Coaching Programmes (EMCC, n.d.). These include clarity of purpose, stakeholder training and debriefing, a process for selecting and matching, and a process for measurement and review. Needs assessment clarifies purpose, involves stakeholders from the start, helps with the selection of target audiences and effective processes, and sets the stage for effective program evaluation.
References
Altschuld, J. W. (2010). Needs assessment: Phase II: Collecting data. Sage.
Altschuld, J. W., & Eastmond, J. N. (2010). Needs assessment: Phase I: Getting started. Sage.
Altschuld, J. W., & King, J. A. (2010). Needs assessment: Phase III: Taking action for change. Sage.
Altschuld, J. W., & Kumar, D. D. (2010). Needs assessment: An overview. Sage.
Altschuld, J. W., & Watkins, R. (2014). A final note about improving needs assessment research and practice. In J. W. Altschuld & R. Watkins (Eds.), Needs assessment: Trends and a view toward the future. American Evaluation Association.
Altschuld, J. W., & White, J. L. (2010). Needs assessment: Analysis and prioritization. Sage.
American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. American Educational Research Association.
Barbazette, J. (2006). Training needs assessment: Methods, tools, and techniques. Pfeiffer.
Block, P. (2000). Flawless consulting (2nd ed.). Jossey-Bass/Pfeiffer.
Bradshaw, J. (1972). The concept of social need. New Society, 30, 640–643.
Dick, W., Carey, L., & Carey, J. O. (2015). The systematic design of instruction (8th ed.). Pearson.
DiLalla, D. L., & Dollinger, S. J. (2006). Cleaning up data and running preliminary analyses. In F. T. L. Leong & J. T. Austin (Eds.), The psychology research handbook: A guide for graduate students and research assistants (2nd ed., pp. 241–253). Sage.
Education Advisory Board (EAB). (2021). Data priorities for student success: Four case studies to inspire and advance your success analytics. EAB. Retrieved May 4, 2022, from https://eab.com/insights/blogs/operations/next-gen-success-analytics/
European Mentoring and Coaching Council (EMCC). (n.d.). ISMCP standards. EMCC Global. Retrieved April 15, 2022, from https://www.emccglobal.org/accreditation/ismcp/standards/
Family Educational Rights and Privacy Act, 20 U.S.C. § 1232g; 34 CFR Part 99 (1974).
Kaufman, R. (1982). A needs assessment primer. Training and Development Journal, 41(10).
Kaufman, R. (1992). Strategic planning plus: An organizational guide. Sage.
Kaufman, R., & Christensen, B. D. (2019). Needs assessment: Three approaches with one purpose. Performance Improvement, 58(3), 28–33.
Kaufman, R., & Guerra-Lopez, I. (2013). Needs assessment for organizational success. Association for Talent Development.
Kuh, G. D. (2008). High impact educational practices: What they are, who has access to them, and why they matter. Association of American Colleges and Universities.
McGoldrick, B., & Tobey, D. (2016). Needs assessment basics (2nd ed.). ATD Press.
Patten, M. L. (2009). Understanding research methods: An overview of the essentials (7th ed.). Pyrczak.
Scriven, M., & Roth, J. (1990). Needs assessment: Concepts and practice. Reprinted in Evaluation Practice, 11(2), 135–144.
Stefaniak, J. E. (2021). Determining environmental and contextual needs. In J. K. McDonald & R. E. West (Eds.), Design for learning: Principles, processes, and praxis. EdTech Books. https://edtechbooks.org/id/needs_analysis
Stephens-Davidowitz, S. (2017). Everybody lies: Big data, new data, and what the internet reveals about who we really are. Harper Collins.
Watkins, R., & Kavale, J. (2014). Needs: Defining what you are assessing. In J. W. Altschuld & R. Watkins (Eds.), Needs assessment: Trends and a view toward the future. American Evaluation Association.
Watkins, R., Meiers, M. W., & Visser, Y. L. (2012). A guide to assessing needs: Essential tools for collecting information, making decisions, and achieving development results. World Bank.
Witkin, B. R. (1984). Assessing needs in educational and social programs: Using information to make decisions, set priorities, and allocate resources. Jossey-Bass.
Wulder, M. (2005). A practical guide to the use of selected multivariate statistics. Canadian Forest Service, Pacific Forestry Centre. http://dx.doi.org/10.13140/RG.2.1.1544.6566