Validated Instruments
Study instruments and tools must be provided at the time of protocol submission. Protect internal validity. Internal validity refers to how well your experiment is free of outside influences that could taint its results.
Because the grades on a test will vary across different age brackets, for example, a valid instrument should control for such differences and isolate true scores. Protect external validity. External validity refers to how well your study reflects the real world rather than just an artificial situation. An instrument may work perfectly with a group of white male college students, but this does not mean its results are generalizable to children, blue-collar adults, or people of other genders and ethnicities.
For an instrument to have high external validity, it must be applicable to a diverse group of people and a wide array of natural environments.
To accomplish this, we need to understand the processes and routines used at the organizational level. The Canadian Health Services Research Foundation has conceptualized 'organizational research use' as an iterative process that involves acquiring, assessing, adapting, and applying research evidence to inform health system decisions.
Improving evidence-informed decision-making at this broader level requires a better understanding of the processes and routines related to the use of health services research in an organization. In other words, the commitment to evidence-informed decision-making first requires taking stock of the facilitators and challenges facing those who could potentially use evidence to make decisions. By taking stock, concrete ideas can be developed to support the acquisition, assessment, adaptation, and application of research findings.
Thus, the foundation's vision of an organization that uses research is one that invests in people, processes, and structures to increase their capacity to use research. The purpose of this paper is to describe the response variability, differentiability, and usability of a self-assessment tool for organizations to evaluate their ability to use research findings. The mission of the foundation is to support evidence-informed decision-making in the organization, management, and delivery of health services through funding research, building capacity, and transferring knowledge.
The implementation of evidence-informed decision-making in health care organizations is unlikely to follow the clinical model of evidence-based medicine. Individuals cannot adopt or implement research findings on their own; they require organizational support and resources. To illustrate, in one study, the characteristics of research per se did not fully explain the uptake of research findings whereas users' adoption of research, users' acquisition efforts, and users' organizational contexts were found to be good predictors of the uptake of research by government officials in Canada [ 9 ].
Further, empirical work in the field of organization and management clearly shows that successful individual adoption is only one component of the assimilation of innovations in healthcare organizations [ 10 ]. Yet, studies of individuals as adopters of research have generally not addressed the potential role of organizational elements that could be harnessed to influence the adoption process [ 11 ].
Recent frameworks related to the implementation of research or innovations are beginning to consider those organizational elements that act as barriers or facilitators to the uptake and use of research by individuals [ 12 - 14 ]. Authors have discussed the importance of such things as organizational structural features, culture and beliefs, leadership style, and resources, described in more detail below. Of note is that some of these frameworks collapse the distinction among the different types of decision-makers who might be supported in the use of research; we also took this generic approach when we evaluated the 'Is research working for you?' tool in various settings.
Studies have demonstrated associations between organizational variables and the diffusion of innovations. Systematic reviews have identified some organizational features that are implicated in the successful assimilation of an innovation.
Structural determinants, such as large organizational size and decentralized decision-making processes, were found to be significantly associated with the adoption of innovations [ 15 , 16 ]. Organizational complexity, indicated by specialization, professionalism, and functional differentiation, was also associated with innovation diffusion [ 17 ].
Resources and organizational slack are needed to introduce and support new innovations, as well as to provide monetary reimbursement for those professionals or their organizations that incorporate innovations into their routines [ 15 , 18 ].
There are also two non-structural determinants that have an impact on what is called organizational innovativeness: absorptive capacity and receptive context for change [ 15 ]. The organization's capacity to absorb innovation is its ability to acquire, assimilate, transform, and exploit new knowledge; to link it with its own prior related knowledge; and to facilitate organizational change [ 19 ].
Thus, an organization that supports and encourages innovation, data collection and analysis, and critical appraisal skills among its members will be more likely to use and apply research evidence [ 20 ].
The receptive context for change refers to the organization's ability to assimilate innovations by providing strong leadership, clear strategic vision, and the possibility for experimentation. While it is difficult to draw definitive conclusions from primary innovation studies due to their methodological weaknesses [ 18 ], the user's system, or organizational context, does appear to be one of the major determinants affecting the assessment, interpretation, and utilization of research.
These findings imply the need to commit organizational resources to ensure successful adoption of research findings for effective decision-making by individuals within the organization [ 21 , 22 ]. Resources need to be accompanied by strategies that go beyond the individual and consider the collective, in order to build a culture of evidence-informed decision-making.
One promising view of how organizations should effectively learn and manage knowledge, 'learning organizations' [ 23 ], may be helpful for enabling the use of research in decision-making. Learning organizations are characterised as organizations that stimulate continuous learning among staff through collaborative professional relationships across and beyond organizational levels. Moreover, individual goals are aligned with organizational goals, and staff is encouraged to participate in decision-making, which in turn promotes an interest in the future of the organization [ 23 ].
Another pertinent perspective is Nonaka's theory of collective knowledge creation [ 24 ]. Through 'fields of interactions', individuals exchange and convert explicit and tacit knowledge, thereby creating new collective organizational understandings. Both learning organizations and the theory of knowledge creation emphasize the need for on-going social interactions in order for knowledge to spread from the individual user to groups of users, which in turn can affect organizational structures and processes.
Decision-makers can increase their ability to identify and assess new knowledge generated from research activities and use that knowledge to enhance their organizational capabilities. A first step in this change process is to examine an organization's capacity to access, interpret, and absorb research findings. The self-assessment tool 'Is research working for you? A self-assessment tool and discussion guide for health services management and policy organizations' was developed by the Canadian Health Services Research Foundation and colleagues in response to requests for assistance from Canadian health service delivery organizations in identifying their organization's strengths and weaknesses in evidence-informed decision-making.
The tool was designed to help organizations examine and understand their capacity to gather, interpret, and use research evidence.
Accordingly, in this paper, we narrowly define 'evidence' to mean scientific findings from research studies that can be found in the academic literature and in the unpublished literature. Development of the tool involved an iterative process of brainstorming, literature reviews, focus groups, evaluations of use, and revisions. Development began with a first version of the self-assessment tool that was informed by a review of the health literature on the major organizational capabilities for evidence-informed decision-making [ 25 ].
The result was a short, 'self-audit' questionnaire that focused on accessing, appraising, and applying research. The questionnaire was later revised based on a review of the business literature that encompassed topics such as organizational behaviour and knowledge management [ 26 ].
As a result, the questionnaire's three A's (accessing, appraising, and applying) were supplemented with a fourth A: adapting. Focus groups with representatives from regional health authorities, provincial ministries of health, and health services executives provided feedback on the strengths and weaknesses of the instrument. Adjustments to the wording of items on the tool were made based on focus group input. Further, revisions reflected the need to create a group response with representatives from across the levels of the organization, because both literature reviews and focus groups clearly indicated that while evidence-informed decision-making is often portrayed as a discrete event, it is in fact a complex process involving many individuals.
The tool itself is organized into four general areas of assessment.
Acquire: can your organization find and obtain the research findings it needs?
Assess: can your organization assess research findings to ensure they are reliable, relevant, and applicable to you?
Adapt: can your organization present the research to decision makers in a useful way?
Apply: are there skills, structures, processes, and a culture in your organization to promote and use research findings in decision-making?
Each of these areas contains a number of items. For example, under 'acquire', users are asked to determine if 'we have skilled staff for research'.
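As a rough illustration only, the Python sketch below tabulates individual responses to a tool organized this way; the item wording, the five-point agreement scale, and the per-area summaries are hypothetical assumptions and are not taken from the foundation's instrument.

    # Hypothetical sketch of tabulating self-assessment responses by area.
    # Item wording and the 1-5 agreement scale are illustrative assumptions,
    # not the actual content or scoring of the foundation's tool.
    from statistics import mean, stdev

    TOOL = {
        "Acquire": ["We have skilled staff to find research",
                    "We can obtain the research findings we need"],
        "Assess":  ["We can judge whether research findings are reliable and relevant"],
        "Adapt":   ["We can present research to decision makers in a useful way"],
        "Apply":   ["Our culture and processes promote using research in decisions"],
    }

    def summarize(responses):
        """responses: {participant: {item: rating 1-5}} -> per-area mean and spread."""
        summary = {}
        for area, items in TOOL.items():
            ratings = [r[item] for r in responses.values() for item in items if item in r]
            summary[area] = {
                "mean": round(mean(ratings), 2),
                # standard deviation as a crude indicator of response variability
                "sd": round(stdev(ratings), 2) if len(ratings) > 1 else 0.0,
            }
        return summary

    # Example: three participants from one organization rate every item.
    example = {p: {item: score for items in TOOL.values() for item in items}
               for p, score in (("p1", 3), ("p2", 4), ("p3", 2))}
    print(summarize(example))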
An earlier version of the tool was used for this study; the revised, current version of the tool can be obtained by sending a request to ac. The research objectives were to: determine whether the tool demonstrated response variability; describe how the tool differentiated between organizations that were known to be, a priori , lower-end or higher-end research users; and describe the potential usability of the tool within selected organizations in four health sectors. A mixed methods study design was used.
Focus groups provided a rich source of qualitative data, while participants' responses to the tool yielded quantitative data. Focus groups were conducted among four sectors of Canadian health organizations: selected branches of federal government, long-term care organizations, non-governmental organizations, and community-based organizations.
Key advisors actively involved in each of the sectors identified organizations that were expected to be higher-end versus lower-end research users. With respect to public health (as part of community-based organizations), university-affiliated health units in Ontario were categorized as higher-end research users and all other health units were categorized as lower-end research users.
The original aim was to recruit 40 organizations; ten from each of the four sectors. Not all organizations were invited to participate: once it became clear that organizations in a sector were interested and that we were approaching or had approached our sample size goal, we stopped inviting new organizations.
To recruit participants, an e-mail was sent to the contact person in a randomly selected organization within each sector, asking them to participate in a two-hour, on-site focus group. A pre-determined leader from their group explained the procedures and managed the first hour of the focus group.
Participants were asked to work through the tool as if at a regular organizational meeting. They individually completed the tool (sometimes in advance of the meeting) and then discussed the items and their rankings, in most cases deriving a group consensus ranking on items. The research team facilitator was present for the first hour of the focus group but did not contribute unless clarification about the procedures was required.
In the second hour, the research team facilitator posed questions, asking group members to discuss overall impressions of the tool, identify insights that emerged during the review of items on the tool, and comment on areas of research utilization and capacity that may not have been addressed. Facilitators and note-takers produced a debriefing note after each session.
All sessions were tape recorded and transcribed with the consent of participants. Respondents were asked to return copies of their completed tools to the research team. They were given these instructions either at the end of the focus group session or several weeks following the focus group.
Dual Response Approach — One of three approaches to robust design. Alternatives are Taguchi methods and robust tolerance analysis.
FMEA (Failure Modes and Effects Analysis) — A structured analysis of the ways a product or process can fail. It includes the identification of possible failure modes, determination of the potential causes and consequences, and an analysis of the associated risk.
It also includes a record of corrective actions or controls implemented, resulting in a detailed control plan. FMEAs can be performed on both the product and the process. Typically an FMEA is performed at the component level, starting with potential failures and then tracing up to the consequences. This is a bottom-up approach. A variation is a Fault Tree Analysis, which starts with possible consequences and traces down to the potential causes.
This is the top-down approach. An FMEA tends to be more detailed and better at identifying potential problems. However, a fault tree analysis can be performed earlier in the design process, before the design has been resolved down to individual components. See FMEA for a comparison.
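The entry above mentions analysing the associated risk; one common convention (not spelled out in the entry itself) is to score each failure mode for severity, occurrence, and detection and rank by their product, the risk priority number. The Python sketch below illustrates that convention; the 1-10 scales and the example failure modes are made-up assumptions.

    # Minimal FMEA worksheet sketch using the common risk priority number
    # convention (RPN = severity x occurrence x detection). The 1-10 scales
    # and the example failure modes are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class FailureMode:
        component: str
        failure: str
        cause: str
        effect: str
        severity: int    # 1 (minor) .. 10 (hazardous)
        occurrence: int  # 1 (rare) .. 10 (frequent)
        detection: int   # 1 (certain to be caught) .. 10 (undetectable)
        actions: list = field(default_factory=list)  # corrective actions / controls

        @property
        def rpn(self) -> int:
            return self.severity * self.occurrence * self.detection

    modes = [
        FailureMode("seal", "leaks", "improper seating", "fluid loss", 7, 4, 5),
        FailureMode("clip", "cracks", "over-torque at assembly", "rattle", 3, 6, 2),
    ]

    # Work the highest-risk failure modes first, recording controls as they are added.
    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"{m.component}: {m.failure} due to {m.cause} -> {m.effect} (RPN {m.rpn})")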
Alternatives are to perform capability studies and analysis of means on the measurement device.
Mistake Proofing Methods — Mistake proofing refers to the broad array of methods used either to make the occurrence of a defect impossible or to ensure that the defect does not pass undetected. The Japanese refer to mistake proofing as Poka-Yoke. The general strategy is to first attempt to make it impossible for the defect to occur.
For example, to make it impossible for a part to be assembled backward, make the ends of the part different sizes or shapes so that the part only fits one way. If this is not possible, attempt to ensure the defect is detected. This might involve mounting a bar above a chute that will stop any parts that are too high from continuing down the line. Other possibilities include mitigating the effect of a defect (seatbelts in cars) and lessening the chance of human error by implementing self-checks.
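As a loose software analogue of the detection strategy (the bar above the chute), the following sketch rejects parts that exceed an assumed gate height; the dimensions and limit are made-up values for illustration.

    # Illustrative software analogue of a detection-style mistake-proofing check:
    # parts taller than an assumed gate height are rejected before continuing,
    # like a bar mounted above a chute. All numbers are made up.
    GATE_HEIGHT_MM = 25.0

    def passes_gate(part_height_mm: float) -> bool:
        """Return True if the part may continue down the line."""
        return part_height_mm <= GATE_HEIGHT_MM

    for height in (24.2, 24.8, 26.1):
        print(f"part height {height} mm -> {'continue' if passes_gate(height) else 'reject'}")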
Multi-Vari Chart — Graphical procedure for isolating the largest source of variation so that further efforts concentrate on that source.
Response Surface Study — A response surface study is a special type of designed experiment whose purpose is to model the relationship between the key input variables and the outputs. Performing a response surface study involves running the process at different settings for the inputs, called trials, and measuring the resulting outputs.
An equation can then be fit to the data to model the effects of the inputs on the outputs. This equation can then be used to find optimal targets using robust design methods and to establish targets or operating windows using a tolerance analysis. The number of trials required by a response surface study increases exponentially with the number of inputs.
It is desirable to keep the number of inputs studied to a minimum. However, failure to include a key input can compromise the results. To ensure that only the key input variables are included in the study, a screening experiment is frequently performed first.
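As an illustration of the fitting step, the sketch below uses ordinary least squares (plain numpy) to fit a second-order equation to trial data; the coded trial settings and measured outputs are made-up numbers, and a real study would also examine fit diagnostics.

    # Sketch of fitting a second-order response surface by ordinary least squares
    # (numpy only). The trial settings (coded -1/0/+1) and measured outputs are
    # made-up numbers for illustration.
    import numpy as np

    x1 = np.array([-1, -1,  1,  1, 0,  0, 0, -1, 1], dtype=float)
    x2 = np.array([-1,  1, -1,  1, 0, -1, 1,  0, 0], dtype=float)
    y  = np.array([12.1, 14.0, 15.2, 13.5, 16.0, 14.8, 15.1, 13.9, 15.6])

    # Model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("fitted coefficients:", np.round(coef, 3))
    # The fitted equation can then be explored to pick robust targets or to set
    # operating windows with a tolerance analysis.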
Robust Design Methods — Robust design methods refer collectively to the different methods of selecting optimal targets for the inputs. Generally, when one thinks of reducing variation, tightening tolerances comes to mind. However, as demonstrated by Taguchi, variation can also be reduced by the careful selection of targets.
When nonlinear relationships exist between the inputs and the outputs, one can select targets for the inputs that make the outputs less sensitive to the inputs. The result is that while the inputs continue to vary, less of this variation is transmitted to the output, causing the output to vary less. Reducing variation by adjusting targets is called robust design.
In robust design, the objective is to select targets for the inputs that result in on-target performance with minimum variation. Several methods of obtaining robust designs exist, including robust tolerance analysis, the dual response approach, and Taguchi methods.
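The variance-transmission idea can be illustrated with a short simulation: for a made-up nonlinear model, the same input variation produces very different output variation depending on where the input is targeted. The model, candidate targets, and input standard deviation below are assumptions for illustration only.

    # Sketch of the variance-transmission idea behind robust design. The process
    # model f, the candidate targets, and the input standard deviation are all
    # made-up assumptions.
    import random, statistics

    def f(x):
        # Made-up nonlinear model: flat near x = 3, steep near x = 1.
        return 10 + (x - 3) ** 2

    def output_sd(target, input_sd=0.2, n=20000):
        random.seed(0)
        return statistics.stdev(f(random.gauss(target, input_sd)) for _ in range(n))

    for target in (1.0, 3.0):
        print(f"input target {target}: output sd ~ {output_sd(target):.3f}")
    # The flat region transmits far less of the same input variation to the output.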
Robust Tolerance Analysis — One of three approaches to robust design. Requires estimates of the amounts that the inputs will vary during long-term manufacturing. Alternatives are Taguchi methods and the dual response approach.
Screening Experiment — A screening experiment is a special type of designed experiment whose primary purpose is to identify the key input variables. Screening experiments are also referred to as fractional factorial experiments or Taguchi L-arrays. Performing a screening experiment involves running the process at different settings for the inputs, called trials, and measuring the resulting outputs.
From this, it can be determined which inputs affect the outputs. Screening experiments typically require twice as many trials as input variables. For example, 8 variables can be studied in 16 trials. This makes it possible to study a large number of inputs in a reasonable amount of time. Starting with a larger number of variables reduces the chances of missing an important variable. Frequently a response surface study is performed following a screening experiment to gain further understanding of the effects of the key input variables on the outputs.
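To illustrate, the sketch below builds a small two-level screening design (seven factors in eight trials, generated from a full factorial in three base factors) and estimates main effects; the response values are made up, and the construction shown is just one common way to build such a design.

    # Sketch of a two-level screening design: 7 factors in 8 trials, built from a
    # full factorial in A, B, C with D-G generated from interaction columns.
    # The measured outputs are made-up numbers for illustration.
    from itertools import product

    runs = [{"A": a, "B": b, "C": c, "D": a * b, "E": a * c, "F": b * c, "G": a * b * c}
            for a, b, c in product((-1, 1), repeat=3)]

    y = [41.0, 44.5, 39.8, 43.9, 55.2, 58.0, 54.1, 57.6]  # one output per trial, in order

    # Main effect of each factor: mean response at +1 minus mean response at -1.
    for factor in "ABCDEFG":
        hi = [yi for run, yi in zip(runs, y) if run[factor] == 1]
        lo = [yi for run, yi in zip(runs, y) if run[factor] == -1]
        print(f"factor {factor}: effect {sum(hi)/len(hi) - sum(lo)/len(lo):+.2f}")
    # Factors with large effects are carried into a follow-up response surface study.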
Taguchi Methods — One of three approaches to robust design. Involves running a designed experiment to get a rough understanding of the effects of the input targets on the average and variation. Similar to the dual response approach except that while the study is being performed, the inputs are purposely adjusted by small amounts to mimic long-term manufacturing variation.
Alternatives are the dual response approach and robust tolerance analysis.
Tolerance Analysis — Using tolerance analysis, operating windows can be set for the inputs that ensure the outputs will conform to requirements.
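One way to carry out such an analysis is by Monte Carlo simulation: draw the inputs from their expected long-term distributions, push them through the model, and check how often the output falls within its specification window. The stack-up model, distributions, and limits in the sketch below are illustrative assumptions.

    # Sketch of a Monte Carlo tolerance analysis: propagate assumed input variation
    # through a made-up stack-up model and check conformance of the output against
    # an assumed specification window.
    import random, statistics

    random.seed(1)

    LSL, USL = 0.5, 1.5  # assumed specification limits for the assembly gap (mm)

    gaps = []
    for _ in range(50000):
        a = random.gauss(25.0, 0.10)  # housing length: assumed mean and sd (mm)
        b = random.gauss(12.0, 0.05)  # part 1 length
        c = random.gauss(12.0, 0.05)  # part 2 length
        gaps.append(a - b - c)        # gap = housing minus the two stacked parts

    in_spec = sum(LSL <= g <= USL for g in gaps) / len(gaps)
    print(f"mean gap {statistics.mean(gaps):.3f} mm, sd {statistics.stdev(gaps):.3f} mm, "
          f"{in_spec:.1%} of simulated assemblies within the assumed limits")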