TCA Lectures: What do you mean by a research tool? Discuss different research tools. What is meant by validity and reliability of research tools? (BEd)

Answer:

Questionnaires and Surveys

Surveys and questionnaires provide social researchers with quantifiable information. Information is obtained through face-to-face interviews, anonymous response surveys or questionnaires, or via telephone, computerized kiosks or email. Researchers administer surveys and questionnaires to target groups to get specific information within a specified period of time. This type of research tool is effective in finding out cultural attitudes and opinions in particular regions or towns, where research journals may not have specific and accurate information.

Statistics and Data Analysis

Statistics and data analysis is an essential research tool. Information obtained from questionnaires and surveys can be compiled into statistics. Once compiled into statistical data, researchers can devise theories and look for patterns that may explain or confirm their research.
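
As a minimal illustration of this step (the responses, field names and values below are hypothetical, not taken from the source), a small batch of survey answers can be compiled into simple counts, percentages and averages before looking for patterns:

```python
# Hypothetical survey responses: each dict is one respondent.
# Compiling them into counts, percentages and averages is the first
# step of the statistical analysis described above.
from collections import Counter
from statistics import mean

responses = [
    {"town": "Town A", "age": 24, "agrees": "yes"},
    {"town": "Town A", "age": 31, "agrees": "no"},
    {"town": "Town B", "age": 27, "agrees": "yes"},
    {"town": "Town B", "age": 45, "agrees": "yes"},
    {"town": "Town A", "age": 38, "agrees": "no"},
]

opinion_counts = Counter(r["agrees"] for r in responses)
percent_yes = 100 * opinion_counts["yes"] / len(responses)
average_age = mean(r["age"] for r in responses)

print("Opinion counts:", dict(opinion_counts))
print(f"Percent agreeing: {percent_yes:.0f}%")
print(f"Average respondent age: {average_age:.1f}")
```

In practice the same tallying is usually done with a spreadsheet or a statistics package; the point is simply that raw responses become numbers that can be compared across groups and regions.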

What is Reliability?

Reliability is a measure of the stability or consistency of test scores. You can also think of it as the extent to which a test or research finding is repeatable. For example, a medical thermometer is a reliable tool that would measure the correct temperature each time it is used. In the same way, a reliable math test will accurately measure mathematical knowledge for every student who takes it, and reliable research findings can be replicated over and over.

Overview of Reliability and Validity

Outside of statistical research, reliability and validity are used interchangeably. For research and testing, there are subtle differences. Reliability implies consistency: if you take the ACT five times, you should get roughly the same results every time. A test is valid if it measures what it’s supposed to. Tests that are valid are also reliable: the ACT is valid (and reliable) because it measures what a student learned in high school. However, tests that are reliable aren’t always valid. For example, if your thermometer was a degree off, it would be reliable (giving you the same results each time) but not valid (because it wasn’t recording the correct temperature).
Of course, it’s not quite as simple as saying you think a test is reliable. There are many statistical tools you can use to measure reliability. For example (a short computational sketch follows this list):
  • Kuder-Richardson 20: a measure of internal reliability for a binary test (i.e. one with right or wrong answers).
  • Cronbach’s alpha: measures internal reliability for tests with multiple possible answers.
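
A minimal sketch of these two coefficients (the formulas are the standard ones; the item scores below are made up purely for illustration, and KR-20 is obtained as the binary special case of the same calculation):

```python
# A minimal sketch (not from the source) showing how the two coefficients
# in the list above can be computed from a person-by-item score matrix.
# The response data below are made-up illustrative scores.

def variance(values):
    """Population variance of a sequence of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(scores):
    """scores: one row per respondent; each row is a list of item scores."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose to item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Kuder-Richardson 20 is the special case of alpha for right/wrong (0/1) items,
# so the same function applies when every score is 0 or 1.

likert = [  # five respondents x four Likert-type items (hypothetical)
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
binary = [  # five respondents x four right/wrong items (hypothetical)
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

print("Cronbach's alpha:", round(cronbach_alpha(likert), 3))
print("KR-20:", round(cronbach_alpha(binary), 3))
```

Values close to 1 suggest the items hang together and measure one consistent construct; values near 0 suggest they do not.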

Internal vs. External Reliability

Internal reliability, or internal consistency, is a measure of how well your test is actually measuring what you want it to measure. External reliability means that your test or measure can be generalized beyond what you’re using it for. For example, a claim that individual tutoring improves test scores should apply to more than one subject (e.g. to English as well as math). A test for depression should be able to detect depression in different age groups, in people of different socio-economic statuses, and in different personality types (e.g. introverts).
One specific type is parallel forms reliability, where two equivalent forms of a test are given to students a short time apart. If the forms are truly parallel, the two forms should produce very similar observed scores.
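
As a small illustration (the scores below are hypothetical), parallel forms reliability is often estimated simply as the correlation between scores on the two forms:

```python
# Hypothetical scores of the same five students on two equivalent test forms.
# A high correlation between the forms suggests parallel forms reliability.
from statistics import correlation  # Python 3.10+

form_a = [78, 85, 62, 90, 70]
form_b = [80, 83, 65, 88, 72]

print("Parallel forms reliability (Pearson r):", round(correlation(form_a, form_b), 3))
```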

The Reliability Coefficient

A reliability coefficient is a measure of how well a test measures achievement. It is the proportion of variance in observed scores (i.e. scores on the test) attributable to true scores (the theoretical “real” score that a person would get if a perfect test existed).
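
In the standard notation of classical test theory (the symbols here are conventional, not taken from the source), this definition can be written as:

```latex
% Reliability coefficient: the share of observed-score variance
% that is true-score variance.
\[
  \rho_{XX'} \;=\; \frac{\sigma^2_T}{\sigma^2_X}
             \;=\; \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
\]
```

where σ²_T is the true-score variance, σ²_E the error variance, and σ²_X the observed-score variance; a perfectly reliable test would have no error variance and a coefficient of 1.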
The term “reliability coefficient” actually refers to several different coefficients, and several methods exist for calculating one, including test-retest, parallel forms and alternate forms (a short sketch of two of these follows the list):
  • Cronbach’s alpha: the most widely used internal-consistency coefficient.
  • A simple correlation between two scores from the same person is one of the simplest ways to estimate a reliability coefficient. If the scores are taken at different times, this estimates test-retest reliability; different forms of the test given on the same day estimate parallel forms reliability.
  • Pearson’s correlation can be used to estimate the theoretical reliability coefficient between parallel tests.
  • The Spearman-Brown formula is a measure of reliability for split-half tests.
  • Cohen’s Kappa measures interrater reliability.
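
As a minimal sketch (the scores and ratings below are invented for illustration), two of the estimates above can be computed directly:

```python
# Hypothetical data illustrating two coefficients from the list above.
from statistics import correlation  # Python 3.10+

# --- Spearman-Brown split-half reliability ---
# Correlate the two halves of a test, then step the correlation up
# to estimate the reliability of the full-length test.
odd_half_scores = [10, 14, 9, 16, 12, 11]
even_half_scores = [11, 15, 8, 17, 11, 12]
r_half = correlation(odd_half_scores, even_half_scores)
spearman_brown = (2 * r_half) / (1 + r_half)

# --- Cohen's kappa for interrater reliability ---
# Agreement between two raters, corrected for chance agreement.
rater_1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
n = len(rater_1)
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n
categories = set(rater_1) | set(rater_2)
expected = sum(
    (rater_1.count(c) / n) * (rater_2.count(c) / n) for c in categories
)
kappa = (observed - expected) / (1 - expected)

print("Split-half r:", round(r_half, 3))
print("Spearman-Brown full-test reliability:", round(spearman_brown, 3))
print("Cohen's kappa:", round(kappa, 3))
```

The split-half correlation is stepped up with the Spearman-Brown formula because a half-length test is less reliable than the full-length test; kappa corrects raw rater agreement for the agreement expected by chance alone.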
The range of the reliability coefficient is from 0 to 1. Rules of thumb for preferred levels of the coefficient:
  • For high stakes tests (e.g. college admissions), > 0.85. Some authors suggest this figure should be above 0.90.
  • For low stakes tests (e.g. classroom assessment), > 0.70. Some authors suggest this figure should be above 0.80.

What is Validity?

Validity simply means that a test or instrument is accurately measuring what it’s supposed to. The main types of validity include:
• Concurrent Validity.
• Content Validity.
• Convergent Validity.
• Consequential Validity.
• Criterion Validity.
• Curricular Validity and Instructional Validity.
• Ecological Validity.
• External Validity.
• Face Validity.
• Formative validity & Summative Validity.
• Incremental Validity.
• Internal Validity.
• Predictive Validity.
• Sampling Validity.
• Statistical Conclusion Validity.