
Development & Validation of the TILT Survey

In an earlier post, I shared some of the thinking behind the development and validation of the Technology, Instruction, Learning in Teaching (TILT) survey. In this post, I’ll share some of the steps taken in the content validation process.

Development of the survey

When you develop a survey, you need a process for creating the items and for making sure you're measuring what you think you're measuring. A survey is a method of gathering information from a sample of people; you may also see related terms like instrument, quiz, and test.

In the case of the TILT, this meant that we went through the following stages:

Domain Identification

We began by identifying what we wanted to measure in our questions. What is in, and out of, the scope of our survey?

In a survey, you're measuring constructs: the abstract ideas, underlying themes, or subject matter that you wish to measure using survey questions. So we needed to talk as a group to agree on what, how, and why we wanted to measure our identified domain.

The questions we had about the TILT were broad in scope. Did we want to measure faculty technology usage? Student technology usage? Staff technology usage? Were we interested in digital literacy, technology usage, instructional technology usage, or educational technology usage? How would we administer the survey (get the survey to people and have them take it)? What types of items would we include (Likert-style scale, open response)?

For our purposes, we wanted to measure what instructional technologies faculty were using in their classes, and what instructional technologies students were seeing and using. We were interested in staff usage as well, but realized that would mean developing three surveys instead of two.

After a content analysis, a review of the research, direct observations, expert judgment, and consideration of our own instruction, we decided to focus on instructional technologies as opposed to digital literacy or broader educational technology. It should be noted that the name of our research group and the focus of the institution were all about digital literacy, but we felt that faculty and staff were nowhere near literacy practices in digital spaces; we were still building capacity to use technology in instruction.

Lastly, we decided that we would develop two surveys and administer them using online testing software. We would primarily use Likert-style questions to make it easier for participants to quickly respond. We would also include several open response items to obtain deeper, richer data.
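To make the Likert-style responses analyzable once they come back from the online tool, each scale label needs a consistent numeric code. Here's a minimal sketch of that coding step; the scale labels and item names are hypothetical illustrations, not the actual TILT items.

```python
# Map Likert labels to numeric scores (hypothetical 5-point scale).
LIKERT_SCALE = {
    "Never": 1,
    "Rarely": 2,
    "Sometimes": 3,
    "Often": 4,
    "Always": 5,
}

def code_response(label: str) -> int:
    """Convert a Likert label to its numeric score."""
    return LIKERT_SCALE[label]

# One respondent's answers to two hypothetical items.
responses = {
    "uses_lms_weekly": "Often",
    "assigns_video_tools": "Rarely",
}

coded = {item: code_response(label) for item, label in responses.items()}
print(coded)  # {'uses_lms_weekly': 4, 'assigns_video_tools': 2}
```

Open-response items, by contrast, stay as free text and are analyzed qualitatively rather than coded this way.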

Item Generation

After we identified and agreed upon the domain of focus for the surveys, we started generating items, or questions. Thankfully, I had an instrument from my previous research, and we used those items as a starting point.

We worked to ensure that the items were related to the focus of the research, but we decided to keep some content that was tangential to the core constructs. In other words, we did not hesitate to include a couple of items on the survey that did not perfectly fit the identified domains.

In item development, we focused on the wording of the items. This means we used language that was simple and unambiguous. Items should not be offensive or potentially biased in terms of social identity (e.g., gender, religion, ethnicity, race, economic status, or sexual orientation). Lastly, we wanted to make sure individual items did not take too long to read, and that the overall survey wasn't too long either.

We followed Fowler’s five essential characteristics of items required to ensure the quality of construct measurement:

  • the need for items to be consistently understood;
  • the need for items to be consistently administered or communicated to respondents;
  • the consistent communication of what constitutes an adequate answer;
  • the need for all respondents to have access to the information needed to answer the question accurately;
  • and the willingness of respondents to provide the correct answers required by the question at all times.

Content Validity

After we completed the development of the faculty and student surveys, we sent them out to experts for review. Basically, we needed others with expertise in the area to make sure we were measuring what we wanted to measure.

Experts are individuals who are highly knowledgeable about the domain of interest and/or scale development. Reviewers should also include individuals from the target population, or the people who will have to take the survey. With the TILT, we had the surveys reviewed by a sample of faculty and students at the institution who were knowledgeable about instructional technologies. Faculty who were leaders in the online programs at the institution reviewed the faculty survey. Students in my edtech course reviewed the student survey.

To make it easier to review the survey, we used a Google Doc to share items and collect feedback.

We used the Google Doc to assess content validity through the Delphi method, coming to a consensus on which questions were a reflection of the constructs we wanted to measure. The Delphi method is a technique “for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem.”
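A Delphi round like this is often summarized quantitatively: for each item, count the share of experts who rated it relevant, then flag low-agreement items for revision in the next round. The following sketch assumes a 4-point relevance scale and an 80% agreement threshold; the ratings, item names, and cutoff are illustrative assumptions, not the actual TILT figures.

```python
def percent_agreement(ratings, keep_if=(3, 4)):
    """Fraction of experts who rated the item relevant
    (3 or 4 on a hypothetical 4-point relevance scale)."""
    agree = sum(1 for r in ratings if r in keep_if)
    return agree / len(ratings)

# Hypothetical expert ratings for three draft items
# (1 = not relevant, 4 = highly relevant), one list per item.
round_one = {
    "item_01": [4, 4, 3, 4, 3],
    "item_02": [2, 3, 1, 2, 3],
    "item_03": [4, 3, 4, 4, 4],
}

# Items below the threshold go back to the panel for revision.
THRESHOLD = 0.80
retained = [i for i, r in round_one.items() if percent_agreement(r) >= THRESHOLD]
revisit = [i for i in round_one if i not in retained]

print(retained)  # ['item_01', 'item_03']
print(revisit)   # ['item_02']
```

In a full Delphi process, the revised items (here, `item_02`) would be re-rated in a subsequent round until the panel reaches consensus.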

Content experts were also asked to make judgements about the face validity of the survey. Face validity is the “degree that respondents or end users [or lay persons] judge that the items of an assessment instrument are appropriate to the targeted construct and assessment objectives.”

Next Steps

After these steps were completed, the research team spent several meetings going through the feedback to revise the instruments. We modified items, word choice, and the organization of the survey.

There were many, many heated discussions about almost all aspects of the surveys. At the end of the day, we all realized that at some point we needed to develop a final version that could be administered to faculty and students.

Through all of this, there was never a point at which the surveys would be perfect. There would always be items that could be worded or scored better. The research team needed to agree on all changes before the final versions were approved and built into the online survey tool.

