Questionnaire – Definition, Types, and Examples

Questionnaire

Definition:

A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.

It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.

History of Questionnaire

The history of questionnaires is sometimes traced back to the ancient Greeks, who are said to have used structured questioning to assess public opinion. The modern history of questionnaires, however, began in the late 19th century with the rise of social surveys.

The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.

One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.

In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.

Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.

Types of Questionnaire

Types of Questionnaires are as follows:

Structured Questionnaire

This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.

Unstructured Questionnaire

An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.

Open-ended Questionnaire

An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.

Closed-ended Questionnaire

In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.

Mixed Questionnaire

A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.

Pictorial Questionnaire

In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.

Types of Questions in Questionnaire

The types of questions in a questionnaire are as follows:

Multiple Choice Questions

These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.

  • What is your favorite color? a. Red  b. Blue  c. Green  d. Yellow

Rating Scale Questions

These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.

  • On a scale of 1 to 10, how likely are you to recommend this product to a friend?

Open-Ended Questions

These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.

  • What do you think are the biggest challenges facing your community?

Likert Scale Questions

These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.

How strongly do you agree or disagree with the following statement:

“I enjoy exercising regularly.”

  • a. Strongly Agree
  • b. Agree
  • c. Neither Agree nor Disagree
  • d. Disagree
  • e. Strongly Disagree

Demographic Questions

These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.

  • What is your age?

Yes/No Questions

These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.

Have you ever traveled outside of your home country?

Ranking Questions

These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.

Please rank the following factors in order of importance when choosing a restaurant:

  • a. Quality of Food
  • b. Ambiance
  • c. Location

Matrix Questions

These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.

Dichotomous Questions

These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.

Do you support the death penalty?
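The question types above map naturally onto simple data structures. A minimal Python sketch follows; the class and field names are invented for illustration and are not from any survey library:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str  # the wording shown to the respondent

@dataclass
class OpenEnded(Question):
    pass  # answered in the respondent's own words

@dataclass
class MultipleChoice(Question):
    options: list[str] = field(default_factory=list)

@dataclass
class RatingScale(Question):
    low: int = 1
    high: int = 10

@dataclass
class LikertItem(MultipleChoice):
    def __post_init__(self):
        # Default to the standard five-point agreement scale
        # unless explicit options are supplied.
        if not self.options:
            self.options = ["Strongly Agree", "Agree",
                            "Neither Agree nor Disagree",
                            "Disagree", "Strongly Disagree"]

q = LikertItem("I enjoy exercising regularly.")
print(len(q.options))  # 5
```

Representing each type explicitly like this makes it easy to validate responses later: a `MultipleChoice` answer must be one of `options`, a `RatingScale` answer must fall between `low` and `high`, and an `OpenEnded` answer is free text.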

How to Make a Questionnaire

Step-by-Step Guide for Making a Questionnaire:

  • Define your research objectives: Before you start creating questions, you need to define the purpose of your questionnaire and what you hope to achieve from the data you collect.
  • Choose the appropriate question types: Based on your research objectives, choose the appropriate question types to collect the data you need. Refer to the types of questions mentioned earlier for guidance.
  • Develop questions: Develop clear and concise questions that are easy for participants to understand. Avoid leading or biased questions that might influence the responses.
  • Organize questions: Organize questions in a logical and coherent order, starting with demographic questions followed by general questions, and ending with specific or sensitive questions.
  • Pilot the questionnaire: Test your questionnaire on a small group of participants to identify any flaws or issues with the questions or the format.
  • Refine the questionnaire: Based on feedback from the pilot, refine and revise the questionnaire as necessary to ensure that it is valid and reliable.
  • Distribute the questionnaire: Distribute the questionnaire to your target audience using a method that is appropriate for your research objectives, such as online surveys, email, or paper surveys.
  • Collect and analyze data: Collect the completed questionnaires and analyze the data using appropriate statistical methods. Draw conclusions from the data and use them to inform decision-making or further research.
  • Report findings: Present your findings in a clear and concise report, including a summary of the research objectives, methodology, key findings, and recommendations.
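For closed-ended questions, the "collect and analyze" step above often reduces to tallying each response option. A minimal sketch, using invented example data:

```python
from collections import Counter

# Invented responses to a closed-ended satisfaction question.
responses = ["Very satisfied", "Somewhat satisfied", "Very satisfied",
             "Somewhat dissatisfied", "Very satisfied", "Somewhat satisfied"]

counts = Counter(responses)
total = len(responses)
for option, n in counts.most_common():
    print(f"{option}: {n}/{total} ({100 * n / total:.0f}%)")
```

Real analyses add weighting and significance testing on top, but a frequency table like this is the usual starting point for reporting findings.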

Questionnaire Administration Modes

There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:

  • Self-administered paper questionnaires: Participants complete the questionnaire on paper, either in person or by mail. This mode is relatively low cost and easy to administer, but it may result in lower response rates and greater potential for errors in data entry.
  • Online questionnaires: Participants complete the questionnaire on a website or through email. This mode is convenient for both researchers and participants, as it allows for fast and easy data collection. However, it may be subject to issues such as low response rates, lack of internet access, and potential for fraudulent responses.
  • Telephone surveys: Trained interviewers administer the questionnaire over the phone. This mode allows for a large sample size and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Face-to-face interviews: Trained interviewers administer the questionnaire in person. This mode allows for a high degree of control over the survey environment and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Mixed-mode surveys: Researchers use a combination of two or more modes to administer the questionnaire, such as using online questionnaires for initial screening and following up with telephone interviews for more detailed information. This mode can help overcome some of the limitations of individual modes, but it requires careful planning and coordination.

Example of Questionnaire

Title of the Survey: Customer Satisfaction Survey

Introduction:

We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.

Instructions:

Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.

1. How satisfied are you with our product quality?

  • Very satisfied
  • Somewhat satisfied
  • Somewhat dissatisfied
  • Very dissatisfied

2. How satisfied are you with our customer service? (Response options as in Question 1.)

3. How satisfied are you with the price of our products? (Response options as in Question 1.)

4. How likely are you to recommend our products to others?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

5. How easy was it to find the information you were looking for on our website?

  • Very easy
  • Somewhat easy
  • Somewhat difficult
  • Very difficult

6. How satisfied are you with the overall experience of using our products and services? (Response options as in Question 1.)

7. Is there anything that you would like to see us improve upon or change in the future?

…………………………………………………………………………………………………………………………..

Conclusion:

Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.

Applications of Questionnaire

Some common applications of questionnaires include:

  • Research: Questionnaires are commonly used in research to gather information from participants about their attitudes, opinions, behaviors, and experiences. This information can then be analyzed and used to draw conclusions and make inferences.
  • Healthcare: In healthcare, questionnaires can be used to gather information about patients’ medical history, symptoms, and lifestyle habits. This information can help healthcare professionals diagnose and treat medical conditions more effectively.
  • Marketing: Questionnaires are commonly used in marketing to gather information about consumers’ preferences, buying habits, and opinions on products and services. This information can help businesses develop and market products more effectively.
  • Human Resources: Questionnaires are used in human resources to gather information from job applicants, employees, and managers about job satisfaction, performance, and workplace culture. This information can help organizations improve their hiring practices, employee retention, and organizational culture.
  • Education: Questionnaires are used in education to gather information from students, teachers, and parents about their perceptions of the educational experience. This information can help educators identify areas for improvement and develop more effective teaching strategies.

Purpose of Questionnaire

Some common purposes of questionnaires include:

  • To collect information on attitudes, opinions, and beliefs: Questionnaires can be used to gather information on people’s attitudes, opinions, and beliefs on a particular topic. For example, a questionnaire can be used to gather information on people’s opinions about a particular political issue.
  • To collect demographic information: Questionnaires can be used to collect demographic information such as age, gender, income, education level, and occupation. This information can be used to analyze trends and patterns in the data.
  • To measure behaviors or experiences: Questionnaires can be used to gather information on behaviors or experiences such as health-related behaviors or experiences, job satisfaction, or customer satisfaction.
  • To evaluate programs or interventions: Questionnaires can be used to evaluate the effectiveness of programs or interventions by gathering information on participants’ experiences, opinions, and behaviors.
  • To gather information for research: Questionnaires can be used to gather data for research purposes on a variety of topics.

When to use Questionnaire

Here are some situations when questionnaires might be used:

  • When you want to collect data from a large number of people: Questionnaires are useful when you want to collect data from a large number of people. They can be distributed to a wide audience and can be completed at the respondent’s convenience.
  • When you want to collect data on specific topics: Questionnaires are useful when you want to collect data on specific topics or research questions. They can be designed to ask specific questions and can be used to gather quantitative data that can be analyzed statistically.
  • When you want to compare responses across groups: Questionnaires are useful when you want to compare responses across different groups of people. For example, you might want to compare responses from men and women, or from people of different ages or educational backgrounds.
  • When you want to collect data anonymously: Questionnaires can be useful when you want to collect data anonymously. Respondents can complete the questionnaire without fear of judgment or repercussions, which can lead to more honest and accurate responses.
  • When you want to save time and resources: Questionnaires can be more efficient and cost-effective than other methods of data collection such as interviews or focus groups. They can be completed quickly and easily, and can be analyzed using software to save time and resources.

Characteristics of Questionnaire

Here are some of the characteristics of questionnaires:

  • Standardization: Questionnaires are standardized tools that ask the same questions in the same order to all respondents. This ensures that all respondents are answering the same questions and that the responses can be compared and analyzed.
  • Objectivity: Questionnaires are designed to be objective, meaning that they do not contain leading questions or bias that could influence the respondent’s answers.
  • Predefined responses: Questionnaires typically provide predefined response options for the respondents to choose from, which helps to standardize the responses and make them easier to analyze.
  • Quantitative data: Questionnaires are designed to collect quantitative data, meaning that they provide numerical or categorical data that can be analyzed using statistical methods.
  • Convenience: Questionnaires are convenient for both the researcher and the respondents. They can be distributed and completed at the respondent’s convenience and can be easily administered to a large number of people.
  • Anonymity: Questionnaires can be anonymous, which can encourage respondents to answer more honestly and provide more accurate data.
  • Reliability: Questionnaires are designed to be reliable, meaning that they produce consistent results when administered multiple times to the same group of people.
  • Validity: Questionnaires are designed to be valid, meaning that they measure what they are intended to measure and are not influenced by other factors.

Advantages of Questionnaire

Some advantages of questionnaires are as follows:

  • Standardization: Questionnaires allow researchers to ask the same questions to all participants in a standardized manner. This helps ensure consistency in the data collected and eliminates potential bias that might arise if questions were asked differently to different participants.
  • Efficiency: Questionnaires can be administered to a large number of people at once, making them an efficient way to collect data from a large sample.
  • Anonymity: Participants can remain anonymous when completing a questionnaire, which may make them more likely to answer honestly and openly.
  • Cost-effective: Questionnaires can be relatively inexpensive to administer compared to other research methods, such as interviews or focus groups.
  • Objectivity: Because questionnaires are typically designed to collect quantitative data, they can be analyzed objectively without the influence of the researcher’s subjective interpretation.
  • Flexibility: Questionnaires can be adapted to a wide range of research questions and can be used in various settings, including online surveys, mail surveys, or in-person interviews.

Limitations of Questionnaire

Limitations of Questionnaire are as follows:

  • Limited depth: Questionnaires are typically designed to collect quantitative data, which may not provide a complete understanding of the topic being studied. Questionnaires may miss important details and nuances that could be captured through other research methods, such as interviews or observations.
  • Response bias: Participants may not always answer questions truthfully or accurately, either because they do not remember or because they want to present themselves in a particular way. This can lead to response bias, which can affect the validity and reliability of the data collected.
  • Limited flexibility: While questionnaires can be adapted to a wide range of research questions, they may not be suitable for all types of research. For example, they may not be appropriate for studying complex phenomena or for exploring participants’ experiences and perceptions in-depth.
  • Limited context: Questionnaires typically do not provide a rich contextual understanding of the topic being studied. They may not capture the broader social, cultural, or historical factors that may influence participants’ responses.
  • Limited control: Researchers may not have control over how participants complete the questionnaire, which can lead to variations in response quality or consistency.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.
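That workflow — tally the open-ended pilot answers and keep the most common as closed-ended options — can be sketched mechanically. The pilot answers below are invented for illustration:

```python
from collections import Counter

# Invented open-ended pilot answers to "What issue mattered most to you?"
pilot_answers = ["the economy", "health care", "the economy", "education",
                 "the economy", "terrorism", "health care", "the economy"]

# Keep the three most common answers as response options, plus an
# "Other" category so respondents are never forced into a bad fit.
top = [answer for answer, _ in Counter(pilot_answers).most_common(3)]
options = top + ["Other (please specify)"]
print(options)
```

In practice the tallying involves human coding of near-duplicate phrasings ("jobs" vs. "the economy") before counting, but the principle is the same.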

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized, so that the options are not presented in the same order to each respondent. Answers to questions can also be affected by the questions that precede them; presenting questions in a different order to each respondent ensures that each question appears in each position (first, last, or anywhere in between) the same number of times across the sample. Randomization does not eliminate order effects, but it does spread this bias randomly across all of the questions or items in a list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
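Per-respondent randomization of a response list can be sketched with the standard library. The option labels below are invented, not the actual 2008 question wording:

```python
import random

# Invented response options for a closed-ended "most important issue" item.
options = ["The economy", "Health care", "Energy", "Terrorism", "Immigration"]

def options_for_respondent(seed):
    # Shuffle a copy so each respondent sees an independent random order;
    # the underlying set of options never changes.
    rng = random.Random(seed)  # seed per respondent, e.g. a respondent ID
    shuffled = options.copy()
    rng.shuffle(shuffled)
    return shuffled
```

Seeding the generator per respondent keeps each respondent's order reproducible, which is useful when analysts later check for residual order effects.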

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
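Reversing an ordinal scale for a random half of the sample — rather than shuffling it — can be sketched the same way. This is a hypothetical sketch of the idea, not Pew's production code:

```python
import random

scale = ["Legal in all cases", "Legal in most cases",
         "Illegal in most cases", "Illegal in all cases"]

def scale_for_respondent(rng):
    # Preserve the ordinal order; only the direction varies, so roughly
    # half the sample reads the scale starting from the opposite end.
    return list(scale) if rng.random() < 0.5 else list(reversed(scale))
```

Because only the direction flips, respondents can still place themselves along the continuum, while any recency effect is split evenly between the two orderings.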

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
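The split-form design described above can be sketched as follows; the form wordings, sample size, and seed here are illustrative assumptions, not an actual survey instrument:

```python
import random

# Two wordings of the same question; each respondent sees exactly one.
FORM_A = "Do you favor or oppose the proposal?"
FORM_B = "Do you favor or oppose the proposal, even if it raises costs?"

def assign_forms(respondent_ids, seed: int = 7) -> dict[str, list[int]]:
    """Randomly split the sample into two essentially identical groups,
    so answer differences between forms can be attributed to wording."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"form_a": ids[:half], "form_b": ids[half:]}

groups = assign_forms(range(1_000))
# Comparing the share answering "favor" in each group estimates the
# effect of the wording change.
```

Because assignment is random, any systematic difference between the groups' answers can be attributed to the wording rather than to who received which form.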


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects ( where the order results in greater differences in responses), and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”; 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.
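The ordering advice above can be sketched as a simple routing rule. The question names and the eligibility condition here are hypothetical, purely to illustrate the principle:

```python
# Engaging items lead; demographics trail, unless one is needed up front
# to screen respondents for eligibility.
ENGAGING = ["interest_in_topic", "views_on_leaders", "overall_satisfaction"]
DEMOGRAPHICS = ["age", "education", "income"]

def build_question_flow(age_screens_eligibility: bool) -> list[str]:
    """Order the questionnaire per the guidance above: open with engaging
    items and hold demographic questions to the end, unless a demographic
    item determines eligibility for the survey."""
    if age_screens_eligibility:
        remaining = [q for q in DEMOGRAPHICS if q != "age"]
        return ["age"] + ENGAGING + remaining
    return ENGAGING + DEMOGRAPHICS
```

A real instrument would also handle section routing and skip logic, but the same principle applies: demographics move forward only when the routing requires them.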


A strong analytical question

  • speaks to a genuine dilemma presented by your sources. In other words, the question focuses on a real confusion, problem, ambiguity, or gray area, about which readers will conceivably have different reactions, opinions, or ideas.
  • yields an answer that is not obvious. If you ask, “What did this author say about this topic?” there’s nothing to explore because any reader of that text would answer that question in the same way. But if you ask, “How can we reconcile point A and point B in this text?” readers will want to see how you solve that inconsistency in your essay.
  • suggests an answer complex enough to require a whole essay's worth of discussion. If the question is too vague, it won't suggest a line of argument. The question should elicit reflection and argument rather than summary or description.
  • can be explored using the sources you have available for the assignment, rather than by generalizations or by research beyond the scope of your assignment.

How to come up with an analytical question  

One useful starting point when you’re trying to identify an analytical question is to look for points of tension in your sources, either within one source or among sources. It can be helpful to think of those points of tension as the moments where you need to stop and think before you can move forward. Here are some examples of where you may find points of tension:

  • You may read a published view that doesn’t seem convincing to you, and you may want to ask a question about what’s missing or about how the evidence might be reconsidered.  
  • You may notice an inconsistency, gap, or ambiguity in the evidence, and you may want to explore how that changes your understanding of something.  
  • You may identify an unexpected wrinkle that you think deserves more attention, and you may want to ask a question about it.  
  • You may notice an unexpected conclusion that you think doesn’t quite add up, and you may want to ask how the authors of a source reached that conclusion.  
  • You may identify a controversy that you think needs to be addressed, and you may want to ask a question about how it might be resolved.  
  • You may notice a problem that you think has been ignored, and you may want to try to solve it or consider why it has been ignored.  
  • You may encounter a piece of evidence that you think warrants a closer look, and you may raise questions about it.  

Once you’ve identified a point of tension and raised a question about it, you will try to answer that question in your essay. Your main idea or claim in answer to that question will be your thesis.

point of tension → analytical question → thesis

  • "How" and "why" questions generally require more analysis than "who/ what/when/where” questions.  
  • Good analytical questions can highlight patterns/connections, or contradictions/dilemmas/problems.  
  • Good analytical questions establish the scope of an argument, allowing you to focus on a manageable part of a broad topic or a collection of sources.  
  • Good analytical questions can also address implications or consequences of your analysis.

Essay writing: Analysing questions


“It is well worth the time to break down the question into its different elements.” Kathleen McMillan & Jonathan Weyers,  How to Write Essays & Assignments

When you get an essay question, how do you make sure you are answering it how your tutor wants? There is a hidden code in most questions that gives you a clue about the approach you should be taking...

Decoding the question

Here is a typical essay question:

Analyse the impact of the employability agenda on the undergraduate student experience.

Let's decode it...

Analyse = instruction word; the employability agenda = key issue/subject; the undergraduate student experience = focus/constraint.

Understanding the instruction words

Did you know that analyse  means something different from discuss  or evaluate ? In academic writing these words have very specific and unique meanings, which you need to be aware of before you start your essay planning. For example, analyse  means:

Examine critically so as to bring out the essential elements; describe in detail; describe the various parts of something and explain how they work together, or whether they work together.

It is almost impossible to remember the different meanings, so download our Glossary of Instruction Words for Essay Questions to keep your own reminder of the most common ones.

Redundant phrases

Don't get thrown by other regularly used phrases such as "with reference to relevant literature" or "critically evaluate" and "critically analyse" (rather than simply "evaluate" or "analyse"). All your writing should refer to relevant literature, and all writing should have an element of criticality at university level. These are just redundant phrases/words and only there as a gentle reminder.

Recognise the subject of the question

Many students think this is the easy bit - but you can easily mistake the focus for the subject and vice versa. The subject is the general topic of the essay and the instruction word is usually referring to something you must do to that topic.


Usually, the subject is something you have had a lecture about or there are chapters about in your key texts.

There will be many aspects of the subject/topic that you will not need to include in your essay, which is why it is important to recognise and stick to the focus as shown in the next box.

Identify the focus/constraint

Every essay has and needs a focus. If you were to write everything about a topic, even about a particular aspect of a topic, you could write a book and not an essay! The focus gives you direction about the scope of the essay. For example, it often:


  • Gives context (focus on the topic within a particular situation, time frame etc).

This could be something there were a few slides about in your lecture or a subheading in your key text.

I don't have an essay question - what do I do?

I have to make up my own title.

If you have been asked to come up with your own title, write one like the ones described here. Include at least an instruction, a subject and a focus and it will make planning and writing the essay so much easier. The main difference would be that you write it as a description rather than a question i.e.:

An analysis of the impact of the employability agenda on the undergraduate student experience.

I have only been given assignment criteria

If you have been given assignment criteria, the question often still contains the information you need to break it down into the components on this page. For example, look at the criteria below. There are still instruction words, subjects and focus/constraints.

Aims of the assignment (3000 words):

An understanding of learning theories is important to being an effective teacher. In this assignment you will select two learning theories and explain why they would help you in your own teaching context. You will then reflect on an experience from your teaching practice when this was, or could have been, put into practice.

Assignment criteria

Select two learning theories and, referring to published literature, explain why they are relevant to your own teaching context.

Reflect on an experience from your teaching practice.

Explain why a knowledge of a learning theory was or would have been useful in the circumstances.

  • Instructions words = explain (twice); reflect on.
  • Subjects = two learning theories; an experience from your teaching practice; knowledge of a learning theory.
  • Focus/constraints = your own teaching context; in the circumstances

Think of each criterion therefore as a mini essay. 



Your ultimate guide to questionnaires and how to design a good one

The written questionnaire is the heart and soul of any survey research project. Whether you conduct your survey using an online questionnaire, in person, by email or over the phone, the way you design your questionnaire plays a critical role in shaping the quality of the data and insights that you’ll get from your target audience. Keep reading to get actionable tips.

What is a questionnaire?

A questionnaire is a research tool consisting of a set of questions or other ‘prompts’ to collect data from a set of respondents.

When used in most research, a questionnaire will consist of a number of types of questions (primarily open-ended and closed-ended) in order to gain both quantitative data that can be analyzed to draw conclusions, and qualitative data to provide longer, more specific explanations.

A research questionnaire is often mistaken for a survey, and many people use the terms questionnaire and survey interchangeably. But that’s incorrect, as we discuss next.


Survey vs. questionnaire – what’s the difference?

Before we go too much further, let’s consider the differences between surveys and questionnaires.

These two terms are often used interchangeably, but there is an important difference between them.

Survey definition

A survey is the process of collecting data from a set of respondents and using it to gather insights.

Survey research can be conducted using a questionnaire, but won’t always involve one.

Questionnaire definition

A questionnaire is the list of questions you circulate to your target audience.

In other words, the survey is the task you’re carrying out, and the questionnaire is the instrument you’re using to do it.

By itself, a questionnaire doesn’t achieve much.

It’s when you put it into action as part of a survey that you start to get results.

Advantages vs disadvantages of using a questionnaire

While a questionnaire is a popular method to gather data for market research or other studies, there are a few disadvantages to using this method (although there are plenty of advantages to using a questionnaire too).

Let’s have a look at some of the advantages and disadvantages of using a questionnaire for collecting data.

Advantages of using a questionnaire

1. Questionnaires are relatively cheap

Depending on the complexity of your study, using a questionnaire can be cost effective compared to other methods.

You simply need to write your survey questionnaire, and send it out and then process the responses.

You can set up an online questionnaire relatively easily, or simply carry out market research on the street if that’s the best method.

2. You can get and analyze results quickly

Again, depending on the size of your survey, you can get results back from a questionnaire quickly, often within 24 hours of putting the questionnaire live.

It also means you can start to analyze responses quickly too.

3. They’re easily scalable

You can easily send an online questionnaire to anyone in the world, and with the right software you can quickly identify your target audience and send your questionnaire to them.

4. Questionnaires are easy to analyze

If your questionnaire design has been done properly, it’s quick and easy to analyze results from questionnaires once responses start to come back.

This is particularly useful with large scale market research projects.

Because all respondents are answering the same questions, it’s simple to identify trends.

5. You can use the results to make accurate decisions

As a research instrument, a questionnaire is ideal for commercial research because the data you get back is from your target audience (or ideal customers) and the information you get back on their thoughts, preferences or behaviors allows you to make business decisions.

6. A questionnaire can cover any topic

One of the biggest advantages of using questionnaires when conducting research is (because you can adapt them using different types and styles of open ended questions and closed ended questions) they can be used to gather data on almost any topic.

There are many types of questionnaires you can design to gather both quantitative data and qualitative data - so they’re a useful tool for all kinds of data analysis.

Disadvantages of using a questionnaire

1. Respondents could lie

This is by far the biggest risk with a questionnaire, especially when dealing with sensitive topics.

Rather than give their actual opinion, a respondent might feel pressured to give the answer they deem more socially acceptable, which doesn’t give you accurate results.

2. Respondents might not answer every question

There are all kinds of reasons respondents might not answer every question: the questionnaire may be too long, they might not understand what’s being asked, or they simply might not want to answer.

If you get questionnaires back without complete responses it could negatively affect your research data and provide an inaccurate picture.

3. They might interpret what’s being asked incorrectly

This is a particular problem when running a survey across geographical boundaries and often comes down to the design of the survey questionnaire.

If your questions aren’t written in a very clear way, the respondent might misunderstand what’s being asked and provide an answer that doesn’t reflect what they actually think.

Again this can negatively affect your research data.

4. You could introduce bias

The whole point of producing a questionnaire is to gather accurate data from which decisions can be made or conclusions drawn.

But the data collected can be heavily impacted if the researchers accidentally introduce bias into the questions.

This can be easily done if the researcher is trying to prove a certain hypothesis with their questionnaire, and unwittingly write questions that push people towards giving a certain answer.

In these cases respondents’ answers won’t accurately reflect what is really happening, preventing you from gathering accurate data.

5. Respondents could get survey fatigue

One issue you can run into when sending out a questionnaire, particularly if you send them out regularly to the same survey sample, is that your respondents could start to suffer from survey fatigue.

In these circumstances, rather than thinking about the response options in the questionnaire and providing accurate answers, respondents could start to just tick boxes to get through the questionnaire quickly.

Again, this won’t give you an accurate data set.

Questionnaire design: How to do it

It’s essential to carefully craft a questionnaire to reduce survey error and optimize your data. The best way to think about the questionnaire is with the end result in mind.

How do you do that?

Start with questions, like:

  • What is my research purpose?
  • What data do I need?
  • How am I going to analyze that data?
  • What questions are needed to best suit these variables?

Once you have a clear idea of the purpose of your survey, you’ll be in a better position to create an effective questionnaire.

Here are a few steps to help you get into the right mindset.

1. Keep the respondent front and center

A survey is the process of collecting information from people, so it needs to be designed around human beings first and foremost.

In his post about survey design theory, David Vannette, PhD, from the Qualtrics Methodology Lab explains the correlation between the way a survey is designed and the quality of data that is extracted.

“To begin designing an effective survey, take a step back and try to understand what goes on in your respondents’ heads when they are taking your survey.

This step is critical to making sure that your questionnaire makes it as likely as possible that the response process follows that expected path.”

From writing the questions to designing the survey flow, the respondent’s point of view should always be front and center in your mind during a questionnaire design.

2. How to write survey questions

Your questionnaire should only be as long as it needs to be, and every question needs to deliver value.

That means your questions must each have an individual purpose and produce the best possible data for that purpose, all while supporting the overall goal of the survey.

A question must also be phrased in a way that is easy for all your respondents to understand, and does not produce false results.

To do this, remember the following principles:

Get into the respondent's head

The process for a respondent answering a survey question looks like this:

  • The respondent reads the question and determines what information they need to answer it.
  • They search their memory for that information.
  • They make judgments about that information.
  • They translate that judgment into one of the answer options you’ve provided. This is the process of taking the data they have and matching that information with the question that’s asked.

When wording questions, make sure the question means the same thing to all respondents. Words should have one meaning, few syllables, and the sentences should have few words.

Only use the words needed to ask your question and not a word more.

Note that it’s important that the respondent understands the intent behind your question.

If they don’t, they may answer a different question and the data can be skewed.

Some contextual help text, either in the introduction to the questionnaire or before the question itself, can help make sure the respondent understands your goals and the scope of your research.

Use mutually exclusive responses

Be sure to make your response categories mutually exclusive.

Consider the question:

What is your age?

  • 20–31
  • 31–40
  • 40–55
  • 55–70

Respondents who are 31 years old have two options, as do respondents who are 40 and 55. As a result, it is impossible to predict which category they will choose.

This can distort results and frustrate respondents. It can be easily avoided by making responses mutually exclusive.

The following question is much better:

What is your age?

  • 20–30
  • 31–40
  • 41–55
  • 56–70

This question is clear and will give us better results.
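When responses are collected or validated programmatically, the mutual-exclusivity rule is easy to enforce. Here is a minimal Python sketch using hypothetical integer age brackets (not taken from any particular survey):

```python
# Check that closed-ended numeric brackets are mutually exclusive and
# exhaustive. Brackets are (low, high) pairs with inclusive bounds, and
# integer values are assumed (hence the "+ 1" in the gap check).
def validate_brackets(brackets):
    """Return a list of problems found in a set of (low, high) brackets."""
    problems = []
    ordered = sorted(brackets)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:
            problems.append(f"overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            problems.append(f"gap: between {hi1} and {lo2}")
    return problems

overlapping = [(20, 31), (31, 40), (40, 55), (55, 70)]   # 31, 40, 55 fit twice
exclusive   = [(20, 30), (31, 40), (41, 55), (56, 70)]   # every age fits once

print(validate_brackets(overlapping))  # three overlap messages
print(validate_brackets(exclusive))    # []
```

Running the overlapping set reports the boundary collisions, while the corrected set passes cleanly.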

Ask specific questions

Nonspecific questions can confuse respondents and influence results.

Do you like orange juice?

  • Like very much
  • Neither like nor dislike
  • Dislike very much

This question is very unclear. Is it asking about taste, texture, price, or the nutritional content? Different respondents will read this question differently.

A specific question will get more specific answers that are actionable.

How much do you like the current price of orange juice?

This question is more specific and will get better results.

If you need to collect responses about more than one aspect of a subject, you can include multiple questions on it. (Do you like the taste of orange juice? Do you like the nutritional content of orange juice? etc.)

Use a variety of question types

If all of your questionnaire, survey or poll questions are structured the same way (e.g., yes/no or multiple choice), respondents are likely to become bored and tune out. That could mean they pay less attention to how they’re answering or even give up altogether.

Instead, mix up the question types to keep the experience interesting and varied. It’s a good idea to include questions that yield both qualitative and quantitative data.

For example, an open-ended questionnaire item such as “describe your attitude to life” will provide qualitative data – a form of information that’s rich, unstructured and unpredictable. The respondent will tell you in their own words what they think and feel.

A quantitative / close-ended questionnaire item, such as “Which word describes your attitude to life? a) practical b) philosophical” gives you a much more structured answer, but the answers will be less rich and detailed.

Open-ended questions take more thought and effort to answer, so use them sparingly. They also require a different kind of treatment once your survey is in the analysis stage.

3. Pre-test your questionnaire

Always pre-test a questionnaire before sending it out to respondents. This will help catch any errors you might have missed. You could ask a colleague, friend, or an expert to take the survey and give feedback. If possible, ask a few cognitive questions like, “how did you get to that response?” and “what were you thinking about when you answered that question?” Figure out what was easy for the responder and where there is potential for confusion. You can then re-word where necessary to make the experience as frictionless as possible.

If your resources allow, you could also consider using a focus group to test out your survey. Having multiple respondents road-test the questionnaire will give you a better understanding of its strengths and weaknesses. Match the focus group to your target respondents as closely as possible, for example in terms of age, background, gender, and level of education.

Note: Don't forget to make your survey as accessible as possible for increased response rates.

Questionnaire examples and templates

There are free questionnaire templates and example questions available for all kinds of surveys and market research, many of them online. But they’re not all created equal and you should use critical judgement when selecting one. After all, the questionnaire examples may be free but the time and energy you’ll spend carrying out a survey are not.

If you’re using online questionnaire templates as the basis for your own, make sure it has been developed by professionals and is specific to the type of research you’re doing to ensure higher completion rates. As we’ve explored here, using the wrong kinds of questions can result in skewed or messy data, and could even prompt respondents to abandon the questionnaire without finishing or give thoughtless answers.

You’ll find a full library of downloadable survey templates in the Qualtrics Marketplace , covering many different types of research from employee engagement to post-event feedback . All are fully customizable and have been developed by Qualtrics experts.


BRIEF RESEARCH REPORT

Validation of the Writing Strategies Questionnaire in the Context of Primary Education: A Multidimensional Measurement Model

Olga Arias-Gundín

  • 1 Department of Psychology, Sociology and Philosophy, University of Leon, Leon, Spain
  • 2 Ponferrada Associated Centre, National University of Distance Education (UNED), Leon, Spain
  • 3 Research Institute for Child Development and Education, University of Amsterdam, Amsterdam, Netherlands

Research has shown that writers seem to follow different writing strategies to juggle the high cognitive demands of writing. The use of writing strategies seems to be an important cognitive writing-related variable which influences students' behavior during writing and, therefore, the quality of their compositions. Several studies have tried to assess preferences for the use of different writing strategies in university or high-school students, while research in primary education is practically non-existent. The present study, therefore, focused on the validation of the Spanish Writing Strategies Questionnaire (WSQ-SP), designed to measure upper-primary students' preference for the use of different writing strategies, through a multidimensional model. The sample comprised 651 Spanish upper-primary students. Questionnaire data were explored by means of exploratory (EFA) and confirmatory (CFA) factor analysis. Through exploratory factor analysis, four factors were identified, labeled thinking, planning, revising, and monitoring, which represent different writing strategies. The confirmatory factor analysis confirmed the adequacy of the four-factor model, supporting a model composed of the four factors originally identified. Based on the analysis, the final questionnaire was composed of 16 items. According to the results, the Spanish version of the Writing Strategies Questionnaire (WSQ-SP) for upper-primary students is a valid and reliable instrument which can be easily applied in the educational context to explore upper-primary students' writing strategies.

Introduction

Writing has been defined as a problem-solving task that places multiple cognitive demands on the writer ( Hayes, 1996 ). As Flower and Hayes indicated in the first cognitive model of writing ( Flower and Hayes, 1980 ), writers have to manage several cognitively costly processes such as planning what to say, translating and transcribing those plans into written text, and revising either the plans or the written text ( Alamargot and Chanquoy, 2001 ; Hayes, 2012 ). The use of these processes, especially in young writers, in whom basic transcription skills are not yet automated ( Pontart et al., 2013 ; Alves et al., 2016 ; Limpo et al., 2017 ; Llaurado and Dockrell, 2020 ), consumes much of the capacity of their working memory as these processes recursively interact during composition ( McCutchen, 2011 ).

Following a comprehensive literature review, Graham and Harris (2000) concluded that writing development seems to depend on the automation of transcription skills and the acquisition of high levels of self-regulation in order to handle high-level processes such as planning and revision. Self-regulation, represented by the use of writing strategies, is a critical aspect of writing as it enables writers to achieve their writing goals ( Zeidner et al., 2000 ; Santangelo et al., 2016 ; Puranik et al., 2019 ). These strategies may reduce cognitive overload as they allow writers to divide, sequence, and regulate the attention paid to the different writing processes ( Kieft et al., 2006 ; Beauvais et al., 2011 ). Empirical research has shown that writers' strategic behavior during composition strongly predicts the quality of novices' and experts' texts ( Beauvais et al., 2011 ; Graham et al., 2017a , 2019 ; Wijekumar et al., 2019 ). Accordingly, the use of writing strategies has been generally considered to be a critical individual writing-related variable ( Kieft et al., 2008 ), and is a major focus of research in writing instruction ( Harris et al., 2010 ; Graham and Harris, 2018 ) from the earliest stages of education ( Arrimada et al., 2019 ). Exploring students' use of different writing strategies during composition therefore seems to be a critical aspect that should be considered in the fields of writing and writing instruction research.

Several studies have attempted to explore how writers differ in the use of different writing strategies ( Torrance et al., 1994 , 1999 , 2000 ; Biggs et al., 1999 ; Lavelle et al., 2002 ; Kieft et al., 2006 , 2007 , 2008 ). These studies identified two main writing strategies, related to the processes identified in the first seminal cognitive model of writing ( Flower and Hayes, 1980 ): planning and revising. According to these studies, writers who follow a planning strategy tend to plan before beginning to write, whereas writers who prefer the revising strategy tend to plan by writing a rough draft first and then revising it. Despite the high value of these studies, it is important to note that they only focused on analyzing writing strategies in undergraduate ( Torrance et al., 1994 , 1999 , 2000 ; Biggs et al., 1999 ; Lavelle et al., 2002 ; Arias-Gundín and Fidalgo, 2017 ; Robledo Ramón et al., 2018 ) and secondary-school students ( Kieft et al., 2006 , 2008 ). To our knowledge, just one study has explored the use of different writing strategies with upper-primary Flemish students ( De Smedt et al., 2018 ). In this study, the authors implemented the Writing Strategies Questionnaire initially developed by Kieft et al. (2006 , 2008) and identified four factors by means of exploratory and confirmatory factor analysis, which were labeled thinking, planning, revising, and controlling. The planning and revising strategies were consistent with those identified in previous studies with secondary school students ( Kieft et al., 2006 , 2008 ). However, in that study the authors found two additional factors. The controlling factor was defined as students' tendency to check the content or structure of their text, whereas the thinking factor refers to the extent to which students first think about the content of their text and about their writing approach before they start writing.
Thus, according to this study, it seems that the questionnaire assesses writing strategies in a more comprehensive way than initially intended by Kieft et al. (2006 , 2008) .

Additionally, it is important to consider that in all the previously reported studies, data were collected independently of the writing task through questionnaires, which may have led to biases due to self-reported estimates of writing strategies ( Fidalgo and García, 2009 ). However, it is difficult to think of a feasible alternative for exploring writing strategies which would allow researchers to collect data from a representative sample size. Therefore, it is vitally important to conduct studies to explore the psychometric properties and the validity of these questionnaires. The advantages of exploring these aspects of the Writing Strategies Questionnaire would be the possibility of capturing students' strategy preferences non-intrusively, exploring some aspects that remain unclear about writing style (i.e., stability), and the possibility of comparing student outcomes according to their writing strategy preference in intervention studies as one key individual feature of writers at different ages ( Kieft et al., 2008 ).

Therefore, the main goal of the present study is to analyze the factor structure and validity of a Spanish version of the Writing Strategies Questionnaire (WSQ-SP) ( Kieft et al., 2006 , 2008 ) implemented with Spanish upper-primary students, analyzing the fit of the factorial model proposed on the basis of the scientific literature ( De Smedt et al., 2018 ), which consists of four interrelated factors, taking into account the recursive nature of the writing process: Thinking, Planning, Revision, and Monitoring (see Figure 1 ). Additionally, the traditional two-factor model initially found ( Kieft et al., 2006 , 2008 ) will also be explored to test which is the most appropriate scale structure for the questionnaire.


Figure 1 . Hypothesized model of the factor structure of the WSQ-SP, composed of four interrelated factors.

Moreover, a second goal of the study is to analyze the factorial invariance of the proposed model by considering different variables such as gender and grade.

Materials and Methods

Participants.

The sample comprised 651 Spanish primary school students in 16 fourth-grade ( N = 178; 27%), 16 fifth-grade ( N = 246; 38%), and 14 sixth-grade classes ( N = 227; 35%). Students' ages ranged from 9 to 13 (Mage = 9.5 years, SD = 0.55 for fourth graders; Mage = 10.4 years, SD = 0.52 for fifth graders; Mage = 11.5 years, SD = 0.54 for sixth graders), with similar proportions of boys and girls (47.19% girls in 4th grade; 48.37% girls in 5th grade; 55.07% girls in 6th grade). The students came from seven public and four semi-private schools in the city of Ponferrada and from families of highly diverse socio-economic status. However, it should be noted that most students came from families with medium to high incomes.

The criteria for choosing the participants of the study were that they should be students in the 4th, 5th, or 6th grade of primary education and that Spanish should be their first language. Students in their final years of primary education were considered for developmental reasons. According to the studies of Berninger et al. (1992 , 1994 , 1996) , planning and revision skills appear progressively during the primary education stage, with the last processes appearing in the last grades (5th and 6th). Additionally, although students with learning disabilities participated in the study, their data were not considered in the analysis. This was done on the basis of previous studies, which have shown differences in the use of high-level cognitive processes between upper-primary students with and without learning disabilities ( García and Fidalgo, 2008 ; Graham et al., 2017b ).

Prior to the implementation of the study, consent was requested from the Consejería de Educación de Castilla y León [Regional Department of Education of Castilla and Leon], the autonomous community in which the study was carried out. Once the study was approved by the expert committee of the regional department of Education, the researchers contacted all the schools in Ponferrada and surrounding areas. Subsequently, a meeting was held with the heads of the schools to inform them in detail about the study and the procedure to be followed during its execution. Those schools that decided to participate in the study sent the parents an information letter in which the research aims were presented, asking them for informed consent for their children to participate in the study. They were given the opportunity to express concerns and to request that their children's data not be included in the study. Following that, the study was undertaken with participation from only those students whose parents had given informed consent. The study was conducted following the Code of Ethics of the World Medical Association (Declaration of Helsinki) ( Williams, 2008 ).

Data were collected in a natural context within regular Spanish language classes. Students were asked to complete the Spanish WSQ and write a narrative text in a 50-min session. The questionnaire was administered by one of the researchers in this study, who has a degree in Psychology and experience in administering similar kinds of tests. Additionally, she received specific training on the implementation of the questionnaire. Moreover, the assessment session was audio-recorded to make sure that the assessment procedure occurred as intended.

Students' Writing Strategies

In this study, we began with the 26-item questionnaire measuring students' writing strategies that has been used in previous studies ( Kieft et al., 2006 , 2008 ). Students rate their agreement with each item on a five-point scale (1–5).

For the translation of the questionnaire, we combined direct and inverse translation of the items. The questionnaire was translated from Dutch to English by a Dutch researcher who was also fluent in English and Spanish. Then this researcher and a member of the Spanish team each separately translated the English version into Spanish, in order to compare the two versions. The two Spanish translations were compared and discussed, looking for possible discrepancies.

Following that, an expert-panel assessed the suitability of the questionnaire according to the age of the target population. This panel of experts was made up of five schoolteachers with extensive experience in education (three in primary education, one in early childhood education and one in special needs education). Some changes were made to the wording to improve the understanding of the meaning of some items.

The first version of the questionnaire was then trialed with a small sample of upper-primary students to identify possible mistakes and assess general understanding. Students had no issues with it, hence no changes were made, and this produced the final version of the questionnaire (see Supplementary Material ).

Data Analysis

In order to explore the psychometric properties of the questionnaire, we first analyzed the normal distribution of each item, verifying that they gave kurtosis and skewness indices between ±7 and ±3, respectively ( Kline, 2011 ). The magnitude and direction of the relationship between items was also analyzed using Pearson's correlation coefficient.
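The item-screening step described above can be sketched in Python; the cut-offs follow the Kline (2011) thresholds cited in the text, and the response vector is a hypothetical set of five-point ratings:

```python
# Flag items whose skewness exceeds |3| or whose excess kurtosis exceeds
# |7| (Kline, 2011). Standard library only.
import statistics

def moments(xs):
    """Return (skewness, excess kurtosis) of a list of scores."""
    n = len(xs)
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * sd ** 4) - 3  # excess kurtosis
    return skew, kurt

def screen_item(responses, skew_cut=3.0, kurt_cut=7.0):
    """True if the item's distribution is within the Kline cut-offs."""
    skew, kurt = moments(responses)
    return abs(skew) <= skew_cut and abs(kurt) <= kurt_cut

item = [1, 2, 3, 3, 4, 4, 4, 5, 5, 5]  # hypothetical 5-point ratings
print(screen_item(item))  # True: within the cut-offs
```

In practice the same screen would be run over every questionnaire item before moving on to the correlation and factor analyses.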

The validity of the factor structure was analyzed in two steps. First, we conducted exploratory factor analysis (EFA) with the aim of determining whether the items loaded on the two factors of the original version or on the four factors proposed in the present study (see Figure 1 ). Second, we performed a confirmatory factor analysis (CFA).

The maximum likelihood method was used to estimate the model from the covariance matrix of the items in order to analyze the fit of the proposed model. To investigate the model's goodness of fit, a number of statistics were considered: (a) absolute indices, namely the ratio of Chi-square to degrees of freedom ( X 2 /df ) and the goodness-of-fit index (GFI); (b) the comparative fit index (CFI) as an incremental fit index; and (c) the adjusted goodness-of-fit index (AGFI) and the root mean square error of approximation (RMSEA) as parsimony adjustment indices. The goodness of fit of the model was assessed according to the following rules: (a) the X 2 /df ratio is <3; (b) values above 0.90 for the GFI, CFI, and AGFI are acceptable; (c) values below 0.08 for the RMSEA indicate acceptable model fit ( Browne and Cudeck, 1993 ; Hoyle, 1995 ; Kline, 1998 ; Hu and Bentler, 1999 ; Valdés et al., 2019 ).
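These decision rules can be expressed as a small Python check. This is only a sketch of the thresholds; the index values passed in below are those reported for the CFA of the WSQ-SP elsewhere in this report:

```python
# Apply the goodness-of-fit rules: chi-square/df < 3; GFI, CFI, AGFI
# above 0.90; RMSEA below 0.08.
def assess_fit(chi2_df, gfi, cfi, agfi, rmsea):
    """Return (per-rule results, overall verdict) for a fitted model."""
    checks = {
        "chi2/df < 3": chi2_df < 3,
        "GFI > 0.90": gfi > 0.90,
        "CFI > 0.90": cfi > 0.90,
        "AGFI > 0.90": agfi > 0.90,
        "RMSEA < 0.08": rmsea < 0.08,
    }
    return checks, all(checks.values())

# Indices reported for the four-factor model of the WSQ-SP:
checks, acceptable = assess_fit(chi2_df=2.23, gfi=0.96, cfi=0.93,
                                agfi=0.95, rmsea=0.04)
print(acceptable)  # True: every rule is satisfied
```

Keeping the rules in one place makes it easy to compare competing models (such as the two-factor and four-factor structures) against the same criteria.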

Finally, the factorial invariance of the proposed model was analyzed by testing the fit of the model using confirmatory factor analysis (CFA) and composite reliability considering the variables gender and school year.

Results

First, the results of the exploratory factor analysis (EFA) of the WSQ-SP are provided in order to check the factor structure of the proposed model, as well as the loading of the items on each of the factors. Second, the results of the confirmatory factor analysis (CFA) are presented showing the fit of the proposed model, as well as a comparison with the traditional two-dimensional model. Finally, the results are presented with respect to the factorial invariance of the WSQ-SP questionnaire considering gender and grade.

Exploratory Factor Analysis (EFA)

All of the items exhibited values within the range of normal distribution (skewness: ranging between −1.37 and 1.20; kurtosis: ranging between −0.96 and 0.82), hence the hypothesis of univariate normality was not rejected in any case ( Kline, 2011 ).

An exploratory factor analysis (EFA) was carried out using the Maximum Likelihood extraction method and Oblimin rotation. The data showed a good fit for this kind of model, evidenced by Bartlett's sphericity test (χ 2 (171) = 2216.68, p < 0.001) and the Kaiser-Meyer-Olkin (KMO) value of 0.83 ( Lloret-Segura et al., 2014 ). As a criterion for item inclusion, factor loadings >0.30 on only one of the factors were required, reflecting the theoretical soundness of the scale ( Hair et al., 1999 ). Ten items were excluded because they did not match the different factors (items 5, 6, 7, 9, 10, 12, 15, 16, 20, and 24). The results showed that the 16 items of the scale are grouped into four factors, which were theoretically identified and retained. These factors were labeled revising, monitoring, thinking, and planning, and together explain 32.08% of the variance. The first factor, monitoring , corresponds to how much students checked the content or structure of their text during composition. This factor consisted of six items explaining 18.0% of the variance and had a composite reliability of 0.82. The second factor, revising , is related to how much students revised the content of their text once the text was written. This factor included three items explaining 7.9% of the variance and had a composite reliability of 0.85. The third factor, planning , is related to how much students thought about the content of their text in advance, using external planning devices such as a draft sheet. This factor included three items explaining 3.8% of the variance and had a composite reliability of 0.75. Finally, the fourth factor, thinking , corresponds to how much students needed to have a clear idea of the content or structure of the text in their minds before they started to write. This factor consisted of four items explaining 2.4% of the variance and had a composite reliability of 0.79 (see Table 1 ).
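The item-retention criterion (a loading above 0.30 on exactly one factor) can be sketched as follows; the loading matrix here is hypothetical, not the paper's actual loadings:

```python
# Keep an item only if it loads above the cutoff on exactly one factor;
# cross-loading items and items with no salient loading are dropped.
def retained_items(loadings, cutoff=0.30):
    """loadings: {item: [loading on each factor]} -> list of items kept."""
    keep = []
    for item, row in loadings.items():
        salient = [abs(x) for x in row if abs(x) > cutoff]
        if len(salient) == 1:          # loads clearly on a single factor
            keep.append(item)
    return keep

loadings = {
    "item1": [0.62, 0.10, 0.05, 0.08],   # clean loading -> retained
    "item5": [0.35, 0.33, 0.02, 0.01],   # cross-loading -> excluded
    "item9": [0.12, 0.18, 0.22, 0.15],   # no salient loading -> excluded
}
print(retained_items(loadings))  # ['item1']
```

Applied to the full 26-item loading matrix, a rule of this form would yield the 16-item scale described above.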


Table 1 . Exploratory factor analysis (EFA) of the WSQ-SP.

Confirmatory Factor Analysis (CFA)

We performed CFA on the 16 items of the WSQ-SP using the Amos package for SPSS, with Maximum Likelihood (ML) estimation. The results of the CFA suggest that, overall, the model had a good fit to the data according to the indices (χ 2 / df = 2.23; GFI = 0.96; AGFI = 0.95; CFI = 0.93; RMSEA = 0.04, CI 0.03–0.05).

The values of the regression coefficients suggest that the factors explained an acceptable part of the variance of the items (see Figure 2 ). The correlation between the factors indicated that the factors were related but did not present problems of collinearity.


Figure 2 . Path diagram of the hypothesized model. Confirmatory Factor Analysis of the questionnaire.

Considering that the proposed model was corroborated by the results, it was compared with the traditional two-dimensional structure identified in previous studies ( Kieft et al., 2006 , 2008 ). The model proposed in this study exhibited the best factorial fit (see Table 2 ).


Table 2 . Goodness of fit indices for each model of the CFA of the WSQ-SP ( N = 651).

Factor Invariance Analysis

To check that the fit of the model was not significantly affected by the features of the sample, the proposed model was subjected to CFA with the sample split by gender and grade. These two variables were chosen for the following reasons. Gender was considered because some studies have shown it to be a variable that can influence student learning and achievement in general (e.g., Reilly et al., 2019 ) and, specifically, the use of cognitive writing strategies (e.g., Berninger et al., 1992 ; Jones, 2011 ). Grade was chosen because it is during this period of schooling that higher-level cognitive processes related to textual planning and revision appear, following different rates of development ( Berninger et al., 1992 , 1994 , 1996 ). The aim was to ensure that the questionnaire is reliable regardless of gender or grade.

As Table 3 shows, the composite reliability of each factor in all of the proposed models, based on the characteristics of the sample and their combinations, is high (ranging between 0.81 and 0.92 for the monitoring factor; 0.70 and 0.93 for the thinking factor; 0.70 and 0.82 for the planning factor; and 0.81 and 0.91 for the revising factor). The model shows a good overall fit for the gender and grade variables, with the indicators meeting the established parameters. There was just one exception: the adjusted goodness-of-fit index (AGFI) for 4th grade students (0.88), which was very close to the desired value (0.90). When the model was analyzed based on the interaction of gender and grade, the absolute index (the ratio of Chi-square to degrees of freedom) and the RMSEA as the parsimony adjustment index demonstrated acceptable model fit, with the remaining indicators being close to the desired value (0.90). However, it is important to note that when the model was analyzed based on the gender-grade interaction, the sample shrank considerably. This influenced the results, given that CFA is sensitive to sample size. The literature recommends performing CFA with samples of more than 200 participants ( Valdés et al., 2019 ). In all cases where the model was analyzed with samples of fewer than 200 students, some indicators did not reach the desired values, as Table 3 shows.
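For reference, composite reliability can be computed from standardized factor loadings with the usual formula CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The sketch below uses hypothetical loadings for illustration, not values from Table 3:

```python
# Composite reliability (construct reliability) from standardized
# factor loadings: CR = (sum of loadings)^2 /
#                       ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 - loading^2.
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical standardized loadings for a six-item factor:
monitoring = [0.70, 0.68, 0.65, 0.72, 0.60, 0.66]
print(round(composite_reliability(monitoring), 2))  # 0.83
```

Values of 0.70 and above are conventionally read as acceptable, which is the benchmark the reliabilities in Table 3 are being compared against.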


Table 3 . Goodness-of-Fit Indices for the proposed model of the questionnaire based on sample features.

Discussion

The main goal of the present study was to analyze the factor structure and validity of the Spanish WSQ-SP with upper-primary students. An additional goal was to analyze the factorial invariance of the proposed model by considering different variables such as gender and grade.

With regard to the first goal of the study, the results relating to the questionnaire's factor structure were in line with the previous study carried out with Flemish upper-primary students ( De Smedt et al., 2018 ) in which four factors were identified; planning, revising, monitoring and thinking. In addition, on comparing this model with the two-factor model (i.e., planning and revising), generally identified in previous studies with more expert writers ( Kieft et al., 2006 , 2008 ), the four-factor model demonstrated a better match with the questionnaire structure.

This four-factor model is consistent with the differentiation of planning and revision processes that has generally been made in terms of their occurrence during the process of writing a text ( Berninger et al., 1994 ). As planning and revision can occur before or during translating, a distinction was made between advanced and online planning, and between post-translation and online revision. In this way, the thinking and planning factors were related to two different, but complementary, ways of planning. According to previous studies, writers differ in how they plan. While some writers make an outline in note form before drafting, others plan without producing an outline. This latter form of planning has been called “mental planning” ( Kellogg, 1988 ; Torrance et al., 2000 ). Thus, the thinking factor would correspond to mental planning while the planning factor would correspond to outline planning. Similarly, the revising and monitoring factors can be interpreted according to when revision occurs. According to Berninger and Swanson (1994) , considering the timing of revision, it is possible to differentiate between online revision (i.e., revision that takes place during composition) and post-translation revision (i.e., revision that takes place after composition). Thus, the revising factor would correspond to post-translation revision while the monitoring factor would correspond to online revision. In other words, the results of the present study indicate that the questionnaire is not only exploring students' use of planning and revising strategies in a general way, but is also assessing different types of planning and revision strategies depending on when they take place while students are writing a text. These results are in line with the arguments presented by Kieft et al. (2007) and Tillema et al. (2011) , who pointed out that the revising scale was composed not only of items related to post-translation revision but also to monitoring.
Moreover, the better fit of the four-factor model can be explained by the fact that these processes seem to have different rates of development ( Berninger et al., 1992 , 1994 , 1996 ). Based on cross-sectional studies with students aged between 6 and 15 years old, the authors found that online planning and revision seem to appear at around ages 6–9 (1st–3rd grades), whereas advanced planning and post-translation revision were the last processes to appear, around the last years of primary school (ages 9–12; 4th–6th grades). This would clearly explain why the four-factor model has a better fit to the data from primary school pupils. Here, it is also important to consider that the four factors were shown to exhibit correlation, but no problems of collinearity were found. This result is in line with the view of writing as a recursive activity in which one process may interrupt others during composition ( Flower and Hayes, 1980 ).

In terms of the second goal of the study, analyzing the factorial invariance of the proposed model across variables such as gender and grade, the results showed that the questionnaire structure was independent of these sample characteristics. The results of the present study therefore seem to be generalizable to upper-primary students regardless of gender or grade.

In summary, the major contribution of this study is the validation of the WSQ-SP with upper-primary students, as validation is a critical step in the development of reliable measurement tools in all scientific domains (Muñiz and Fonseca-Pedrero, 2019). From this study, we can conclude that the questionnaire provides more precise information than initially expected and that it is a suitable tool for easily and reliably assessing upper-primary students' writing strategies.

The validation of this questionnaire is a first step toward a reliable analysis of this variable, which will continue with the analysis of aspects that have not yet been investigated, such as its stability, its moderating effect on writing intervention in upper-primary students (Kieft et al., 2006, 2008), and the effect of instruction itself on writing. Having a validated questionnaire will also make it possible to analyze the relationship between students' use of strategies and other important writing-related variables such as reading (Fidalgo et al., 2014; Qin and Liu, 2021), motivation (Rocha et al., 2019), and students' knowledge (Wijekumar et al., 2019). It would also be interesting to analyze the relationship between the results provided by this scale and the writing processes students follow, through the use of online measures such as the triple task (García and Fidalgo, 2008; Fidalgo et al., 2014) and thinking aloud (López et al., 2019).

Finally, as an educational contribution, this instrument may be a useful tool for providing teachers with information about their students' strategies and, consequently, for helping them adapt writing instruction to their students' needs. This is likely to have a positive impact on students' writing performance, not only at initial educational levels (e.g., López et al., 2017) but also at later educational stages, such as university, where students often find it difficult to write academic texts (Connelly et al., 2005).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, upon request to the corresponding author, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by Consejería de Educación de Castilla y León. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the design of the work, the analysis and interpretation of data, and the drafting and critical revision of the manuscript, and approved it for publication.

Funding

This research was supported by the Spanish Ministry of Economy and Competitiveness through project EDU2015-67484-P (MINECO/FEDER).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank staff and students at the Peñalba, Flores del Sil, Espíritu Santo, San Antonio, Jesús Maestro, La Borreca, Navaliegos, Valentín García Yebra, Concepcionistas, Asunción, and San Ignacio schools for their assistance in completing this study.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.700770/full#supplementary-material

Alamargot, D., and Chanquoy, L. (2001). Studies in Writing Series: Vol. 9. Through the Models of Writing. Dordrecht: Kluwer Academic. doi: 10.1007/978-94-010-0804-4

Alves, R. A., Limpo, T., Fidalgo, R., Carvalhais, L., Pereira, L. Á., and Castro, S. L. (2016). The impact of promoting transcription on early text production: effects on bursts and pauses, levels of written language, and writing performance. J. Educ. Psychol. 108, 665–679. doi: 10.1037/edu0000089

Arias-Gundín, O., and Fidalgo, R. (2017). El perfil escritor como variable moduladora de los procesos involucrados en la composición escrita en estudiantes universitarios [Writer profile as a modulating variable of processes involved in written composition in undergraduate students]. EJIHPE 7, 59–68. doi: 10.30552/ejihpe.v7i1.195

Arrimada, M., Torrance, M., and Fidalgo, R. (2019). Effects of teaching planning strategies to first-grade writers. Br. J. Educ. Psychol . 89, 670–688. doi: 10.1111/bjep.12251

Beauvais, C., Olive, T., and Passerault, J. M. (2011). Why are some texts good and others not? Relationship between text quality and management of the writing processes. J. Educ. Psychol. 103, 415–428. doi: 10.1037/a0022545

Berninger, V. W., Cartwright, A. C., Yates, C. M., Swanson, H. L., and Abbott, R. D. (1994). Developmental skills related to writing and reading acquisition in the intermediate grades: shared and unique functional systems. Read. Writ. 6, 161–196. doi: 10.1007/BF01026911

Berninger, V. W., and Swanson, H. L. (1994). “Modifying Hayes and Flower's model of skilled writing to explain beginning and developing writing,” in Children's Writing: Toward a Process Theory of the Development of Skilled Writing, Vol. 2 , ed E. C. Butterfield (Greenwich, CT: JAI Press), 57–81.

Berninger, V. W., Whitaker, D., Feng, Y., Swanson, H. L., and Abbott, R. D. (1996). Assessment of planning, translating, and revising in junior high writers. J. Sch. Psychol. 34, 23–52. doi: 10.1016/0022-4405(95)00024-0

Berninger, V. W., Yates, C. M., Cartwright, A. C., Rutberg, J., Remy, E., and Abbott, R. D. (1992). Lower-level developmental skills in beginning writing. Read. Writ. 4, 257–280. doi: 10.1007/BF01027151

Biggs, J., Lai, P., Tang, C., and Lavelle, E. (1999). Teaching writing to ESL graduate students. A model and an illustration. Br. J. Educ. Psychol . 69, 293–306. doi: 10.1348/000709999157725

Browne, M., and Cudeck, R. (1993). “Alternative ways of assessing model fit,” in Testing Structural Equation Models , eds K. A. Bollen and J. S. Long (Newbury Park, CA: SAGE Publications), 136–162.

Connelly, V., Dockrell, J. E., and Barnett, J. (2005). The slow handwriting of undergraduate students constrains overall performance in exam essays. Educ. Psychol . 25, 99–107. doi: 10.1080/0144341042000294912

De Smedt, F., Merchie, E., Barendse, M., Rosseel, Y., De Naeghel, J., and Van Keer, H. (2018). Cognitive and motivational challenges in writing: studying the relation with writing performance across students' gender and achievement level. Read. Res. Q . 53, 249–272. doi: 10.1002/rrq.193

Fidalgo, R., and García, J. N. (2009). La evaluación de la metacognición en la composición escrita [Evaluating metacognition in written composition]. Estud. Psicol. 30, 51–72. doi: 10.1174/021093909787536290

Fidalgo, R., Torrance, M., Arias-Gundín, O., and Martínez-Cocó, B. (2014). Comparison of reading-writing patterns and performance of students with and without reading difficulties. Psicothema 26, 442–448. doi: 10.7334/psicothema2014.23

Flower, L., and Hayes, J. R. (1980). “The dynamics of composing: making plans and juggling constraints,” in Cognitive Processes in Writing , eds L. W. Gregg and E. R. Steinberg (Hillsdale, NJ: Lawrence Erlbaum Associates), 31–49.

García, J. N., and Fidalgo, R. (2008). Orchestration of writing processes and writing products: a comparison of sixth-grade students with and without learning disabilities. Learn. Disabil. Contemp. J . 6, 77–98.

Graham, S., Collins, A. A., and Rigby-Wills, H. (2017a). Writing characteristics of students with learning disabilities and typically achieving peers: a meta-analysis. Except. Child. 83, 199–218. doi: 10.1177/0014402916664070

Graham, S., and Harris, K. (2018). “Evidence-based writing practices: a meta-analysis of existing meta-analysis,” in Design Principles for Teaching Effective Writing: Theoretical and Empirical Grounded Principles , eds R. Fidalgo, K. Harris, and M. Braaksma (Leiden: Brill Editions), 13–37. doi: 10.1163/9789004270480_003

Graham, S., and Harris, K. R. (2000). The role of self-regulation and transcription skills in writing and writing development. Educ. Psychol. 35, 3–12. doi: 10.1207/S15326985EP3501_2

Graham, S., Harris, K. R., Fishman, E., Houston, J., Wijekumar, K., Lei, P. W., et al. (2019). Writing skills, knowledge, motivation, and strategic behavior predict students' persuasive writing performance in the context of robust writing instruction. Element. Sch. J . 119, 487–510. doi: 10.1086/701720

Graham, S., Harris, K. R., Kiuhara, S. A., and Fishman, E. J. (2017b). The relationship among strategic writing behavior, writing motivation, and writing performance with young, developing writers. Element. Sch. J . 118, 82–104. doi: 10.1086/693009

Hair, J., Anderson, R., Tatham, R., and Black, W. (1999). Análisis multivariante, 5 Edn . Madrid: Prentice Hall.

Harris, K. R., Santangelo, T., and Graham, S. (2010). “Metacognition and strategies instruction in writing,” in Metacognition, Strategy Use, and Instruction , eds H. S. Waters and W. Schneider (New York, NY: The Guilford Press), 226–256.

Hayes, J. R. (1996). “A new framework for understanding cognition and affect in writing,” in The Science of Writing: Theories, Methods, Individual Differences, and Applications , eds C. M. Levy and S. Ransdell (Mahwah, NJ: Lawrence Erlbaum Associates), 1–27.

Hayes, J. R. (2012). “Evidence from language bursts, revision, and transcription for translation and its relation to other writing processes,” in Translation of Thought to Written Text While Composing: Advancing Theory, Knowledge, Research, Methods, Tools, and Applications , eds M. Fayol, D. Alamargot, and V. W. Berninger (New York, NY: Psychology Press), 15–25.

Hoyle, R. H. (1995). Structural Equation Modeling: Concepts, Issues, and Applications . Thousand Oaks, CA: Sage.

Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equat. Model. 6, 1–55. doi: 10.1080/10705519909540118

Jones, S. (2011). Mapping the landscape: gender and the writing classroom. J. Writ. Res . 3, 161–179. doi: 10.17239/jowr-2012.03.03.2

Kellogg, R. T. (1988). Attentional overload and writing performance: effects of rough draft and outline strategies. J. Exp. Psychol. 14, 355–365. doi: 10.1037/0278-7393.14.2.355

Kieft, M., Rijlaarsdam, G., Galbraith, D., and Van den Bergh, H. (2007). The effects of adapting a writing course to students' writing strategies. Br. J. Educ. Psychol . 77, 565–578. doi: 10.1348/096317906X120231

Kieft, M., Rijlaarsdam, G., and Van den Bergh, H. (2006). Writing as a learning tool: testing the role of students' writing strategies. Eur. J. Psychol. Educ . 12, 17–34. doi: 10.1007/BF03173567

Kieft, M., Rijlaarsdam, G., and van den Bergh, H. (2008). An aptitude-treatment interaction approach to writing-to-learn. Learn. Instruct . 18, 379–390. doi: 10.1016/j.learninstruc.2007.07.004

Kline, P. (1998). The New Psychometrics: Science, Psychology, and Measurement . London: Psychology Press.

Kline, R. B. (2011). “Convergence of structural equation modeling and multilevel modeling,” in Handbook of Methodological Innovation in Social Research Methods , eds M. Williams and W. P. Vogt (London: Sage), 562–589. doi: 10.4135/9781446268261.n31

Lavelle, E., Smith, J., and O'Ryan, L. (2002). The writing approaches of secondary students. Br. J. Educ. Psychol. 72, 399–418. doi: 10.1348/000709902320634564

Limpo, T., Alves, R. A., and Connelly, V. (2017). Examining the transcription-writing link: effects of handwriting fluency and spelling accuracy on writing performance via planning and translating in middle grades. Learn. Individ. Diff . 53, 26–36. doi: 10.1016/j.lindif.2016.11.004

Llaurado, A., and Dockrell, J. E. (2020). The impact of orthography on text production in three languages: catalan, English, and Spanish. Front. Psychol . 11:878. doi: 10.3389/fpsyg.2020.00878

Lloret-Segura, S., Ferreres-Traver, A., Hernández-Baeza, A., and Tomás-Marco, I. (2014). El análisis factorial exploratorio de los ítems: una guía práctica, revisada y actualizada. Ann. Psychol . 30, 1151–1169. doi: 10.6018/analesps.30.3.199361

López, P., Torrance, M., and Fidalgo, R. (2019). The online management of writing processes and their contribution to text quality in upper-primary students. Psicothema 31, 311–318. doi: 10.7334/psicothema2018.326

López, P., Torrance, M., Rijlaarsdam, G., and Fidalgo, R. (2017). Effects of direct instruction and strategy modeling on upper-primary students' writing development. Front. Psychol . 8:1054. doi: 10.3389/fpsyg.2017.01054

McCutchen, D. (2011). From novice to expert: implications of language skills and writing-relevant knowledge for memory during the development of writing skill. J. Writ. Res . 3, 51–68. doi: 10.17239/jowr-2011.03.01.3

Muñiz, J., and Fonseca-Pedrero, E. (2019). Diez pasos para la construcción de un test [Ten steps for test development]. Psicothema 31, 7–16. doi: 10.7334/psicothema2018.291

Pontart, V., Bidet-Ildei, C., Lambert, E., Morisset, P., Flouret, L., and Alamargot, D. (2013). Influence of handwriting skills during spelling in primary and lower secondary grades. Front. Psychol . 4:818. doi: 10.3389/fpsyg.2013.00818

Puranik, C. S., Boss, E., and Wanless, S. (2019). Relations between self-regulation and early writing: domain specific or task dependent?. Early Child. Res. Q. 46, 228–239. doi: 10.1016/j.ecresq.2018.02.006

Qin, J., and Liu, Y. (2021). The influence of reading texts on L2 reading-to-write argumentative writing. Front. Psychol. 12:655601. doi: 10.3389/fpsyg.2021.655601

Reilly, D., Neumann, D. L., and Andrews, G. (2019). Gender differences in reading and writing achievement: evidence from the National Assessment of Educational Progress (NAEP). Am. Psychol . 74:445. doi: 10.1037/amp0000356

Robledo Ramón, P., Arias-Gundín, O., Palomo, M., Andina, E., and Rodríguez, C. (2018). Perfil escritor y conocimiento metacognitivo de las tareas académicas en los estudiantes universitarios. Publicaciones 48, 243–270. doi: 10.30827/publicaciones.v48i1.7335

Rocha, R. S., Filipe, M., Magalhães, S., Graham, S., and Limpo, T. (2019). Reasons to write in grade 6 and their association with writing quality. Front. Psychol . 10:2157. doi: 10.3389/fpsyg.2019.02157

Santangelo, T., Harris, K., and Graham, S. (2016). “Self-regulation and writing,” in Handbook of Writing Research, 2nd Edn , eds C. A. MacArthur, S. Graham, and J. Fitzgerald (New York, NY: Guilford Press), 174–193.

Tillema, M., van den Bergh, H., Rijlaarsdam, G., and Sanders, T. (2011). Relating self-reports of writing behaviour and online task execution using a temporal model. Metacogn. Learn . 6, 229–253. doi: 10.1007/s11409-011-9072-x

Torrance, M., Thomas, G. V., and Robinson, E. J. (1994). The writing strategies of graduate research students in the social sciences. High. Educ . 27, 379–392. doi: 10.1007/BF03179901

Torrance, M., Thomas, G. V., and Robinson, E. J. (1999). Individual differences in the writing behaviour of undergraduate students. Br. J. Educ. Psychol. 69, 189–199. doi: 10.1348/000709999157662

Torrance, M., Thomas, G. V., and Robinson, E. J. (2000). Individual differences in undergraduate essay writing strategies. A longitudinal study. High. Educ . 39, 181–200. doi: 10.1023/A:1003990432398

Valdés, A. A., García, F. I., Torres, G. M., Urías, M., and Grijalva, C. S. (2019). Medición en Investigación Educativa con Apoyo del SPSS y el AMOS [Measurement in Educational Research with Support of SPSS and AMOS] . Clave Editorial.

Wijekumar, K., Graham, S., Harris, K. R., Lei, P. W., Barkel, A., Aitken, A., et al. (2019). The roles of writing knowledge, motivation, strategic behaviors, and skills in predicting elementary students' persuasive writing from source material. Read. Writ . 32, 1431–1457. doi: 10.1007/s11145-018-9836-7

Williams, J. R. (2008). Revising the declaration of Helsinki. World Med. J. 54, 120–125.

Zeidner, M., Boekaerts, M., and Pintrich, P. R. (2000). “Self-regulation: directions and challenges for future research,” in Self-Regulation: Theory, Research, and Applications , eds M. Boekaerts, P. R. Pintrich, and M. Zeidner (Orlando, FL: Academic Press), 749–768. doi: 10.1016/B978-012109890-2/50052-4

Keywords: writing strategies, questionnaire, upper-primary education, psychometrics, validity

Citation: Arias-Gundín O, Real S, Rijlaarsdam G and López P (2021) Validation of the Writing Strategies Questionnaire in the Context of Primary Education: A Multidimensional Measurement Model. Front. Psychol. 12:700770. doi: 10.3389/fpsyg.2021.700770

Received: 26 April 2021; Accepted: 17 May 2021; Published: 05 July 2021.

Copyright © 2021 Arias-Gundín, Real, Rijlaarsdam and López. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Paula López, plopg@unileon.es

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Questionnaire Design | Methods, Question Types & Examples

Published on 6 May 2022 by Pritha Bhandari . Revised on 10 October 2022.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs surveys
  • Questionnaire methods
  • Open-ended vs closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleaning and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalise your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimising these will help you avoid sampling bias .

Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • Cost-effective
  • Easy to administer for small and large groups
  • Anonymous and suitable for sensitive topics

But they may also be:

  • Unsuitable for people with limited literacy or verbal skills
  • Susceptible to nonresponse bias (most people invited may not complete the questionnaire)
  • Biased towards people who volunteer because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • Help you ensure the respondents are representative of your target audience
  • Allow clarifications of ambiguous or unclear questions and answers
  • Have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • Costly and time-consuming to perform
  • More difficult to analyse if you have qualitative responses
  • Likely to contain experimenter bias or demand characteristics
  • Likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions, or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalisable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

Example response items for a nominal question about race:

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert-type questions collect ordinal data using rating scales with five or seven points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.
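As a minimal sketch of this idea, the individual Likert-item responses (the item names and data below are hypothetical) can be averaged into one composite score per respondent, which is then commonly treated as interval-scale data:

```python
# Hypothetical sketch: combining Likert-type items into a composite score.
# Each respondent answered four 5-point Likert items
# (1 = strongly disagree, 5 = strongly agree).

respondents = [
    {"item1": 4, "item2": 5, "item3": 4, "item4": 3},
    {"item1": 2, "item2": 1, "item3": 2, "item4": 2},
]

def composite_score(answers):
    """Average the Likert items into one composite score."""
    return sum(answers.values()) / len(answers)

scores = [composite_score(r) for r in respondents]
print(scores)  # [4.0, 1.75]
```

The composite, not any single item, is what is analysed as quantitative data; a sum instead of a mean would work equally well, since the two differ only by a constant factor.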

With interval or ratio data, you can apply strong statistical hypothesis tests to address your research aims.

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer ‘multiracial’ for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle to productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarising responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorise answers, and you may also need to involve other researchers in data analysis for high reliability .
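To illustrate what a coding scheme does, here is a deliberately simplified keyword-based sketch (the categories and keywords are hypothetical). Real qualitative coding is done by trained raters, with inter-rater reliability checks; this only shows the idea of mapping free text onto predefined categories:

```python
# Hypothetical sketch of a keyword-based coding scheme for open-ended
# responses about obstacles to productivity in remote work.

coding_scheme = {
    "workload": ["busy", "overload", "too much work"],
    "communication": ["meeting", "email", "unclear"],
}

def code_response(text):
    """Return the categories whose keywords appear in the response."""
    text = text.lower()
    return [category for category, keywords in coding_scheme.items()
            if any(keyword in text for keyword in keywords)]

print(code_response("Too many meetings and email threads"))
# ['communication']
```

In practice, a response can fall into several categories at once, which is why the function returns a list rather than a single label.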

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Use a mix of both positive and negative frames to avoid bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favour flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barrelled questions. Double-barrelled questions ask about more than one item at a time, which can confuse respondents.

For example, a double-barrelled question such as asking respondents whether the government should provide both clean drinking water and high-speed internet could be difficult to answer for those who feel strongly about the right to clean drinking water but not high-speed internet. They might answer only about the topic they feel passionate about or provide a neutral answer instead – but neither of these options captures their true views.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

You can organise the questions logically, with a clear progression from simple to complex. Alternatively, you can randomise the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioural or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimise order effects because they can be a source of systematic error or bias in your study.

Randomisation

Randomisation involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomisation, order effects will be minimised in your dataset. But a randomised order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
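Randomisation of question order is straightforward to implement in software. The sketch below (question texts are illustrative) gives each respondent their own shuffled copy of the questionnaire, so any order effects average out across the sample instead of biasing every response in the same direction:

```python
import random

# Hypothetical sketch: a different question order per respondent.
questions = [
    "How satisfied are you with your commute?",
    "Do you favour flexible work-from-home policies?",
    "How many days per week do you work remotely?",
]

def randomized_questionnaire(questions, seed=None):
    """Return a shuffled copy of the question list for one respondent.

    A per-respondent seed makes each ordering reproducible, which helps
    when you later need to match answers back to the order shown.
    """
    rng = random.Random(seed)
    order = questions[:]  # copy, so the master list stays intact
    rng.shuffle(order)
    return order

for respondent_id in range(3):
    print(respondent_id, randomized_questionnaire(questions, seed=respondent_id))
```

Storing the seed (or the realised order) alongside each respondent's answers is what makes it possible to analyse order effects afterwards.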

Follow this step-by-step guide to design your questionnaire.

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalise your variables of interest into questionnaire items. Operationalising concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they may become disengaged or inattentive for the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivised or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomise questions. Randomising questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection, and analysis.

You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered.
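One common reliability check on pilot data is Cronbach’s alpha for a set of related items, using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). The pilot responses below are invented for illustration:

```python
from statistics import pvariance

# Cronbach's alpha from pilot responses, pure standard library.
# rows: one list of item scores per respondent.

def cronbach_alpha(rows):
    k = len(rows[0])                        # number of items
    items = list(zip(*rows))                # one column per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

pilot = [
    [4, 5, 4],
    [2, 3, 3],
    [5, 5, 4],
    [3, 3, 2],
]
print(round(cronbach_alpha(pilot), 2))  # 0.93 for this toy data
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.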

Frequently asked questions

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements and a continuum of response options, usually with five or seven points, to capture their degree of agreement.
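Combining Likert items into a single score can be sketched as follows. The item names and responses are invented; reverse-coding negatively worded items before averaging (so a higher score always means a more positive attitude) is one standard convention, shown here for a 5-point scale:

```python
# Minimal sketch of a composite Likert score on a 5-point scale.
# Negatively worded items are reverse-coded before averaging.

SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

def composite_score(responses, reverse_coded=()):
    """responses: dict of item -> score in 1..SCALE_MAX; returns the mean."""
    total = 0
    for item, value in responses.items():
        if item in reverse_coded:
            value = SCALE_MAX + 1 - value   # 5 -> 1, 4 -> 2, ...
        total += value
    return total / len(responses)

answers = {"q1": 4, "q2": 5, "q3": 2}       # q3 is negatively worded
print(composite_score(answers, reverse_coded={"q3"}))  # (4 + 5 + 4) / 3
```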

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bhandari, P. (2022, October 10). Questionnaire Design | Methods, Question Types & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/questionnaire-design/

Pritha Bhandari






The Trump Trial’s Extraordinary Opening

The first days of the criminal case against the former president have been mundane, even boring—and that’s remarkable.

Trump staring at the camera

This is The Trump Trials by George T. Conway III, a newsletter that chronicles the former president’s legal troubles. Sign up here .

The defendant nodded off a couple of times on Monday. And I have to confess, as a spectator in an overflow courtroom watching on closed-circuit television, so did I.

Legal proceedings can be like that. Mundane, even boring. That’s how the first couple of days of the trial in The People of the State of New York v. Donald J. Trump , Indictment No. 71543–2023, felt much of the time. Ordinary—despite being so extraordinary. And, frankly, that was comforting. The ordinary mechanics of the criminal-litigation process were applied fairly, efficiently, and methodically to a defendant of unparalleled notoriety, one who has devoted himself to undermining the rule of law.

Certainly the setting was ordinary. When the Criminal Courts Building, at 100 Centre Street in Lower Manhattan, first opened in 1941, an architectural critic lamented that the Art Deco structure, a New Deal/Public Works Administration project, was “ uncommunicative .” Eight decades later, it still has little to say. Raw and spartan, it’s a bit of a mystery to people who aren’t familiar with it (including me, a civil litigator who, despite having been admitted to the New York state bar some 35 years ago, practiced mostly in federal and Delaware courts). A pool reporter yesterday described the surroundings as “drab.”

Drab indeed, but busy—very busy. There’s never a want of bustle here, of the sort you would expect. As the former federal prosecutor Andrew Weissmann put it this week, 100 Centre is, “well, Dickensian—a beehive of activity with miscreants, state prosecutors, judges, defense lawyers, probation officers, court security [and] families—in dark, dingy halls and courtrooms.” It’s a bit like New York City as a whole: How it functions, with the volume it handles, never ceases to amaze.

And how the court manages to keep track of things, Lord only knows. In contrast with the federal courts or even New York’s civil courts, it has no electronic, publicly accessible docket. The Supreme Court of the State of New York for the County on New York, Criminal Term, is, as one courthouse reporter said last month, “stuck in the past.” It’s a tribunal “where the official record is a disorganized and incomplete mass of paper with no accounting of what’s inside.” The records come in brown accordion folders—Redwelds, lawyers call them—and what judges and clerks decide to put in them is the record, and what they don’t is not.

But somehow it works. Somehow the court manages to dispose of thousands of cases a year, involving all manner of defendants and offenses. A calendar emailed to journalists by the Manhattan District Attorney’s Office listing the week’s anticipated court appearances gives you the flavor. It catalogs names seemingly of many ethnicities, with a couple of corporate entities to boot. A hodgepodge of alleged charges, including the violent and the corrupt: robbery, conspiracy, forgery, criminal mischief, identity theft, enterprise corruption, stalking, murder, attempted murder, sex trafficking, grand larceny, attempted grand larceny, possession of a forged instrument, offering a false statement for filing.

And the list contained three cases involving the crime of falsifying business records, one of which was set for trial on Monday, April 15, in Part 59, Courtroom 1530— People v. Trump .

Nothing on the calendar, other than the defendant’s readily recognizable name, would have told you there was anything special about the case. In that sense, it was ordinary. But the hubbub outside—a handful of protesters, multiple television cameras, and a long line for the press and other spectators—made clear that something somewhat special was afoot. An overflow courtroom down the hall from the main courtroom offered a closed-circuit television feed of the proceedings. Those who had lined up went through an extra set of security screeners and machines—mandated, we were told, by the United States Secret Service.

But still, so much was ordinary—the stuff of the commencement of a criminal trial, housekeeping of the sort you’d see in virtually any court about to try a criminal case. That began promptly at 10 a.m. on Monday, when Judge Juan Merchan assumed the bench. There were loose ends for the judge to tie up, pending motions to decide. Merchan denied the defendant’s motion to recuse, reading, in even tones, an opinion from the bench. The motion was frivolous; the result unsurprising. And then the parties argued some motions in limine—pretrial efforts to exclude evidence.

For example, would the notorious Access Hollywood tape that rocked the 2016 presidential campaign be played for the jury?  The prosecution said it should be: An assistant district attorney said the tape would elucidate why the defendant and his campaign were so hell-bent, to the point of falsifying business records, on keeping additional instances of the defendant’s miscreant conduct with women out of the public eye. The defense, of course, argued that playing the tape would be prejudicial. After all, this wasn’t a case about sexual assault.

The judge allowed that the tape’s existence provided context for the business-records charges but ruled that actually showing the tape to the jury would be prejudicial. Instead, the jury would be given a transcript. And speaking of sexual assault, prosecutors tried to get in an excerpt from Trump’s deposition in the E. Jean Carroll sexual-assault and defamation cases in which Trump testified that he was a “star,” and that stars historically get to do to women what Trump said on the Access Hollywood tape that he liked to do to them. Judge Merchan rightly said no, he would not allow the jury to hear that. It would be too much, too beside the point of what this case (unlike the Carroll cases) is actually about.

But as unusual and colorful as the factual predicate for the evidentiary motions was, the argument wasn’t all that interesting. It was rather low-key, in fact. Perhaps that was because none of the proffered evidence was new. But it was also because the arguing of pretrial evidentiary motions, however crucial they may be (although these, frankly, weren’t), is seldom scintillating. I can’t imagine that Donald Trump and I were the only ones watching who dozed off.

Then came jury selection, which took the rest of Monday, all of yesterday, and will probably consume tomorrow and Friday as well. (The judge will be handling his other cases today.) That was a bit more interesting, but slow going at first. Again, the ordinary met the extraordinary. Ninety-six potential jurors were brought in. The judge provided an overview of the case in the broadest terms, describing the charges in a few sentences; explained what his role and what the jury’s would be; and read the names of the cast of characters (some would be witnesses, others would simply be mentioned, including—full disclosure—my ex-wife). Still, it was mundane. It was pretty much what a judge would say in any big case.

And jury selection was a bit tedious; in a case like this, it simply has to be. Jurors were asked to give oral answers—some 42 of them, including a number with multiple subparts—to a written questionnaire. In substance: Where do you live? What do you do? What’s your educational background? What news sources do you read? What’s your experience with the legal system? Have you ever been to a Trump rally or followed him on social media? Have you belonged to any anti-Trump groups? And on and on and and on. But the most important inquiries came toward the end of the list: questions asking whether the prospective jurors could be fair. Occasionally the judge would interject, when an unusual or unclear answer was given. And once in a while there was a moment of levity: One woman—in response to a question about having relatives or close friends in the legal field—noted that she had once dated a lawyer. “It ended fine,” she volunteered, with a flatness of tone that betrayed no hint of nostalgia or loss.

This process took well over a day, and included brief follow-up questioning—“voir dire”—by the lawyers for both sides. But the judge did take a shortcut, one that saved a great deal of effort: After describing the case, but before proceeding to the individual-by-individual, question-by-question process, he asked the entire group the bottom-line question: Do any of you think you couldn’t judge the case fairly? Roughly two-thirds of this first batch of potential jurors said they couldn’t. That was extraordinary—a reflection of the fact that everyone knows who the defendant is, and that not many people lack a strong opinion about him.

And during the lawyers’ voir dire, a few interesting moments did occur, mostly when Trump’s lawyers pulled out social-media posts that they claimed showed possible bias on the part of the remaining candidates in the jury pool. One man was stricken by the court for cause because he once posted that Trump should be locked up. The Trump lawyers attempted, but failed, to get the court to strike a woman whose husband had posted some joking commentary about the former president. The judge’s response: That’s all you have? He allowed the juror to stay, and left it to counsel to decide whether to use their limited number of peremptory strikes.

In the end, for two days, the extraordinary intertwined with the ordinary, as it should in a case like this one. As one young woman from the Upper East Side, now to be known as Juror No. 2, put it during the selection process, “No one is above the law.” Let’s hope that sentiment prevails.


NPR editor Uri Berliner resigns with blast at new CEO

David Folkenflik

Uri Berliner resigned from NPR on Wednesday saying he could not work under the new CEO Katherine Maher. He cautioned that he did not support calls to defund NPR.

NPR senior business editor Uri Berliner resigned this morning, citing the response of the network's chief executive to his outside essay accusing NPR of losing the public's trust.

"I am resigning from NPR, a great American institution where I have worked for 25 years," Berliner wrote in an email to CEO Katherine Maher. "I respect the integrity of my colleagues and wish for NPR to thrive and do important journalism. But I cannot work in a newsroom where I am disparaged by a new CEO whose divisive views confirm the very problems at NPR I cite in my Free Press essay."

NPR and Maher declined to comment on his resignation.

The Free Press, an online site embraced by journalists who believe that the mainstream media has become too liberal, published Berliner's piece last Tuesday. In it, he argued that NPR's coverage has increasingly reflected a rigid progressive ideology. And he argued that the network's quest for greater diversity in its workforce — a priority under prior chief executive John Lansing — has not been accompanied by a diversity of viewpoints presented in NPR shows, podcasts or online coverage.

Later that same day, NPR pushed back against Berliner's critique.

"We're proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories," NPR's chief news executive, Edith Chapin, wrote in a memo to staff. "We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world."

Yet Berliner's commentary has been embraced by conservative and partisan Republican critics of the network, including former President Donald Trump and the activist Christopher Rufo.

Rufo is posting a parade of old social media posts from Maher, who took over NPR last month. In two examples, she called Trump a racist and also seemed to minimize the effects of rioting in 2020. Rufo is using those to rally public pressure for Maher's ouster, as he did for former Harvard University President Claudine Gay.

Others have used the moment to call for the elimination of federal funding for NPR – less than one percent of its roughly $300 million annual budget – and local public radio stations, which derive more of their funding from the government.

NPR names tech executive Katherine Maher to lead in turbulent era

Berliner reiterated in his resignation letter that he does not support such calls.

In a brief interview, he condemned a statement Maher issued Friday in which she suggested that he had questioned "whether our people are serving our mission with integrity, based on little more than the recognition of their identity." She called that "profoundly disrespectful, hurtful, and demeaning."

Berliner subsequently exchanged emails with Maher, but she did not address those comments.

"It's been building up," Berliner said of his decision to resign, "and it became clear it was on today."

For publishing his essay in The Free Press and appearing on its podcast, NPR had suspended Berliner for five days without pay. Its formal rebuke noted he had done work outside NPR without first securing its permission, as required, and had shared proprietary information.

(Disclosure: Like Berliner, I am part of NPR's Business Desk. He has edited many of my past stories. But he did not see any version of this article or participate in its preparation before it was posted publicly.)

Earlier in the day, Berliner forwarded to NPR editors and other colleagues a note saying he had "never questioned" their integrity and had been trying to raise these issues within the newsroom for more than seven years.

What followed was an email he had sent to newsroom leaders after Trump's 2016 win. He wrote then: "Primarily for the sake of our journalism, we can't align ourselves with a tribe. So we don't exist in a cocoon that blinds us to the views and experience of tens of millions of our fellow citizens."

Berliner's critique has inspired anger and dismay within the network. Some colleagues said they could no longer trust him after he chose to publicize such concerns rather than pursue them as part of ongoing newsroom debates, as is customary. Many signed a letter to Maher and Edith Chapin, NPR's chief news executive. They asked for clarity on, among other things, how Berliner's essay and the resulting public controversy would affect news coverage.

Yet some colleagues privately said Berliner's critique carried some truth. Chapin also announced monthly reviews of the network's coverage for fairness and diversity — including diversity of viewpoint.

She said in a text message earlier this week that that initiative had been discussed long before Berliner's essay, but "Now seemed [the] time to deliver if we were going to do it."

She added, "Healthy discussion is something we need more of."

Disclosure: This story was reported and written by NPR Media Correspondent David Folkenflik and edited by Deputy Business Editor Emily Kopp and Managing Editor Gerry Holmes. Under NPR's protocol for reporting on itself, no NPR corporate official or news executive reviewed this story before it was posted publicly.

  • Katherine Maher
  • uri berliner

AI Writer: Write Email, Essay 4+

AI Essay Writer: Writing Tools — AppziBrain Infotech LLP — Designed for iPad

  • Offers In-App Purchases

Screenshots

Description

Powered by cutting-edge AI technology, AI Writer generates high-quality content tailored to your needs in just seconds. Whether you need to craft a persuasive essay or a professional email, AI Writer has you covered. Whether you're stuck on an assignment or just need some extra help getting your thoughts down on paper, AI Writer is for you!

We understand that writing essays can be a tedious and time-consuming task, especially when you're struggling to come up with ideas or simply don't have the time to write. That's why we've created AI Writer: to help you write smarter, not harder. The process is simple: input your topic, select the type of essay you need, and let our AI technology do the rest. Our AI algorithms will analyze your topic and generate a comprehensive essay tailored to your specific needs, saving you time and effort.

【Writing Features】

  • Articles and Outlines: Intelligently generates articles and their outlines, assisting your writing projects.
  • Creative Writing: Compositions, stories, jokes, prose, novels, poetry, fables, scripts, etc.
  • Academic Research: Papers, experimental reports, research reports, literature reviews, academic book reviews, etc.
  • Professional Needs: Diaries, summaries, reading notes, weekly reports, work plans, personal growth plans, etc.
  • Multimedia Content: Video scripts, movie scripts, TV drama scripts, podcast scripts, animation scripts, etc.
  • Business Writing: Trending headlines, news reports, product descriptions, advertising copy, social media content, etc.
  • Personal Purposes: Travel logs, reflections, lyrics, brand stories, personal records, and thoughts, etc.

Privacy Policy: https://appzibraininfotech.blogspot.com/2024/03/privacy-policy.html
Terms of Use: https://appzibraininfotech.blogspot.com/2024/03/terms-of-use.html

App Privacy

The developer, AppziBrain Infotech LLP , indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy .

Data Used to Track You

The following data may be used to track you across apps and websites owned by other companies:

  • Identifiers

Data Not Linked to You

The following data may be collected but it is not linked to your identity:

  • Diagnostics

Privacy practices may vary, for example, based on the features you use or your age.

Information

  • writing generator Three Month $19.99
  • writing generator One Month $9.99
  • writing generator One Week $5.99
  • Developer Website
  • App Support
  • Privacy Policy

More By This Developer

Video AI Art Generator - Maker

IMAGES

  1. 30+ Questionnaire Templates (Word) ᐅ TemplateLab

  2. Research Paper Sample Survey Questionnaire For Thesis

  3. Questionnaire Sample For Research Paper PDF

  4. Belbin Questionnaire Free Essay Example

  5. 30+ Questionnaire Templates (Word)

  6. Writing Skills Survey Questionnaire

VIDEO

  1. Questionnaire Design (Academic writing

  2. Opinion Essay/IELTS Writing Task 2/ IELTS Academic/ Essay Structure/ Essay Templates

  3. What is Questionnaire? Types of Questionnaire in Research. #Research methodology notes

  4. How to Make a Questionnaire in Word

  5. DISSERTATION HELP: How to input your data

  6. Designing a Questionnaire || How to design a questionnaire || Step by step Guide

COMMENTS

  1. PDF ESLP 82 Questionnaire: Self-Assessment of English Writing Skills and

    ESLP 82 Questionnaire: Self-Assessment of English Writing Skills and Use of Writing Strategies Please rate your abilities for each item below a scale between 1 to 5.

  2. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  3. PDF Strategies for Essay Writing

    Harvard College Writing Center 5 Asking Analytical Questions When you write an essay for a course you are taking, you are being asked not only to create a product (the essay) but, more importantly, to go through a process of thinking more deeply about a question or problem related to the course. By writing about a

  4. Questionnaire

    A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people. It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective.

  5. PDF PREPARING EFFECTIVE ESSAY QUESTIONS

    This workbook is the first in a series of three workbooks designed to improve the. development and use of effective essay questions. It focuses on the writing and use of. essay questions. The second booklet in the series focuses on scoring student responses to. essay questions.

  6. Writing Survey Questions

    Writing Survey Questions. Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.

  7. Asking Analytical Questions

    When you write an essay for a course you are taking, you are being asked not only to create a product (the essay) but, more importantly, to go through a process of thinking more deeply about a question or problem related to the course. ... Your answer to that question will be your essay's thesis. You may have many questions as you consider a ...

  8. The Beginner's Guide to Writing an Essay

    The essay writing process consists of three main stages: Preparation: Decide on your topic, do your research, and create an essay outline. Writing: Set out your argument in the introduction, develop it with evidence in the main body, and wrap it up with a conclusion. Revision: Check your essay on the content, organization, grammar, spelling ...

  9. Analysing questions

    Explain why a knowledge of a learning theory was or would have been useful in the circumstances. Instructions words = explain (twice); reflect on. Subjects = two learning theories; an experience from your teaching practice; knowledge of a learning theory. Think of each criterion therefore as a mini essay.

  10. Example of a Great Essay

    An essay is a focused piece of writing that explains, argues, describes, or narrates. In high school, you may have to write many different types of essays to develop your writing skills. Academic essays at college level are usually argumentative : you develop a clear thesis about your topic and make a case for your position using evidence ...

  11. Questionnaire: Definition, How to Design, Types & Examples

    From writing the questions to designing the survey flow, the respondent's point of view should always be front and center in your mind during a questionnaire design. 2. How to write survey questions. Your questionnaire should only be as long as it needs to be, and every question needs to deliver value. That means your questions must each have ...

  12. Questionnaire on strategies for argumentative essay writing

    This test can help improve assessment and intervention in writing argumentative essays in college. Items with greater level of difficulty should be constructed and another field study should be ...

  13. Over 170 Prompts to Inspire Writing and Discussion

    During the 2020-21 school year, we asked 176 questions, and you can find them all below or here as a PDF. The questions are divided into two categories — those that provide opportunities for ...

  14. Frontiers

    The advantages of exploring these aspects of the Writing Strategies Questionnaire would be the possibility of capturing students' strategy preferences non ... M., Thomas, G. V., and Robinson, E. J. (2000). Individual differences in undergraduate essay writing strategies. A longitudinal study. High. Educ. 39, 181-200. doi: 10.1023/A ...

  15. PDF Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    writing questions and building the construct of the questionnaire. It also develops the demand to pre-test the questionnaire and finalizing the questionnaire to conduct the survey. Keywords: Questionnaire, Academic Survey, Questionnaire Design, Research Methodology I. INTRODUCTION A questionnaire, as heart of the survey is based on a set of

  16. English Writing Instruction Questionnaire: The development of a

    Based on genre-pedagogy and Bandura's theory of self-efficacy, a questionnaire consisting of two parts has been developed, one called English Writing Instruction Teaching (EWIT) and one called ...

  17. Questionnaire Design

    Questionnaires vs surveys. A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

  18. Analyse, Explain, Identify… 22 essay question words

    Since 2006, Oxbridge Essays has been the UK's leading paid essay-writing and dissertation service. We have helped 10,000s of undergraduate, Masters and PhD students to maximise their grades in essays, dissertations, model-exam answers, applications and other materials.

  19. PDF Investigating writing difficulties in essay writing: Tertiary ...

    data were obtained from the web-based questionnaire and semi-structured interview, then analyzed separately. Twenty-one undergraduate students ... Consequently, the essay writing course becomes a notable subject for students at the tertiary level. In the Indonesian context, higher education (HE) curriculum is highly required the ...

  20. 100 IELTS Essay Questions

    100 IELTS Essay Questions. Below are practice IELTS essay questions and topics for writing task 2. The 100 essay questions have been used many times over the years. The questions are organised under common topics and essay types. IELTS often use the similar topics for their essays but change the wording of the essay question.

  21. Our 15th Annual Summer Reading Contest

    Our 15th Annual Summer Reading Contest. Students are invited to tell us what they're reading in The Times and why, this year in writing OR via a 90-second video. Contest dates: June 7 to Aug. 16. +.

  22. NPR Editor Uri Berliner suspended after essay criticizing network : NPR

    NPR suspended senior editor Uri Berliner for five days without pay after he wrote an essay accusing the network of losing the public's trust and appeared on a podcast to explain his argument. Uri ...

  23. Scribbr

    Get expert help from Scribbr's academic editors, who will proofread and edit your essay, paper, or dissertation to perfection. Proofreading Services. ... You're not alone. Together with our team and highly qualified editors, we help you answer all your questions about academic writing. Open 24/7 - 365 days a year. Always available to help ...

  24. Gig workers are writing essays for AI to learn from

    Companies are hiring highly educated gig workers to write training content for AI models. The shift toward more sophisticated trainers comes as tech giants scramble for new data sources. AI could ...

  25. Jamaica teen takes top prize in NYPD essay-writing contest

    A Jamaica teen was honored by the NYPD at One Police Plaza this week for her essay-writing ability. Tina Perumal, 18, bested 300 teens vying for an award in the Police Athletic League-NYPD annual ...

  26. What Sentencing Could Look Like if Trump Is Found Guilty

    Bragg is arguing that the cover-up cheated voters of the chance to fully assess Mr. Trump's candidacy. This may be the first criminal trial of a former president in American history, but if ...

  27. The Trump Trial's Extraordinary Opening

    That's how the first couple of days of the trial in the People of the State of New York v. Donald J. Trump, Indictment No. 71543-2023, felt much of the time. Ordinary—despite being so ...

  28. NPR editor Uri Berliner resigns with blast at new CEO

    Uri Berliner resigned from NPR on Wednesday saying he could not work under the new CEO Katherine Maher. He cautioned that he did not support calls to defund NPR. NPR senior business editor Uri ...

  29. PDF ESLP 182 Questionnaire: Self-Assessment of English Writing and Grammar

    Self-Assessment of English Writing and Grammar, Punctuation, & Mechanics Skills and Use of Writing & Editing Strategies Please rate your abilities for each item below a scale between 1 to 5.

  30. ‎AI Writer : Write Email, Essay on the App Store

    The process is simple: all you have to do is input your topic, select the type of essay you need, and let our AI technology do the rest. Our AI algorithms will analyze your topic and generate a comprehensive essay that is tailored to your specific needs. 【Writing Features】. - Articles and Outlines: Intelligently generates articles and their ...