What Is the SAT Essay?

College Board, February 28, 2024

The SAT Essay section is a lot like a typical writing assignment: you’re asked to read and analyze a passage, then produce an essay in response to a single prompt about that passage. It gives you the opportunity to demonstrate your reading, analysis, and writing skills, all critical to success in college and career, and the scores you get back will give you insight into your strengths in these areas as well as any areas you may still need to work on.

The Essay section is only available in certain states where it’s required as part of SAT School Day administrations. If you’re going to be taking the SAT during school, ask your counselor if it will include the Essay section. If it’s included, the Essay section will come after the Reading and Writing and Math sections and will add an additional 50 minutes.

What You’ll Do

  • Read a passage between 650 and 750 words in length.
  • Explain how the author builds an argument to persuade an audience.
  • Support your explanation with evidence from the passage.

You won’t be asked to agree or disagree with a position on a topic or to write about your personal experience.

The Essay section shows how well you understand the passage and are able to use it as the basis for a well-written, thought-out discussion. Your score will be based on three categories.

Reading: A successful essay shows that you understood the passage, including the interplay of central ideas and important details. It also shows an effective use of textual evidence.

Analysis: A successful essay shows your understanding of how the author builds an argument by:

  • Examining the author’s use of evidence, reasoning, and other stylistic and persuasive techniques
  • Supporting and developing claims with well-chosen evidence from the passage

Writing: A successful essay is focused, organized, and precise, with an appropriate style and tone, varied sentence structure, and adherence to the conventions of standard written English.

Learn more about how the SAT Essay is scored.

Want to practice? Log in to the Bluebook™ testing application, go to the Practice and Prepare section, and choose a full-length practice test. There are 3 practice Essay tests. Once you submit your response, go to MyPractice.Collegeboard.org, where you’ll see your essay, a scoring guide and rubric so that you can score yourself, and student samples at various score points so you can compare your self-scored essay with others at the same level.

After the Test

You’ll get your Essay score the same way you’ll get your scores for the Reading and Writing and Math sections. If you choose to send your SAT scores to colleges, your Essay score will be reported along with your other section scores from that test day. Even though Score Choice™ allows you to choose which day’s scores you send to colleges, you can never send only some scores from a certain test day. For instance, you can’t choose to send Math scores but not SAT Essay scores.

The SAT Essay was also offered as an optional section on weekend SAT administrations until it was discontinued in 2021.

If you don’t have the opportunity to take the SAT Essay section as part of the SAT, don’t worry. There are other ways to show your writing skills as part of the work you’re already doing on your path to college. The SAT can help you stand out on college applications, as it continues to measure the writing and analytical skills that are essential to college and career readiness. And, if you want to demonstrate your writing skills even more, you can also consider taking an AP English course.



The SAT Essay

Written by tutor Ellen S.

The SAT has undergone a significant number of changes over the years, generally involving adjustments to the scoring rubric, often in response to steadily declining or increasingly perfect test scores. In 2005, however, the College Board made some significant changes to the test that students see. One of these changes was the addition of the writing section, based on the former SAT II writing subject test, which includes a timed essay. In including a timed essay on an otherwise multiple-choice test, the SAT throws a problem at students that they are generally unprepared to solve.

Because high school classes usually don’t cover timed essays, students can have difficulty when faced with the SAT essay. You’ll need a different set of skills to tackle the SAT essay, and ideally dedicated time to practice those skills. In this lesson I’ll give you an overview of the differences between timed essays and at-home essays, and share my tips for successfully completing a well-organized, well-thought-out SAT essay.

The Difference Between the SAT Essay and At-Home Essays

First, the differences. In a timed essay, you’re given the prompt on the spot rather than having an idea of what the topic will be beforehand, as you would if you were writing an essay for an English class. On the SAT, you get one prompt and one prompt only, so you don’t even have the benefit of choosing one that works for you – you have to write about whatever they give you. In addition, you’re writing everything out longhand, which eats up more time than you might think and makes it harder to make edits and corrections – particularly if you have bad handwriting and you’re worried about staying legible. And just forget about rearranging paragraphs and reorganizing whole sentences – you’ll never have time for that!

All of this means that you have to be much more organized right from the get-go than you would be in a natural writing process. You’ll need to read the question, think for a few moments, and then immediately form an opinion so you can start the actual writing as soon as possible. So for all timed essays, and the SAT essay in particular, I strongly emphasize the importance of prewriting. Prewriting can take many forms, from word clouds to concept nets, but for the SAT, I recommend the basic straightforward outline – with a few tweaks. Here’s my formula for SAT essay outlines.

How to Outline Your Essay

First, read the prompt through a couple of times. SAT essay prompts usually follow a set format involving the statement of an opinion, and then asking whether you agree or disagree with that opinion. Let’s take an example from the January 2014 test date, courtesy of the College Board website:

Some see printed books as dusty remnants from the preelectronic age. They point out that electronic books, or e-books, cost less to produce than printed books and that producing them has a much smaller impact on natural resources such as trees. Yet why should printed books be considered obsolete or outdated just because there is something cheaper and more modern? With books, as with many other things, just because a new version has its merits doesn’t mean that the older version should be eliminated.

Assignment: Should we hold on to the old when innovations are available, or should we simply move forward? Plan and write an essay in which you develop your point of view on this issue. Support your position with reasoning and examples taken from your reading, studies, experience, or observations. (Source)

The first thing I recommend when confronted with an SAT essay prompt is to ask yourself the question “Do I agree or disagree with the premise of the prompt?” That’ll usually be the last sentence of the first paragraph in the prompt. In this case, do you agree that “just because a new version has its merits doesn’t mean that the older version should be eliminated”? Now write the phrase “I agree” or “I disagree” at the top of your scratch paper accordingly. Put some asterisks around it so you remember to keep checking back in with it during the writing. This opinion is the most important part of your essay, so you want it to be clear in your mind. Next, ask yourself “Why do I agree?” or “Why do I disagree?” The first sentence you say to yourself in response to that question is your rough thesis statement. Jot that down under the first phrase. So, my response to our example would look like this:

*I agree*
While the new version might have its merits, the original often has merits of its own.

Again, this is very rough at this stage, but on the SAT you’re trying to prewrite fast, so don’t worry too much about that. On to the body paragraphs!

On a 25-minute essay, you probably won’t have enough time for a full five-paragraph structure with three sub-examples for each point. Two body paragraphs with two examples each will suffice. You never want to rely on just a single example, though, or you’ll likely lose points for not supporting your statements enough. Write out a template for the body of your essay that looks like this:

I. Main point 1
  A. Example 1
  B. Example 2
II. Main point 2
  A. Example 1
  B. Example 2

Remember, it’s an outline, so no full sentences. Write only as much as you need to remind yourself of your points. So for our example, my outline would look like this:

I. The “tangible” aspects
  A. A book never runs out of battery
  B. Can read it in the sun, by the pool, or in the bathtub – places you wouldn’t want to take a piece of electronics
II. The “non-tangible” aspects
  A. The smell of a new book, the tactile sense of turning pages, the experience of closing it when you finish
  B. The ability to get lost in a book, to lose sense of place and become the story

At this point I can see a slight revision I’d make to my original thesis statement, which is the idea that an e-book can never mimic the tactile experience of reading (smelling the book, turning pages, etc.). I’ll quickly adjust my thesis to say:

While the new version might have its merits, the original offers a tactile experience that the new can’t hope to achieve – an experience that can’t be mimicked by technology.

Perfect. All told, your prewriting should have taken you 3 to 5 minutes, most of which was thinking. Now, on to the paper itself!

Writing Your Essay

Okay, here’s my biggest timed-essay secret: don’t start with the introduction. Skip five or six lines down the page, leaving space for an introduction that will be inserted later, and start with your first body paragraph. Work from your outline, converting your points into full sentences and connecting them with transitions, and you’ll be off to a good start. Once both body paragraphs are written, continue on and write your conclusion. Then, go back and write your introduction in the space you left at the beginning. That way, you’ll know what you’re introducing since it’s already written.

I generally recommend about 15 minutes of writing time for the body paragraphs, followed by 5 minutes for the intro and conclusion. Depending on how quickly you got your prewriting done, that leaves you with one or two minutes to look it over, fixing any spelling mistakes or sloppy handwriting. Don’t try to change too much, though – when you’re writing everything out longhand, changes require erasing. We do so much writing on computers these days that sometimes we forget how long it takes to erase a whole sentence and rewrite it. A better tactic is to think through each sentence in your head before you write it down, making sure you have it phrased the way you want it before you put pencil to paper. But don’t linger too long on any one sentence – try it a few times and you’ll find that writing four full paragraphs longhand actually takes about 25 minutes – on a good day. You should expect to be writing pretty much continuously for the entire 25 minutes.

Keeping Track of Time, Staying Comfortable, and More Advice

Speaking of which, when you practice your timed essays, pay attention to how your hand feels while you’re writing. The first few times you’ll likely be sore; your hand might even cramp up from writing so hard. It’s tiring to write for that long, so make sure you’re helping yourself. Write lightly on the paper – it’s easy to start pressing down super hard when you’re nervous and panicking. Writing lightly will not only help stave off the hand cramps, it’ll also make erasing much easier when you need to do it. Sit back in your chair while you write – you don’t need to be three inches from your paper to see the words you’re putting down, and hunching over will just make you press harder. Bring your attention to your breathing – are you holding your breath? Why? Try breathing deeply and slowly while you write – it’ll calm your brain and help you think.

Finally, a word about the writing itself – don’t forget you’re on a clock here. Often, you’ll notice as you write that your opinion about the topic is evolving, changing, developing nuances and side areas you want to explore. I know this sounds weird, but you’ve got to try to rein that in – those are all fine things to be thinking about ordinarily, and in an at-home essay I’d say go for it, but you don’t have time to change what you’re writing about in this situation. Sometimes, you’ll even get halfway through a timed essay and realize that you actually don’t agree like you thought you did. Save that thought for later. You’ve got the outline of an organized essay, and that’s what you should be writing. It doesn’t matter at this point if you actually still agree with what you’re saying; all that matters is that you state a clear opinion and communicate it well. After all, the test grader doesn’t even know you – how’s she to know that you don’t really think this anymore? Stay confident and get your original idea out on paper.

For example, the outline I gave above is a perfectly accurate depiction of my opinion on the topic – as it relates to books. However, if we were to start talking about, say, writing essays…I’d probably say that no, I don’t think we should hold on to writing essays out by hand when there are computers available. After all, I’m writing this article on a computer. I’ve copied and pasted multiple paragraphs of information back and forth around this lesson as I was looking for appropriate ways to introduce concepts, and that would have taken forever if I had been writing by hand. But if that thought had occurred to me midway through writing my timed essay about books, I would have acknowledged it for the briefest of moments and then disregarded it. My essay is about books. I’ll just stick to that so I can keep it clean and organized.

Don’t worry about the test graders thinking “But what about X?” – they know you only had 25 minutes and can’t possibly fit every aspect of the argument into that amount of time – or space, for that matter. The scoring rubric focuses on what is present, not what is omitted. As long as you have a clear point of view and are communicating it well, you’ll fulfill their criteria. Remember, this essay’s not in the critical reading section, it’s in the writing section. They’re not in the business of judging the merits of your opinion, just how clearly you’ve communicated it and how well you’ve supported it.

Your timed essays will probably turn out very different from the essays you write at home for class. They might seem stiff, straightforward, or brusque; with a limited amount of time, you can’t create the subtle, nuanced arguments that your English teachers are probably looking for. But what you can do is create a well-organized, concise presentation of a relatively straightforward point of view, supported by concrete examples that all point toward the same central concept. The SAT essay responds well to a formulaic approach, so while it may take some practice, you will eventually be able to handle a 25-minute essay prompt with confidence.



SAT Essay Prompts (10 Sample Questions)

What does it take to get a high SAT Essay score, or even a perfect one? Practice, practice, and more practice! Know the tricks and techniques of writing the perfect SAT Essay so that you can score well too. That’s not a far-off idea, because there actually is a particular “formula” for perfecting the SAT Essay. Every prompt follows the same format, and what test-takers are required to do remains the same, even though the passage varies from test to test.

The SAT Essay test will ask you to read an argument that is intended to persuade a general audience. You’ll need to discuss how effectively the author argues their point: analyze the author’s argument and write a cohesive, structured essay that explains your analysis.

On this page, we feature 10 real SAT Essay prompts that have been recently released online by the College Board. You can use them as 10 sample SAT Essay questions for easy practice. This set of SAT Essay prompts is the most comprehensive you will find online today.

The predictability of the SAT Essay test means students can apply an organized, analytical method of writing instead of thinking up ideas from scratch on test day. What you will see before and after the passage remains consistent. It is recommended that you first read and apply the techniques suggested in our SAT Essay overview before using the following essay prompts for practice.


10 Official SAT Essay Prompts For Practice

Practice Test 1

“Write an essay in which you explain how Jimmy Carter builds an argument to persuade his audience that the Arctic National Wildlife Refuge should not be developed for industry.”

Practice Test 2

“Write an essay in which you explain how Martin Luther King Jr. builds an argument to persuade his audience that American involvement in the Vietnam War is unjust.”

Practice Test 3

“Write an essay in which you explain how Eliana Dockterman builds an argument to persuade her audience that there are benefits to early exposure to technology.”

Practice Test 4

“Write an essay in which you explain how Paul Bogard builds an argument to persuade his audience that natural darkness should be preserved.”

Practice Test 5

“Write an essay in which you explain how Eric Klinenberg builds an argument to persuade his audience that Americans need to greatly reduce their reliance on air-conditioning.”

Practice Test 6

“Write an essay in which you explain how Christopher Hitchens builds an argument to persuade his audience that the original Parthenon sculptures should be returned to Greece.”

Practice Test 7

“Write an essay in which you explain how Zadie Smith builds an argument to persuade her audience that public libraries are important and should remain open.”

Practice Test 8

“Write an essay in which you explain how Bobby Braun builds an argument to persuade his audience that the US government must continue to invest in NASA.”

Practice Test 9

“Write an essay in which you explain how Richard Schiffman builds an argument to persuade his audience that Americans need to work fewer hours.”

Practice Test 10

“Write an essay in which you explain how Todd Davidson builds an argument to persuade his audience that the US government must continue to fund national parks.”


What Is An Example Of An SAT Essay That Obtained A Perfect Score?

Here is an example of how a perfect SAT Essay written in response to Practice Test 4 above looks. It was published on the College Board website.

Answer Essay with Perfect Score:

In response to our world’s growing reliance on artificial light, writer Paul Bogard argues that natural darkness should be preserved in his article “Let There be dark”. He effectively builds his argument by using a personal anecdote, allusions to art and history, and rhetorical questions.

Bogard starts his article off by recounting a personal story – a summer spent on a Minnesota lake where there was “woods so dark that [his] hands disappeared before [his] eyes.” In telling this brief anecdote, Bogard challenges the audience to remember a time where they could fully amass themselves in natural darkness void of artificial light. By drawing in his readers with a personal encounter about night darkness, the author means to establish the potential for beauty, glamour, and awe-inspiring mystery that genuine darkness can possess. He builds his argument for the preservation of natural darkness by reminiscing for his readers a first-hand encounter that proves the “irreplaceable value of darkness.” This anecdote provides a baseline of sorts for readers to find credence with the author’s claims.

Bogard’s argument is also furthered by his use of allusion to art – Van Gogh’s “Starry Night” – and modern history – Paris’ reputation as “The City of Light”. By first referencing “Starry Night”, a painting generally considered to be undoubtedly beautiful, Bogard establishes that the natural magnificence of stars in a dark sky is definite. A world absent of excess artificial light could potentially hold the key to a grand, glorious night sky like Van Gogh’s according to the writer. This urges the readers to weigh the disadvantages of our world consumed by unnatural, vapid lighting. Furthermore, Bogard’s alludes to Paris as “the famed ‘city of light’”. He then goes on to state how Paris has taken steps to exercise more sustainable lighting practices. By doing this, Bogard creates a dichotomy between Paris’ traditionally alluded-to name and the reality of what Paris is becoming – no longer “the city of light”, but moreso “the city of light…before 2 AM”. This furthers his line of argumentation because it shows how steps can be and are being taken to preserve natural darkness. It shows that even a city that is literally famous for being constantly lit can practically address light pollution in a manner that preserves the beauty of both the city itself and the universe as a whole

Finally, Bogard makes subtle yet efficient use of rhetorical questioning to persuade his audience that natural darkness preservation is essential. He asks the readers to consider “what the vision of the night sky might inspire in each of us, in our children or grandchildren?” in a way that brutally plays to each of our emotions. By asking this question, Bogard draws out heartfelt ponderance from his readers about the affecting power of an untainted night sky. This rhetorical question tugs at the readers’ heartstrings; while the reader may have seen an unobscured night skyline before, the possibility that their child or grandchild will never get the chance sways them to see as Bogard sees. This strategy is definitively an appeal to pathos, forcing the audience to directly face an emotionally-charged inquiry that will surely spur some kind of response. By doing this, Bogard develops his argument, adding gutthral power to the idea that the issue of maintaining natural darkness is relevant and multifaceted.

Writing as a reaction to his disappointment that artificial light has largely permeated the prescence of natural darkness, Paul Bogard argues that we must preserve true, unaffected darkness. He builds this claim by making use of a personal anecdote, allusions, and rhetorical questioning.


This response scored a 4/4/4.

Reading—4: This response demonstrates thorough comprehension of the source text through skillful use of paraphrases and direct quotations. The writer briefly summarizes the central idea of Bogard’s piece (natural darkness should be preserved; we must preserve true, unaffected darkness), and presents many details from the text, such as referring to the personal anecdote that opens the passage and citing Bogard’s use of Paris’ reputation as “The City of Light.” There are few long direct quotations from the source text; instead, the response succinctly and accurately captures the entirety of Bogard’s argument in the writer’s own words, and the writer is able to articulate how details in the source text interrelate with Bogard’s central claim. The response is also free of errors of fact or interpretation. Overall, the response demonstrates advanced reading comprehension.

Analysis—4: This response offers an insightful analysis of the source text and demonstrates a sophisticated understanding of the analytical task. In analyzing Bogard’s use of personal anecdote, allusions to art and history, and rhetorical questions, the writer is able to explain carefully and thoroughly how Bogard builds his argument over the course of the passage. For example, the writer offers a possible reason for why Bogard chose to open his argument with a personal anecdote, and is also able to describe the overall effect of that choice on his audience (In telling this brief anecdote, Bogard challenges the audience to remember a time where they could fully amass themselves in natural darkness void of artificial light. By drawing in his readers with a personal encounter…the author means to establish the potential for beauty, glamour, and awe-inspiring mystery that genuine darkness can possess…. This anecdote provides a baseline of sorts for readers to find credence with the author’s claims). The cogent chain of reasoning indicates an understanding of the overall effect of Bogard’s personal narrative both in terms of its function in the passage and how it affects his audience. This type of insightful analysis is evident throughout the response and indicates advanced analytical skill.

Writing—4: The response is cohesive and demonstrates highly effective use and command of language. The response contains a precise central claim (He effectively builds his argument by using personal anecdote, allusions to art and history, and rhetorical questions), and the body paragraphs are tightly focused on those three elements of Bogard’s text. There is a clear, deliberate progression of ideas within paragraphs and throughout the response. The writer’s brief introduction and conclusion are skillfully written and encapsulate the main ideas of Bogard’s piece as well as the overall structure of the writer’s analysis. There is a consistent use of both precise word choice and well-chosen turns of phrase (the natural magnificence of stars in a dark sky is definite, our world consumed by unnatural, vapid lighting, the affecting power of an untainted night sky). Moreover, the response features a wide variety in sentence structure and many examples of sophisticated sentences (By doing this, Bogard creates a dichotomy between Paris’ traditionally alluded-to name and the reality of what Paris is becoming – no longer “the city of light”, but moreso “the city of light…before 2 AM”). The response demonstrates a strong command of the conventions of written English. Overall, the response exemplifies advanced writing proficiency.



SAT® Writing Practice Tests and Questions

Our SAT® Writing Practice Tests and Questions are written by subject matter experts to meet or exceed exam-level difficulty, because we believe that if practice feels like the actual exam, the real thing will feel like practice. Not getting something? We’ve got you covered with in-depth answer explanations and vivid illustrations that make hard stuff easy to understand.

* Digital SAT Writing Practice Bank available July 2023!

*The Reading and Writing sections of the SAT will be combined in the US starting March 2024


SAT Writing Sample Questions

Select a question sample.

A certain number of mandatory volunteer service hours are required for many high school students to graduate. Such service, be it serving meals at a soup kitchen, when they create crafts with kids at the library, or helping people at a senior citizens’ home, has received a lot of attention and backlash.

Given its definition, volunteer work should be something that people want to do. “To call mandatory community service ‘volunteering’ is a problem because then we begin to confuse the distinction between an activity that is freely chosen and something that is obligatory,” says Linda Graff, president of an international consulting firm. Ruth MacKenzie, former president and CEO of Volunteer Canada, voices similar thoughts: “The mandated nature means this is not really volunteering, and the fear…is that forcing kids to volunteer…might turn them off the concept for the rest of their lives.”

But what about the positives? Research reveals no negative impacts from education administrators’ removing a high schoolers option about whether to volunteer. Supporters of required volunteering who are in favor of it also point to significant research that proves the younger individuals become involved in volunteering, the more likely they are to be lifelong volunteers who care about others, make positive contributions to the community, and have less time for themselves.

Is forcing students to volunteer different from forcing them to learn proper language or science skills? All these skills help define students’ knowledge base and even effect the attitudes that students will carry with them throughout their lives. However, high school does more than prepare students for further education, it also helps with social interaction, equips them for problem-solving in all aspects of life, and often directs students down a lifelong path—career or otherwise.

One theory suggests a correlation between service learning and higher academic achievement. Also, many believe that students who volunteer acquire more transferable skills in a practical setting, making them more employable than the skills of those who lack real-world experience. For example, volunteering allows students to interact with people from other walks of life and to try a variety of tasks to see what they most enjoy. Employers know that the more someone volunteers, the more likely it is that the individual will be a hard worker. Another benefit is that many scholarships have volunteer-hour requirements or, at the very least, are awarded to students who are active participants in their community. Therefore, students who volunteer are much more likely to meet both their educational and career goals.

1. This work, “Forcing Students to Volunteer,” is a derivative of “Is Forcing Students to Volunteer A Good Idea?” in the July 3, 2015, blog on the Charity Republic website. Published with permission. “Forcing Students to Volunteer” is licensed by UWorld.

Also, many believe that students who volunteer acquire more transferable skills in a practical setting, making them more employable than the skills of those who lack real-world experience.

  • NO CHANGE
  • than students
  • than the skills gained by students
  • with volunteers

Explanation

Also, many believe that students who volunteer acquire more transferable skills in a practical setting, making them more employable than students who lack real-world experience.

Rule: Comparisons should be made between similar things: objects with objects, people with people.

For a sentence to make sense, comparisons should be made between like things. For example: Mechanics who graduate from trade school typically make more money than mechanics who don’t. The parallel structure of the repeated phrase “mechanics who” indicates that similar people are being compared.

Here, “students” parallels a phrase from earlier in the sentence, indicating that the students who volunteer are being compared to students “who lack real-world experience.” Because “than” is a word that indicates a comparison, the correct answer is than students.

(Choices A and C) Both of these answers incorrectly compare “students” (people) with “skills” (things). (Incorrect: The students know more than their skills. Correct: Our students have more skills than your students.)

(Choice D) This answer doesn’t contain a word that indicates a comparison. Instead, “with” indicates that when students are accompanied by volunteers who don’t have work experience, they are more likely to get a job, which isn’t logical.

Things to remember: Compare people to other people and things to other things. Also, look for an answer choice that provides parallel structure to the other part of the comparison. (Incorrect: He is a bigger rap music fan than country music. Correct: He is a bigger rap music fan than I am.)

A vampire is a thirsty thing, spreading metaphors like antigens, through its victim’s blood. It is a rare situation that is not metaphorically defamiliarized by the introduction of a vampiric motif, whether it be migration and industrial change in Dracula ; adolescent coming-of-age in Twilight ; or racism in True Blood . Beyond undead life and the knack of becoming a bat, the vampire’s true power is its ability to induce intense paranoia about the nature of social relations to ask, “Who are the real bloodsuckers?”

This is certainly the case with the first fully realized vampire story in English, John Polidori’s 1819 tale, “The Vampyre.” It is Polidori’s text that establishes the vampire as we know it. He reimagined the feral, mud-caked creatures of southeastern European legend as the elegantly magnetic denizens swarming all around the cosmopolitan assemblies and polite drawing rooms.

“The Vampyre” is a product of 1816, when Lord Byron left England in the wake of a disintegrating marriage and rumors of madness, to travel to the banks of Lake Geneva and there loiter with Percy and Mary Shelley: then still Mary Godwin. Polidori served as Byron’s travelling physician. He also played an active role in the summer’s tensions and rivalries. He also participated in the famous night of ghost stories that produced Mary Shelley’s “hideous progeny,” Frankenstein .

Like Frankenstein , “The Vampyre” draws extensively on the mood at Byron’s Villa Diodati. But whereas Mary Shelley incorporated the orchestral thunderstorms that illuminated the lake and the sublime mountain scenery that served as a backdrop to Victor Frankenstein’s struggles, Polidori’s text is woven from the invisible dynamics of the Byron-Shelley circle, and especially the humiliations he suffered at Byron’s hand.

The most overt example of Byron as the devourer of souls was a novel Polidori read over the course of the summer— Glenarvon by Lady Caroline Lamb. Byron and Lamb had enjoyed a brief affair until he, somewhat rattled, had called it off. That Polidori took inspiration from Lamb is revealed in the name he gives his villain—Lord Ruthven, one of Glenarvon’s various ancestral titles. Polidori’s Ruthven also inhabits Glenarvon ‘s aristocratic milieu as a member of the bon ton .

Rather than providing validation for his creative outlet, Polidori’s humiliation was only compounded by the publication of “The Vampyre.” Although the text was similarly prompted by the ghost story competition that inspired Mary Shelley so ably, but Polidori only completed his story for the pleasure of a friend outside of the Byron-Shelley circle. The manuscript lay forgotten for three years until finally coming into the hands of the disreputable journalist Henry Colburn, who reported it in his New Monthly Magazine under the title “The Vampyre: A Tale by Lord Byron.”

1. This work, “The Vampyre,” is a derivative of “The Poet, the Physician, and the Birth of the Modern Vampire,” by Andrew McConnell Stott, licensed under CC BY-SA 3.0 by UWorld.

It is a rare situation that is not metaphorically defamiliarized by the introduction of a vampiric motif, whether it be migration and industrial change in Dracula ; adolescent coming-of-age in Twilight ; or racism in True Blood .

  • NO CHANGE
  • Dracula, adolescent coming-of-age in Twilight,
  • Dracula; adolescent coming-of-age in Twilight,
  • Dracula, adolescent coming-of-age in Twilight;

Rule: Commas separate phrases within a list.

Look for the pattern of listed items to help you determine where one ends and the next begins.


This list contains a noun followed by a prepositional phrase. Because “adolescent coming-of-age” describes “Twilight” instead of “Dracula,” a comma is needed to show where the first listed item ends and the next begins. Likewise, the conjunction “or” indicates where the second listed item ends, so there needs to be a comma after Twilight. Therefore, Dracula, adolescent coming-of-age in Twilight, is the correct answer.

(Choices A, C, and D) Semicolons separate items in a list when the listed items already contain commas. (Ex. Ed bought apples, oranges, and grapes at Kroger; meat, dairy, and fish at Albertsons; and toothpaste, soap, and razors at Walgreens.) However, the items in this list (ex. “migration and industrial change”) don’t already contain commas. On the SAT, commas, not semicolons, are usually used to separate items in lists.

Things to remember: Use commas to separate items in a list when those items contain no other punctuation.

Two dancers in wheelchairs faced each other, raising their arms in intricate patterns. Others incorporated crutches or a chair into their actions. The dancers, bailing from around the world, came together for a week in June for UCLA’s Dancing Disability Lab, which was hosted by the world arts and cultures/dance group and lasted for seven days. This lab is a cross-disciplinary initiative designed to reframe again cultural understanding and practices around the concept of disability through classes and community engagement. Each lab builds and strengthens networks of university faculty, staff, and students so that community leaders can transform the discourse and awareness surrounding disability.

Mel Chua, a biomedical engineering student at Georgia Tech, said she was hesitant to apply for the program because she assumed that her previous dance training wasn’t advanced enough. But Chua came to realize that the reason she felt negligibly unqualified was that, as a deaf person, she had never had access to dance training like what she had experienced at the Disability Lab. A first for Chua and many other dancers was getting to dance with a group of exclusively disabled dance artists. Instead of being the only disabled person in the class, feeling graded by disability, or having to translate choreography designed for nondisabled dancers, the participants were united in how they each expanded dancing conventions. Being in a dance workshop where everyone has a disability was empowering and eye-opening for all the dancers.

That said, Chua emphasized that she is not looking to expose others or receive sympathy for the challenges she faces. Although the idea of inclusion often focuses on bringing disabled and nondisabled people together, Chua believes it’s important for disabled people to have spaces that are just their own. The lab gives disabled artists a chance to be heard and seen differently than some might be accustomed to—a necessary step in cinching that nondisabled persons will be allies and provide ongoing support for equal access and inclusion.

1. This work, “Program for Disabled Dancers,” is a derivative of “Disabled dancers learn to redefine the aesthetics of movement at UCLA” by Robin Migdol in UCLA’s Newsroom on September 5, 2019, and used with permission. “Program for Disabled Dancers” is licensed by UWorld.

Although the idea of inclusion often focuses on bringing disabled and nondisabled people together, Chua believes it’s important for disabled people to have spaces that are just their own.

  • NO CHANGE
  • and which
  • to which
  • of which

Determine the right word or phrase by seeing which one makes sense in context.

In general, “that” introduces a clause that describes the noun immediately before it. (Ex. My parents have a lake house that we enjoy on the weekends.) Because “are just their own” describes “spaces,” the correct answer is “that” or NO CHANGE.

(Choice B) “And which” is used when one descriptive clause follows another one. (Ex. Tomorrow is the day of my test, which I’ve been dreading and which I must pass to graduate.) Without another descriptive clause, there’s nothing that the “and which” can follow.

(Choices C and D) Both “to which” and “of which” are prepositional phrases, which would make “which” the object of a preposition. Because prepositional objects can’t be the subject of a clause, the dependent clause would be left without a subject.

Things to remember: “That” introduces an essential clause describing the noun immediately before it.

Rather than providing validation for his creative outlet, Polidori’s humiliation was only compounded by the publication of “The Vampyre.”

  • NO CHANGE
  • the publication of “The Vampyre” only served to compound Polidori’s humiliation.
  • the humiliation of Polidori was only compounded when “The Vampyre” was published.
  • the compounding of Polidori’s humiliation happened with the publication of “The Vampyre.”

Rule: The first noun following an introductory phrase should be the person or thing that phrase describes.

Look at the first noun of each answer choice to see which one could be described by the introductory information.


Based on the introductory phrase, “Rather than providing validation for his creative outlet,” ask yourself: which answer choice begins with a word that could provide validation for Polidori’s creative outlet? The answer is the publication of his story. The correct answer, then, is the choice in which “publication” is the first noun: the publication of “The Vampyre” only served to compound Polidori’s humiliation.

(Choices A, C, and D) None of these answers begin with a noun that could provide a creative outlet.

  • Choices A and C: The first noun in both these answers is “humiliation” (extreme embarrassment), which wouldn’t provide a creative outlet. “Polidori” is mentioned in Choice A to describe whose humiliation is referenced.
  • Choice D: “Compounding” (making something worse) functions as a gerund and the first noun after the introductory phrase. When something is made worse, it doesn’t make sense that it also provides a creative outlet, which is generally positive.

Things to remember: To communicate clearly, the first noun after an introductory phrase should be what is described by that phrase.

YouTube artist Jon Cozart asks, “Do you ever wonder why Disney tales all end in lies?” in his 2013 musical parody. Cozart responds to the question with a catchy and humorous, but slightly shocking, series of answers about what he thinks could have happened after Ariel, Jasmine, Belle, and Pocahontas experienced their “happily ever afters.” The medley was published on a musical video-sharing site, where it has been watched over 61 million times since its publication. The video— titled “After Ever After,” reimagines these four self-aware Disney princesses in our real world and speculates about how they would handle this harsh and difficult reality.

This formulaic ending for protagonists of the fairy tale has been persuasive to the genre. The development of postmodernism and feminism in recent decades has resulted in an audience that is less willing to accept that standard and unsatisfying conclusion. Due to this dissatisfaction, revisionist versions of classic stories have become popular. A combination of fairy tale scholarship, new media and amateur media studies, folklore, and cultural studies adds to the analysis of this form of fairy tale revision, which aligns to the more realistic world view reflected in Cozart’s video.

While there has recently been a surge in fairy tale retellings through television shows, movies, and books to meet this contemporary demand, the feminist views emerging on the videos of YouTube.com allow an individual to create and broadcast material to a worldwide audience from the comfort of his or her own home. Being comfortable is important to making popular movies.

Cozart parodies the plots of four animated Disney movies with recognizable music from the original films. Many find this compilation to be artistic, humorous, and extremely catchy. Others question whether the familiar characters and unique presentation might satirically critique the politics, environmentalism, racism, and colonialism of Western society. Cozart’s perspective as the creator is that of a young American male, but his audience is expanded by the content of his parody and the platform through which the material was produced.

This case study, of Cozart’s first “After Ever After” video examines the use of Disney heroines as spokespersons of Cozart’s digital parody, which can be considered quite funny to some people but very offensive to others. Cozart is one of many who comment on society by making use of “the end” as a new beginning. In doing so, he retains some aspects of “classic Disney” while subverting much of the sense of wonder that gives the original genre its name.

1. This work is a derivative of "’After Ever After’: Social Commentary through a Satiric Disney Parody for the Digital Age" by Kylie Schroeder published in Humanities 2016, 5(3), 63; doi:10.3390/h5030063, an Open Access document. Licensed under CC BY 4.0 by UWorld.

This case study, of Cozart’s first “After Ever After” video examines the use of Disney heroines as spokespersons of Cozart’s digital parody, which can be considered quite funny to some people but very offensive to others.   Cozart is one of many who comment on society by making use of “the end” as a new beginning.  In doing so, he retains some aspects of “classic Disney” while subverting much of the sense of wonder that gives the original genre its name.

  • NO CHANGE
  • which question how these four young girls contributed to the lies.
  • which functions as social, historical, political, and environmental commentary.
  • although his motivations for changing these stories is not really clear.

P6:  This case study, of Cozart’s first “After Ever After” video examines the use of Disney heroines as spokespersons of Cozart’s digital parody, which functions as social, historical, political, and environmental commentary.   Cozart is one of many who comment on society by making use of “the end” as a new beginning.  In doing so, he retains some aspects of “classic Disney” while subverting much of the sense of wonder that gives the original genre its name.

Reread the paragraph and summarize what is being discussed: this will be the main point of the paragraph. Select the answer choice that has a similar idea.

This paragraph discusses how Cozart’s Disney heroines are used as spokespersons to comment on society and to subvert (ruin) the sense of wonder that Disney movies often portray. The main point of the paragraph, then, is that Cozart’s video is commenting on various aspects of society’s culture. Therefore, the correct answer is the one that says that Cozart’s digital parody functions as social, historical, political, and environmental commentary.

(Choice A) Although people’s reactions to parodies do vary, that difference doesn’t reflect the main point of the rest of the paragraph.

(Choice B) P1, not P6, focuses on how fairy tales end in lies, but it does not discuss how these four young women might have “contributed to the lies.”

(Choice D) The paragraph doesn’t reflect on Cozart’s reasons for making the videos, so anything about his motivation for making them not being clear isn’t relevant to what’s being discussed.

Things to remember: Determine the main point of the paragraph you’re trying to support and choose the answer with related information.


SAT Writing and Language: Practice tests and explanations

The SAT Writing and Language Test consists of 44 multiple-choice questions that you’ll have 35 minutes to complete. The questions are designed to test your knowledge of grammatical and stylistic topics.

The SAT Writing and Language questions ask about a variety of grammatical and stylistic topics. If you like to read and/or write, the SAT may frustrate you a bit because it may seem to boil writing down to a couple of dull rules.

  • 30 SAT Grammar Practice Tests

SAT Writing and Language Practice Tests

  • SAT Writing and Language Practice Test 1
  • SAT Writing and Language Practice Test 2
  • SAT Writing and Language Practice Test 3
  • SAT Writing and Language Practice Test 4
  • SAT Writing and Language Practice Test 5
  • SAT Writing and Language Practice Test 6
  • SAT Writing and Language Practice Test 7
  • SAT Writing and Language Practice Test 8
  • New SAT Writing and Language Practice Test 9
  • New SAT Writing and Language Practice Test 10
  • New SAT Writing and Language Practice Test 11
  • New SAT Writing and Language Practice Test 12
  • New SAT Writing and Language Practice Test 13
  • New SAT Writing and Language Practice Test 14
  • New SAT Writing and Language Practice Test 15
  • New SAT Writing and Language Practice Test 16
  • New SAT Writing and Language Practice Test 17
  • New SAT Writing and Language Practice Test 18
  • New SAT Writing and Language Practice Test 19
  • SAT Writing and Language Practice Test: A Sweet Discovery
  • SAT Writing and Language Practice Test: René Descartes: The Father of Modern Philosophy
  • SAT Writing and Language Practice Test: The Novel: Introspection to Escapism
  • SAT Writing and Language Practice Test: Interning: A Bridge Between Classes and Careers
  • SAT Writing and Language Practice Test: In Defense of Don Quixote
  • SAT Writing and Language Practice Test: Women's Ingenuity
  • SAT Writing and Language Practice Test: Working from Home: Too Good to Be True?
  • SAT Writing and Language Practice Test: Is Gluten-Free the Way to Be?
  • SAT Writing and Language Practice Test: Antarctic Treaty System in Need of Reform
  • SAT Writing and Language Practice Test: Finding Pluto
  • SAT Writing and Language Practice Test: Public Relations: Build Your Brand While Building for Others
  • SAT Writing and Language Practice Test: Film, Culture, and Globalization
  • SAT Writing and Language Practice Test: Vitamin C—Essential Nutrient or Wonder Drug?
  • SAT Writing and Language Practice Test: The Familiar Myth
  • SAT Writing and Language Practice Test: America's Love for Streetcars
  • SAT Writing and Language Practice Test: Educating Early
  • SAT Writing and Language Practice Test: The Age of the Librarian
  • SAT Writing and Language Practice Test: Unforeseen Consequences: The Dark Side of the Industrial Revolution
  • SAT Writing and Language Practice Test: Remembering Freud
  • SAT Writing and Language Practice Test: Success in Montreal
  • SAT Writing and Language Practice Test: Sorting Recyclables for Best Re-Use
  • SAT Writing and Language Practice Test: Interpreter at America's Immigrant Gateway
  • SAT Writing and Language Practice Test: Software Sales: A Gratifying Career
  • SAT Writing and Language Practice Test: The Art of Collecting
  • SAT Writing and Language Practice Test: The UN: Promoting World Peace
  • SAT Writing and Language Practice Test: DNA Analysis in a Day
  • SAT Writing and Language Practice Test: Will You Succeed with Your Start-Up?
  • SAT Writing and Language Practice Test: Edgard Varèse's Influence
  • SAT Writing and Language Practice Test: From Here to the Stars
  • SAT Writing and Language Practice Test: The UK and the Euro
  • SAT Writing and Language Practice Test: Coffee: The Buzz on Beans
  • SAT Writing and Language Practice Test: Predicting Nature's Light Show
  • New SAT Writing and Language Practice Test 20
  • New SAT Writing and Language Practice Test 21
  • New SAT Writing and Language Practice Test 22
  • New SAT Writing and Language Practice Test 23
  • New SAT Writing and Language Practice Test 24
  • New SAT Writing and Language Practice Test 25
  • New SAT Writing and Language Practice Test 26
  • New SAT Writing and Language Practice Test 27
  • New SAT Writing and Language Practice Test 28
  • New SAT Writing and Language Practice Test 29
  • New SAT Writing and Language Practice Test 30
  • New SAT Writing and Language Practice Test 31
  • SAT Writing and Language Practice Test: Physician Assistants
  • SAT Writing and Language Practice Test: Maria Montessori
  • SAT Writing and Language Practice Test: Platonic Forms
  • SAT Writing and Language Practice Test: The Eureka Effect
  • SAT Writing and Language Practice Test: The Carrot or the Stick?
  • SAT Writing and Language Practice Test: The Promise of Bio-Informatics
  • SAT Writing and Language Practice Test: What is Art?
  • SAT Writing and Language Practice Test: The Little Tramp
  • SAT Writing and Language Practice Test: Who Really Owns American Media?
  • SAT Writing and Language Practice Test: The Dangers of Superstition
  • SAT Writing and Language Practice Test: Skepticism and the Scientific Method
  • SAT Writing and Language Practice Test: The Magic of Bohemia
  • SAT Writing and Language Practice Test: Careers in Engineering
  • SAT Writing and Language Practice Test: An American Duty
  • SAT Writing and Language Practice Test: Idol Worship in Sports
  • SAT Writing and Language Practice Test: The Secret Life of Photons
  • SAT Writing and Language Practice Test 32: The Romani People
  • SAT Writing and Language Practice Test 33: Into the Abyss
  • SAT Writing and Language Practice Test 34: The Doctor Is In
  • SAT Writing and Language Practice Test 35: Maslow's Hierarchy and Violence
  • SAT Writing and Language Practice Test 36: Folklore
  • SAT Writing and Language Practice Test 37: Age of the Drone
  • SAT Writing and Language Practice Test 38: Policing Our Planet
  • SAT Writing and Language Practice Test 39: The Bullroarer
  • SAT Writing and Language Practice Test 40: Astrochemistry
  • SAT Writing and Language Practice Test 41: Blood Ties
  • SAT Writing and Language Practice Test 42: Out with the Old and the New
  • SAT Writing and Language Practice Test 43: Extra, Extra
  • SAT Writing and Language Practice Test 44: Parthenon
  • SAT Writing and Language Practice Test 45: Where Have All the Cavemen Gone?
  • SAT Writing and Language Practice Test 46: Chiroptera
  • SAT Writing and Language Practice Test 47: The Tyrannical and the Taciturn
  • SAT Writing and Language Practice Test 48
  • SAT Writing and Language Practice Test 49
  • SAT Writing and Language Practice Test 50: The Giants of Theater
  • SAT Writing and Language Practice Test 51: Gravity, It's Everywhere
  • SAT Writing and Language Practice Test 52: Do the Numbers Lie?
  • SAT Writing and Language Practice Test 53: Draw Your Home
  • SAT Writing and Language Practice Test 54: The Online Job Hunt
  • SAT Writing and Language Practice Test 55: The Glass Menagerie
  • SAT Writing and Language Practice Test 56: For Richer or For Poorer
  • SAT Writing and Language Practice Test 57: Hypocrisy of Hippocratic Humorism

New SAT Writing & Language Practice Tests PDF Download

  • New SAT Writing & Language Practice Test 1
  • New SAT Writing & Language Practice Test 2
  • New SAT Writing & Language Practice Test 3
  • New SAT Writing & Language Practice Test 1 Answer Explanations
  • New SAT Writing & Language Practice Test 2 Answer Explanations
  • New SAT Writing & Language Practice Test 3 Answer Explanations
  • New SAT Writing & Language Practice Test 4 PDF Download
  • New SAT Writing & Language Practice Test 5 PDF Download
  • New SAT Writing & Language Practice Test 6 PDF Download
  • New SAT Writing & Language Practice Test 7 PDF Download
  • New SAT Writing & Language Practice Test 8 PDF Download
  • New SAT Writing & Language Practice Test 9 PDF Download

More Information

  • How to Ace the SAT Writing and Language Test: A Strategy
  • Introduction to SAT Writing and Language Strategy
  • The SAT Writing and Language Test: Words
  • The SAT Writing and Language Test: Words and Punctuation in Reverse
  • The SAT Writing and Language Test: Punctuation
  • The SAT Writing and Language Test: Precision Questions
  • The SAT Writing and Language Test: Consistency Questions



SAT Essay Scoring Rubric

SAT Essay Scoring Criteria

Each essay receives three separate scores, for Reading, Writing, and Analysis, each on a scale of 1 (lowest) to 4 (highest) points. The criteria for each score point are listed below.

Reading

One Point

  • Demonstrates little or no comprehension of the source text
  • Fails to show an understanding of the text’s central idea(s), and may include only details without reference to central idea(s)
  • May contain numerous errors of fact and/or interpretation with regard to the text
  • Makes little or no use of textual evidence

Two Points

  • Demonstrates some comprehension of the source text
  • Shows an understanding of the text’s central idea(s) but not of important details
  • May contain errors of fact and/or interpretation with regard to the text
  • Makes limited and/or haphazard use of textual evidence

Three Points

  • Demonstrates effective comprehension of the source text
  • Shows an understanding of the text’s central idea(s) and important details
  • Is free of substantive errors of fact and interpretation with regard to the text
  • Makes appropriate use of textual evidence

Four Points

  • Demonstrates thorough comprehension of the source text
  • Shows an understanding of the text’s central idea(s) and most important details and how they interrelate
  • Is free of errors of fact or interpretation with regard to the text
  • Makes skillful use of textual evidence

Writing

One Point

  • Demonstrates little or no cohesion and inadequate skill in the use and control of language
  • May lack a clear central claim or controlling idea
  • Lacks a recognizable introduction and conclusion; does not have a discernible progression of ideas
  • Lacks variety in sentence structures; sentence structures may be repetitive; demonstrates general and vague word choice; word choice may be poor or inaccurate; may lack a formal style and objective tone
  • Shows a weak control of the conventions of standard written English and may contain numerous errors that undermine the quality of writing

Two Points

  • Demonstrates little or no cohesion and limited skill in the use and control of language
  • May lack a clear central claim or controlling idea or may deviate from the claim or idea
  • May include an ineffective introduction and/or conclusion; may demonstrate some progression of ideas within paragraphs but not throughout
  • Has limited variety in sentence structures; sentence structures may be repetitive; demonstrates general and vague word choice; word choice may be repetitive; may deviate noticeably from a formal style and objective tone
  • Shows a limited control of the conventions of standard written English and contains errors that detract from the quality of writing and may impede understanding

Three Points

  • Is mostly cohesive and demonstrates effective use and control of language
  • Includes a central claim or implicit controlling idea
  • Includes an effective introduction and conclusion; demonstrates a clear progression of ideas both within paragraphs and throughout the essay
  • Has variety in sentence structures; demonstrates some precise word choice; maintains a formal style and objective tone
  • Shows a good control of the conventions of standard written English and is free of significant errors that detract from the quality of writing

Four Points

  • Is cohesive and demonstrates highly effective use and command of language
  • Includes a precise central claim
  • Includes a skillful introduction and conclusion; demonstrates a deliberate and highly effective progression of ideas both within paragraphs and throughout the essay
  • Has a wide variety in sentence structures; demonstrates consistent use of precise word choice; maintains a formal style and objective tone
  • Shows a strong command of the conventions of standard written English and is free or virtually free of errors

Analysis

One Point

  • Offers little or no analysis or ineffective analysis of the source text and demonstrates little to no understanding of the analytical task
  • Identifies without explanation some aspects of the author’s use of evidence, reasoning, and/or stylistic and persuasive elements, and/or feature(s) of the student’s own choosing
  • Numerous aspects of analysis are unwarranted based on the text
  • Contains little or no support for claim(s) or point(s) made, or support is largely irrelevant
  • May not focus on features of the text that are relevant to addressing the task
  • Offers no discernible analysis (e.g., is largely or exclusively summary)

Two Points

  • Offers limited analysis of the source text and demonstrates only partial understanding of the analytical task
  • Identifies and attempts to describe the author’s use of evidence, reasoning, and/or stylistic and persuasive elements, and/or feature(s) of the student’s own choosing, but merely asserts rather than explains their importance
  • One or more aspects of analysis are unwarranted based on the text
  • Contains little or no support for claim(s) or point(s) made
  • May lack a clear focus on those features of the text that are most relevant to addressing the task

Three Points

  • Offers an effective analysis of the source text and demonstrates an understanding of the analytical task
  • Competently evaluates the author’s use of evidence, reasoning, and/or stylistic and persuasive elements, and/or features of the student’s own choosing
  • Contains relevant and sufficient support for claim(s) or point(s) made
  • Focuses primarily on those features of the text that are most relevant to addressing the task

Four Points

  • Offers an insightful analysis of the source text and demonstrates a sophisticated understanding of the analytical task
  • Offers a thorough, well-considered evaluation of the author’s use of evidence, reasoning, and/or stylistic and persuasive elements, and/or features of the student’s own choosing
  • Contains relevant, sufficient, and strategically chosen support for claim(s) or point(s) made
  • Focuses consistently on those features of the text that are most relevant to addressing the task

The Scoring Process

Two scorers read each essay independently and award 1 to 4 points in each of the three dimensions; the two scores are then added together, so Reading, Analysis, and Writing are each reported on a 2–8 scale.




2024 SAT Changes: What You Need To Know

The SAT has long been synonymous with Scantron bubble sheets and #2 pencils. But on December 2, 2023, the paper-and-pencil SAT will see its final administration, and starting in March 2024, the SAT is going completely digital. One thing not changing is the impact that a strong SAT performance can have on students’ college (and scholarship) applications, so as the test moves into its new era, let’s take a close look at the changes and, most importantly, what they mean for test-takers in 2024 and beyond.

What Is Changing

The Digital SAT marks the biggest change to the SAT in a generation, if not many generations. The digital nature is one big change, but it’s far from the only change and it won’t likely be the one that impacts your score–or your SAT vs. ACT decision–the most. Here’s a summary of the major changes you should know about:

  • Digital Format. This is the change that’s getting all the headlines. Like so many things in the old analog world, the paper-and-pencil test booklets, Scantron bubble sheets, and #2 pencils are giving way to laptops, tablets, and point-and-click tools. (Not to worry: you can still use your lucky #2 pencil for your scratchwork if you’d like!)
  • New-Look Reading & Writing Section. This is the update that seems most likely to change how students study. Once separate sections, Reading & Writing will now appear together in the same section, and they’ll do so without their trademark long (up to 750-word) passage format. On the new test, each question will have its own short (150 words maximum) prompt.
  • Reading Question Types. Along with the shortened Reading prompts comes a significantly different set of questions. Gone are the evidence-based pair questions and the paper-and-pencil “vocabulary in context” type (though some vocabulary/diction questions will still appear in a different form). The test will feature a new emphasis on supporting (and sometimes weakening) claims and hypotheses, along with an all-new question type that asks examinees to synthesize a student’s notes.
  • Shorter “Everything.” Reading and writing prompts will be dramatically shorter, and the duration of the test will be significantly shorter, too. The test is shrinking from 3 hours to a total testing time of 2 hours, 14 minutes, and from 154 total questions down to 98.
  • Well, Almost Everything. While both the duration of the test and the number of questions will be reduced, the time allotted per question will actually increase a bit (see the quick calculation after this list).
  • Expanded Calculator Use. The SAT is also doing away with the “no calculator” math section and, in the digital format, will even provide an on-screen graphing calculator for students to use (though students can still use their own calculator if they’d prefer, provided it meets the test’s rules ).
  • Adaptive Sections. You might ask how the SAT can ask over 35% fewer questions and still provide accurate scores. The primary factor is that the questions won’t be the same for everyone. Each student will see two Math sections and two Reading & Writing sections, and the difficulty of the second section in each discipline will vary based on the student’s performance on the first. Higher performers will see more difficult second sections with a higher maximum point value available, and those who didn’t perform as well will see more moderately difficult second sections without as high a potential score available (see the routing sketch after this list).
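To make the pacing point concrete, here is a quick back-of-the-envelope check using only the figures quoted in the list above; the snippet is illustrative arithmetic, nothing more:

```python
# Pacing check using the article's own figures: the paper SAT ran 3 hours
# (180 minutes) for 154 questions; the digital SAT runs 2 hours 14 minutes
# (134 minutes) for 98 questions.
OLD_MINUTES, OLD_QUESTIONS = 180, 154
NEW_MINUTES, NEW_QUESTIONS = 134, 98

print(f"paper SAT:   {OLD_MINUTES / OLD_QUESTIONS:.2f} min per question")  # ~1.17
print(f"digital SAT: {NEW_MINUTES / NEW_QUESTIONS:.2f} min per question")  # ~1.37

# The flip side of a shorter test: each question carries more weight.
print(f"one question = {1 / OLD_QUESTIONS:.1%} of the paper test")    # ~0.6%
print(f"one question = {1 / NEW_QUESTIONS:.1%} of the digital test")  # ~1.0%
```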
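And here is a minimal sketch of how two-stage section adaptivity works in principle. The routing cutoff, module size, and score ceilings below are hypothetical placeholders; the College Board does not publish its actual routing rules or scoring tables:

```python
# Minimal sketch of two-stage ("multistage") adaptive routing.
# Every number below is a hypothetical placeholder, not a real SAT value.
ROUTING_CUTOFF = 0.60  # assumed share of module-1 answers needed to route up

def route_second_module(module1_correct: int, module1_total: int) -> dict:
    """Choose the second module (and its score ceiling) from module-1 results."""
    if module1_correct / module1_total >= ROUTING_CUTOFF:
        return {"difficulty": "harder", "max_section_score": 800}
    # The easier module has gentler questions but a capped ceiling.
    return {"difficulty": "easier", "max_section_score": 650}  # assumed cap

print(route_second_module(20, 27))  # strong start -> harder module, full ceiling
print(route_second_module(13, 27))  # weaker start -> easier module, capped ceiling
```

This is also why a couple of careless mistakes in the first module can matter more than they would on a linear test: fall just below the routing line and the higher ceiling is out of reach, which is the score cap discussed later in this article.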

What Is Not Changing

Of course, not everything is changing. The College Board is quite confident that the scoring scale will remain the same–e.g. a 1400 next year will represent the same ability level as a 1400 did last year–and that schools will view performance the same way. And that’s because much of the test content and philosophy remains the same. Let’s break down what’s staying the same.

  • Math Content Coverage. The math sections will change in number of questions and pace-per-question, and students will have access to a calculator throughout, but the same topics and question types (including numeric entry) will still apply.
  • (Most) Writing Content Coverage. Generally speaking, the same grammar rules and principles of rhetoric will be tested on the new Reading & Writing section, just with a new look and feel (single questions vs. longer passages, and no “NO CHANGE” incumbent option). A few hallmarks of the old, longer Writing section–most notably questions that ask how an author should order sentences and/or whether the author should add or delete a sentence–seem to be going away.
  • Scoring Scale. The adaptive nature of test sections slightly changes the way that scores are calculated, since question difficulty now factors in alongside the simple count of correct answers, but the scores will still be on a 400-1600 scale and, generally speaking, the criteria for “what’s a good score” at your target schools will remain constant, too.

What That Means For You

Practicing with the Digital SAT tools is important. By 2024, most students should be more than comfortable with the concept of doing academic work on a computer or tablet. So it’s not the mere fact that the SAT is digital that’s important–it’s how aware and comfortable a student is with the specific tools themselves. Notably, these tools include:

  • An optional on-screen graphing calculator. Powered by Desmos, this calculator is a great option for many students–but will work a bit differently from your handheld graphing calculator. To decide which you prefer, you’ll want to practice with both options. And if you plan to use the on-screen calculator, you’ll want the quick muscle memory to graph, calculate, backspace, and clear without having to think about the functions.
  • An annotation tool to add notes to text. Since you can’t circle words or jot notes like you might on the paper test, this tool exists to let you add notes on the screen. But it’s only available for the Reading & Writing section and you’ll want to test it out to see how well it works for you and ensure that you’re able to use it quickly.
  • A flagging tool to highlight questions to return to. Along with the menu to see all questions, this enables you to hop between questions to manage your time, but again you’ll want to build speed with it so that it works, as intended, as a time-saver and not a time-waster.
  • A countdown clock. You can toggle between “hide” and “show” the allotted time remaining. The clock can be distracting to some students if it’s constantly ticking in front of you; others like having it persistently there. To know what works for you–and to have a plan for when you’ll toggle it on vs. off to check your pace–again you’ll want to practice.
  • A formula reference sheet for math. Make sure you know which formulas are available and which are not.
  • Keyboard shortcuts. If you plan to use these shortcuts, as with these other tools make sure you spend the time beforehand to get them to the point where they really do save you time on test day.

Reading questions require specific preparation. For 11th graders who took the paper SAT in the fall, or anyone who’s borrowed test prep materials made for the earlier version, the Reading questions in particular will look a lot different and require a significantly different skill set. Most notably:

  • With 54 total verbal questions, you’ll face an incredible variety of topics and have to context-switch often. Pro tip: read the question stems first so that you know your “job” prior to reading each new topic.
  • The question type that asks you to synthesize notes is all-new and unique to the SAT. Pro tip: the purpose outlined in the question stem is the most important phrase of all.
  • Short-form passages require a narrower type of reading comprehension. Many questions will come down to one or two key words in a high-leverage part of the question, so you’ll want to train yourself to focus on that precision in language and to know where on each type of question to direct your focus. Pro tip: when you’re asked to support a theory or claim, the specific adjectives and modifiers in that theory/claim really matter.
  • Vocabulary/diction questions no longer ask you for the meaning of a word, but instead to fit the proper word into a blank. Pro tip: the whole prompt matters, so make sure you understand the context from the sentences that don’t have the blank, too.  

Minimize mistakes and pacing issues. The shortened, adaptive SAT test format puts a bit of extra emphasis on making every question count. Fewer questions means that a silly or careless mistake makes up a higher percentage of your score than before, and the adaptive nature of the test means that a mistake like that could have even outsized importance. Here’s why:

  • A shorter test magnifies mistakes. On a longer test, one careless mistake gets diluted by your performance on so many more questions. And the same is true of timing: over the course of a longer test, you have more time to make up for a wasted minute. The shorter the test, the more a single mistake drags you back from your true ability.
  • Section adaptivity can, in some cases, really magnify a mistake. By and large you shouldn’t worry about the adaptivity of the SAT at all (more on that later). But one thing you should know is that there is a dividing line on your first section of each discipline that determines whether you get the advanced second section and its higher potential point value. And if one or two silly mistakes drag you behind that line, that could artificially cap your score by not giving you a chance at that advanced second section.

Importantly, this doesn’t mean that you should be intimidated or paranoid throughout each section! But what it does mean is that 1) you should use practice tests to identify the types of mistakes you make under pressure so that you’re aware of them on the exam, 2) you should practice pacing so that you have a plan to not run short on time and miss questions you should get right, and 3) you should use any extra time you have to double-check for common mistakes so that you don’t “give back” any points that should be yours.

What That Doesn’t Mean For You

Don’t try to game the adaptive algorithm. With any adaptive test, there’s always a temptation to spend more time trying to hack the algorithm than studying to just rack up correct answers. And do you know what the algorithm favors most? Correct answers–they’re the best way to “hack” your way to a high score.

Adaptive testing is new to the SAT but has been used for a great many tests over decades, and the story is always the same: the time and focus you spend trying to gain an edge doesn’t gain you any points, whereas that same time and energy spent on shoring up shaky skills can really improve your score. Trust the psychometricians (standardized test data scientists) and work to build your skills and familiarity with the questions.

Don’t (completely) throw out old test prep materials. If your sister or friend swears by her flashcards or test-taking strategies, you can still largely put them to good use! Anything related to math, grammar, or testing strategy (e.g., using answer choices as assets or plugging in numbers for certain algebra problems) can still very much help you. Just know that Reading is dramatically different, Writing questions look a lot different, and you’ll want to make sure you do a lot of practice with Digital SAT-specific problems; the older materials can then serve as a good supplement.

Don’t spend much time comparing Old SAT vs. New SAT. As of the evening of December 2 – the last date of the paper-and-pencil SAT – there’s just one SAT and it’s digital. Comparing the tests is a recipe for confusion (even now, decades after the SAT tested vocabulary through analogies and featured a, let’s say, “disincentive” for guessing, there are students getting bad advice from people who wax nostalgic for those features of a long-since-retired SAT). The best way to study and to help students is to focus on the one-and-only SAT and avoid comparisons altogether; the vast majority of students who will take the Digital SAT in 2024 will have never taken the older version, anyway.

Why is the SAT changing?

In short, because of competition. Colleges view SAT and ACT scores the same way, so the choice between taking the SAT and ACT comes down to test-taker preference (or state administrator preference when a state chooses one of these tests as a statewide, in-school exam). The College Board is betting that a shorter test–with the potential for more testing dates and locations, and with built-in technology tools for a digital generation–will encourage more students to take the SAT either instead of or in addition to the ACT. Which brings up the question:

Should you take the ACT or the Digital SAT?

The last time the SAT changed, in 2016, it became a lot more like the ACT. This time? It’s creating a lot of distance between the two. With two significantly different tests, which one should you choose? Here’s a breakdown of the two:

The ACT is longer, broader, and faster.

Not only does the ACT have longer sections with more questions, it also has:

  • Longer reading passages. The SAT has shortened its Reading & Writing passages to 150 words or less; the ACT Reading and English passages can still go up to 800.
  • Less time per question. ACT Math gives you one minute per question; the SAT gives you about one and a half. ACT Reading, English, and Science give you a little under a minute per question, while SAT Reading & Writing gives you a little over (see the section-by-section arithmetic after this list). The ACT rewards those who work quickly.
  • A wider array of math skills, particularly for more advanced subjects. The ACT includes questions on vectors, matrices, and reciprocal trig functions–topics that don’t appear on the SAT.
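The per-question arithmetic behind that comparison is below. The ACT figures are the long-standing section timings; the digital SAT split of 64 minutes/54 questions for Reading & Writing and 70 minutes/44 questions for Math is an assumption, though it matches the 134-minute, 98-question totals quoted earlier:

```python
# Minutes per question by section. ACT timings are the long-standing ones;
# the digital SAT split below is assumed (it is consistent with the
# 2 h 14 min / 98-question totals quoted earlier in this article).
sections = {
    "ACT Math":    (60, 60),  # (minutes, questions) -> 1.00 min/q
    "ACT Reading": (35, 40),  # -> 0.88 min/q
    "ACT English": (45, 75),  # -> 0.60 min/q
    "ACT Science": (35, 40),  # -> 0.88 min/q
    "SAT R&W":     (64, 54),  # -> ~1.19 min/q
    "SAT Math":    (70, 44),  # -> ~1.59 min/q
}
for name, (minutes, questions) in sections.items():
    print(f"{name}: {minutes / questions:.2f} min per question")
```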

The SAT leaves less room for error.

Of course, on the SAT you’re not the only one getting more time per question and fewer math rules to know. Everyone takes the same test, and the scoring scale reflects that. So know that:

  • The SAT’s shorter reading prompts require even more attention to detail to get what you need out of short, dense text.
  • The SAT’s more time-per-question means that you’ll need to use it to avoid and correct mistakes.
  • As mentioned earlier, with fewer questions, any one mistake on a question you should have gotten right makes up a larger proportion of your score.
  • SAT Math includes about 25% non-multiple-choice questions, where you have to supply a number as your answer. Educated guessing and process of elimination can’t help you there.

So what’s the verdict?

Take the SAT if:

  • You like to take tests more slowly and methodically, and when you can take your time your accuracy is quite high.
  • You find it difficult to focus on longer reading prompts, particularly in a timed, high-pressure situation.
  • You’re comfortable hopping between short tasks on different topics (like you’ll have to on SAT Reading & Writing).

Take the ACT if:

  • You tend to work quickly on tests.
  • You’re prone to occasional mistakes, but overall comfortable with a broad range of challenging math skills.
  • You’d prefer a handful of longer reading tasks over a lot of short ones.



Original article | Open access | Published: 08 July 2024

Can you spot the bot? Identifying AI-generated writing in college essays

Tal Waltzer (ORCID: orcid.org/0000-0003-4464-0336), Celeste Pilegard & Gail D. Heyman

International Journal for Educational Integrity, volume 20, Article number: 11 (2024)

Abstract

The release of ChatGPT in 2022 has generated extensive speculation about how Artificial Intelligence (AI) will impact the capacity of institutions for higher learning to achieve their central missions of promoting learning and certifying knowledge. Our main questions were whether people could identify AI-generated text and whether factors such as expertise or confidence would predict this ability. The present research provides empirical data to inform these speculations through an assessment given to a convenience sample of 140 college instructors and 145 college students (Study 1) as well as to ChatGPT itself (Study 2). The assessment was administered in an online survey and included an AI Identification Test which presented pairs of essays: In each case, one was written by a college student during an in-class exam and the other was generated by ChatGPT. Analyses with binomial tests and linear modeling suggested that the AI Identification Test was challenging: On average, instructors were able to guess which one was written by ChatGPT only 70% of the time (compared to 60% for students and 63% for ChatGPT). Neither experience with ChatGPT nor content expertise improved performance. Even people who were confident in their abilities struggled with the test. ChatGPT responses reflected much more confidence than human participants despite performing just as poorly. ChatGPT responses on an AI Attitude Assessment measure were similar to those reported by instructors and students except that ChatGPT rated several AI uses more favorably and indicated substantially more optimism about the positive educational benefits of AI. The findings highlight challenges for scholars and practitioners to consider as they navigate the integration of AI in education.

Introduction

Artificial intelligence (AI) is becoming ubiquitous in daily life. It has the potential to help solve many of society’s most complex and important problems, such as improving the detection, diagnosis, and treatment of chronic disease (Jiang et al. 2017 ), and informing public policy regarding climate change (Biswas 2023 ). However, AI also comes with potential pitfalls, such as threatening widely-held values like fairness and the right to privacy (Borenstein and Howard 2021 ; Weidinger et al. 2021 ; Zhuo et al. 2023 ). Although the specific ways in which the promises and pitfalls of AI will play out remain to be seen, it is clear that AI will change human societies in significant ways.

In late November of 2022, the generative large-language model ChatGPT (GPT-3, Brown et al. 2020 ) was released to the public. It soon became clear that talk about the consequences of AI was much more than futuristic speculation, and that we are now watching its consequences unfold before our eyes in real time. This is not only because the technology is now easily accessible to the general public, but also because of its advanced capacities, including a sophisticated ability to use context to generate appropriate responses to a wide range of prompts (Devlin et al. 2018 ; Gilson et al. 2022 ; Susnjak 2022 ; Vaswani et al. 2017 ).

How AI-generated content poses challenges for educational assessment

Since AI technologies like ChatGPT can flexibly produce human-like content, this raises the possibility that students may use the technology to complete their academic work for them, and that instructors may not be able to tell when their students turn in such AI-assisted work. This possibility has led some people to argue that we may be seeing the end of essay assignments in education (Mitchell 2022; Stokel-Walker 2022). Even some advocates of AI in the classroom have expressed concerns about its potential for undermining academic integrity (Cotton et al. 2023; Eke 2023). For example, as Kasneci et al. (2023) noted, the technology might “amplify laziness and counteract the learners’ interest to conduct their own investigations and come to their own conclusions or solutions” (p. 5). In response to these concerns, some educational institutions have already tried to ban ChatGPT (Johnson 2023; Rosenzweig-Ziff 2023; Schulten 2023).

These discussions are founded on extensive scholarship on academic integrity, which is fundamental to ethics in higher education (Bertram Gallant 2011 ; Bretag 2016 ; Rettinger and Bertram Gallant 2022 ). Challenges to academic integrity are not new: Students have long found and used tools to circumvent the work their teachers assign to them, and research on these behaviors spans nearly a century (Cizek 1999 ; Hartshorne and May 1928 ; McCabe et al. 2012 ). One recent example is contract cheating, where students pay other people to do their schoolwork for them, such as writing an essay (Bretag et al. 2019 ; Curtis and Clare 2017 ). While very few students (less than 5% by most estimates) tend to use contract cheating, AI has the potential to make cheating more accessible and affordable and it raises many new questions about the relationship between technology, academic integrity, and ethics in education (Cotton et al. 2023 ; Eke 2023 ; Susnjak 2022 ).

To date, there is very little empirical evidence to inform debates about the likely impact of ChatGPT on education or to inform what best practices might look like regarding use of the technology (Dwivedi et al. 2023 ; Lo 2023 ). The primary goal of the present research is to provide such evidence with reference to college-essay writing. One critical question is whether college students can pass off work generated by ChatGPT as their own. If so, large numbers of students may simply paste in ChatGPT responses to essays they are asked to write without the kind of active engagement with the material that leads to deep learning (Chi and Wylie 2014 ). This problem is likely to be exacerbated when students brag about doing this and earning high scores, which can encourage other students to follow suit. Indeed, this kind of bragging motivated the present work (when the last author learned about a college student bragging about using ChatGPT to write all of her final papers in her college classes and getting A’s on all of them).

In support of the possibility that instructors may have trouble identifying ChatGPT-generated text, some previous research suggests that ChatGPT is capable of successfully generating college- or graduate-school-level writing. Yeadon et al. (2023) used AI to generate responses to essays based on a set of prompts used in a physics module that was in current use and asked graders to evaluate the responses. An example prompt they used was: “How did natural philosophers’ understanding of electricity change during the 18th and 19th centuries?” The researchers found that the AI-generated responses earned scores comparable to most students taking the module and concluded that current AI large-language models pose “a significant threat to the fidelity of short-form essays as an assessment method in Physics courses.” Terwiesch (2023) found that ChatGPT scored at a B or B- level on the final exam of Operations Management in an MBA program, and Katz et al. (2023) found that ChatGPT has the necessary legal knowledge, reading comprehension, and writing ability to pass the Bar exam in nearly all jurisdictions in the United States. This evidence makes it very clear that ChatGPT can generate well-written content in response to a wide range of prompts.

Distinguishing AI-generated from human-generated work

What is still not clear is how well instructors can distinguish ChatGPT-generated writing from writing produced by college students, given that ChatGPT-generated writing could be both high quality and distinctly different from anything people generally write (e.g., because it has particular stylistic features). To our knowledge, this question has not yet been addressed, but a few prior studies have examined related questions. In the first such study, Gunser et al. (2021) used writing generated by a ChatGPT predecessor, GPT-2 (see Radford et al. 2019). They tested nine participants with a professional background in literature. These participants both generated content (i.e., wrote continuations after receiving the first few lines of unfamiliar poems or stories) and judged how other writing was generated. Gunser et al. (2021) found that misclassifications were relatively common. For example, in 18% of cases participants judged AI-assisted writing to be human-generated. This suggests that even AI technology substantially less advanced than ChatGPT is capable of generating writing that is hard to distinguish from human writing.

Köbis and Mossink ( 2021 ) also examined participants’ ability to distinguish between poetry written by GPT-2 and humans. Their participants were given pairs of poems. They were told that one poem in each pair was written by a human and the other was written by GPT-2, and they were asked to determine which was which. In one of their studies, the human-written poems were written by professional poets. The researchers generated multiple poems in response to prompts, and they found that when the comparison GPT-2 poems were ones they selected as the best among the set generated by the AI, participants could not distinguish between the GPT-2 and human writing. However, when researchers randomly selected poems generated by GPT-2, participants were better than chance at detecting which ones were generated by the AI.

In a third relevant study, Waltzer et al. (2023a) tested high school teachers and students. All participants were presented with pairs of English essays, such as one on why literature matters. In each case one essay was written by a high school student and the other was generated by ChatGPT, and participants were asked which essay in each pair had been generated by ChatGPT. Waltzer et al. (2023a) found that teachers only got it right 70% of the time, and that students’ performance was even worse (62%). They also found that well-written essays were harder to distinguish from those generated by ChatGPT than poorly written ones. However, the extent to which these findings are specific to the high school context is unclear. It should also be noted that there were no clear right or wrong answers in the types of essays used in Waltzer et al. (2023a), so the results may not generalize to essays that ask for factual information based on specific class content.

AI detection skills, attitudes, and perceptions

If college instructors find it challenging to distinguish between writing generated by ChatGPT and college students, it raises the question of what factors might be correlated with the ability to perform this discrimination. One possible correlate is experience with ChatGPT, which may allow people to recognize patterns in the writing style it generates, such as a tendency to formally summarize previous content. Content-relevant knowledge is another possible predictor. Individuals with such knowledge will presumably be better at spotting errors in answers, and it is plausible that instructors know that AI tools are likely to get content of introductory-level college courses correct and assume that essays that contain errors are written by students.

Another possible predictor is confidence in one’s ability to discriminate on the task or on particular items of the task (Erickson and Heit 2015; Fischer and Budescu 2005; Wixted and Wells 2017). In other words, are AI discriminations made with a high degree of confidence more likely to be accurate than low-confidence discriminations? In some cases, confidence judgments are a good predictor of accuracy, such as on many perceptual decision tasks (e.g., detecting contrast between light and dark bars, Fleming et al. 2010). However, in other cases correlations between confidence and accuracy are small or non-existent, such as on some deductive reasoning tasks (e.g., Shynkaruk and Thompson 2006). Links to confidence can also depend on how confidence is measured: Gigerenzer et al. (1991) found overconfidence on individual items, but good calibration when participants were asked how many items they got right after seeing many items.

In addition to the importance of gathering empirical data on the extent to which instructors can distinguish ChatGPT from college student writing, it is important to examine how college instructors and students perceive AI in education given that such attitudes may affect behavior (Al Darayseh 2023 ; Chocarro et al. 2023 ; Joo et al. 2018 ; Tlili et al. 2023 ). For example, instructors may only try to develop precautions to prevent AI cheating if they view this as a significant concern. Similarly, students’ confusion about what counts as cheating can play an important role in their cheating decisions (Waltzer and Dahl 2023 ; Waltzer et al. 2023b ).

The present research

In the present research we developed an assessment that we gave to college instructors and students (Study 1) and ChatGPT itself (Study 2). The central feature of the assessment was an AI Identification Test, which included 6 pairs of essays. As the instructions indicated, one essay in each pair was generated by ChatGPT and the other was written by a college student. The task was to determine which essay was written by the chatbot. The essay pairs were drawn from larger pools of essays of each type.

The student essays were written by students as part of a graded exam in a psychology class, and the ChatGPT essays were generated in response to the same essay prompts. Of interest were overall performance and potential correlates of performance. Performance of college instructors was of particular interest because they are the ones typically responsible for grading, but performance of students and ChatGPT was also of interest for comparison. ChatGPT was also of interest given anecdotal evidence that college instructors are asking ChatGPT to tell them whether pieces of work were AI-generated. For example, the academic integrity office at one major university sent out an announcement asking instructors not to report students for cheating if their evidence was solely based on using ChatGPT to detect AI-generated writing (UCSD Academic Integrity Office, 2023).

We also administered an AI Attitude Assessment (Waltzer et al. 2023a ), which included questions about overall levels of optimism and pessimism about the use of AI in education, and the appropriateness of specific uses of AI in academic settings, such as a student submitting an edited version of a ChatGPT-generated essay for a writing assignment.

Study 1: College instructors and students

Participants were given an online assessment that included an AI Identification Test , an AI Attitude Assessment , and some demographic questions. The AI Identification Test was developed for the present research, as described below (see Materials and Procedure). The test involved presenting six pairs of essays, with the instructions to try to identify which one was written by ChatGPT in each case. Participants also rated their confidence before the task and after responding to each item, and reported how many they thought they got right at the end. The AI Attitude Assessment was drawn from Waltzer et al. ( 2023a ) to assess participants’ views of the use of AI in education.

Participants

For the testing phase of the project, we recruited 140 instructors who had taught or worked as a teaching assistant for classes at the college level (69 of them taught psychology and 63 taught other subjects such as philosophy, computer science, and history). We recruited instructors through personal connections and snowball sampling. Most of the instructors were women (59%), white (60%), and native English speakers (67%), and most of them taught at colleges in the United States (91%). We also recruited 145 undergraduate students ( M age = 20.90 years, 80% women, 52% Asian, 63% native English speakers) from a subject recruitment system in the psychology department at a large research university in the United States. All data collection took place between 3/15/2023 and 4/15/2023 and followed our pre-registration plan ( https://aspredicted.org/mk3a2.pdf ).

Materials and procedure

Developing the AI Identification Test

To create the stimuli for the AI Identification Test, we first generated two prompts for the essays (Table  1 ). We chose these prompts in collaboration with an instructor to reflect real student assignments for a college psychology class.

Fifty undergraduate students hand-wrote both essays as part of a proctored exam in their psychology class on 1/30/2023. Research assistants transcribed the essays and removed essays from the pool that were not written in third-person or did not include the correct number of sentences. Three additional essays were excluded for being illegible, and another one was excluded for mentioning a specific location on campus. This led to 15 exclusions for the Phonemic Awareness prompt and 25 exclusions for the Studying Advice prompt. After applying these exclusions, we randomly selected 25 essays for each prompt to generate the 6 pairs given to each participant. To prepare the texts for use as stimuli, research assistants then used a word processor to correct obvious errors that could be corrected without major rewriting (e.g., punctuation, spelling, and capitalization).

All student essays were graded according to the class rubric on a scale from 0 to 10 by two individuals on the teaching team of the class: the course’s primary instructor and a graduate student teaching assistant. Grades were averaged together to create one combined grade for each essay (mean: 7.93, SD: 2.29, range: 2–10). Two of the authors also scored the student essays for writing quality on a scale from 0 to 100, including clarity, conciseness, and coherence (combined score mean: 82.83, SD : 7.53, range: 65–98). Materials for the study, including detailed scoring rubrics, are available at https://osf.io/2c54a/ .

The ChatGPT stimuli were prepared by entering the same prompts into ChatGPT ( https://chat.openai.com/ ) between 1/23/2023 and 1/25/2023, and re-generating the responses until there were 25 different essays for each prompt.

Testing Phase

In the participant testing phase, college instructors and students took the assessment, which lasted approximately 10 min. All participants began by indicating the name of their school and whether they were an instructor or a student, how familiar they were with ChatGPT (“Please rate how much experience you have with using ChatGPT”), and how confident they were that they would be able to distinguish between writing generated by ChatGPT and by college students. Then they were told they would get to see how well they score at the end, and they began the AI Identification Test.

The AI Identification Test consisted of six pairs of essays: three Phonemic Awareness pairs, and three Studying Advice pairs, in counterbalanced order. Each pair included one text generated by ChatGPT and one text generated by a college student, both drawn randomly from their respective pools of 25 possible essays. No essays were repeated for the same participant. Figure  1 illustrates what a text pair looked like in the survey.

[Figure 1: Example pair of essays for the Phonemic Awareness prompt. Top: student essay. Bottom: ChatGPT essay.]
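For readers who want to picture the trial structure, the sketch below expresses it in Python. This is an illustration, not the authors' code; the pool layout and the use of shuffling to approximate the counterbalancing are assumptions:

```python
import random

def build_trials(pools: dict) -> list:
    """Build one participant's six trials: three pairs per prompt, each pair
    combining one student essay and one ChatGPT essay drawn from pools of 25,
    with no essay repeated for the same participant (illustrative only)."""
    trials = []
    for prompt in ("Phonemic Awareness", "Studying Advice"):
        student_essays = random.sample(pools[prompt]["student"], 3)
        chatgpt_essays = random.sample(pools[prompt]["chatgpt"], 3)
        for s, c in zip(student_essays, chatgpt_essays):
            pair = [("student", s), ("chatgpt", c)]
            random.shuffle(pair)  # randomize top/bottom position on screen
            trials.append({"prompt": prompt, "pair": pair})
    random.shuffle(trials)  # approximate the counterbalanced prompt order
    return trials
```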

For each pair, participants selected the essay they thought was generated by ChatGPT and indicated how confident they were about their choice (slider from 0 = “not at all confident” to 100 = “extremely confident”). After all six pairs, participants estimated how well they did (“How many of the text pairs do you think you answered correctly?”).

After completing the AI Identification task, participants completed the AI Attitude Assessment concerning their views of ChatGPT in educational contexts (see Waltzer et al. 2023a ). On this assessment, participants first estimated what percent of college students in the United States would ask ChatGPT to write an essay for them and submit it. Next, they rated their concerns (“How concerned are you about ChatGPT having negative effects on education?”) and optimism (“How optimistic are you about ChatGPT having positive benefits for education?”) about the technology on a scale from 0 (“not at all”) to 100 (“extremely”). On the final part of the AI Attitude Assessment, they evaluated five different possible uses of ChatGPT in education (such as submitting an essay after asking ChatGPT to improve the vocabulary) on a scale from − 10 (“really bad”) to + 10 (“really good”).

Participants also rated the extent to which they already knew the subject matter (i.e., cognitive psychology and the science of learning), and were given optional open-ended text boxes to share any experiences from their classes or suggestions for instructors related to the use of ChatGPT, or to comment on any of the questions in the Attitude Assessment. Instructors were also asked whether they had ever taught a psychology class and to describe their teaching experience. At the end, all participants reported demographic information (e.g., age, gender). All prompts are available in the online supplementary materials ( https://osf.io/2c54a/ ).

Data Analysis

We descriptively summarized variables of interest (e.g., overall accuracy on the Identification Test). We used inferential tests to predict Identification Test accuracy from group (instructor or student), confidence, subject expertise, and familiarity with ChatGPT. We also predicted responses to the AI Attitude Assessment as a function of group (instructor or student). All data analysis was done using R Statistical Software (v4.3.2; R Core Team 2021 ).

Key hypotheses were tested using Welch’s two-sample t-tests for group comparisons, linear regression models with F-tests for other predictors of accuracy, and Generalized Linear Mixed Models (GLMMs, Hox 2010 ) with likelihood ratio tests for within-subjects trial-by-trial analyses. GLMMs used random intercepts for participants and predicted trial performance (correct or incorrect) using trial confidence and essay quality as fixed effects.
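As a rough illustration of the likelihood-ratio test logic: the authors fit their GLMMs in R, so the Python sketch below is only a simplified fixed-effects analogue that omits the random intercepts for participants:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def lr_test(trials: pd.DataFrame) -> tuple:
    """Likelihood-ratio test for a trial-level predictor of accuracy.
    trials holds one row per trial: correct (0/1), high_confidence (0/1).
    Unlike the authors' GLMMs, this ignores clustering within participants."""
    full = smf.logit("correct ~ high_confidence", data=trials).fit(disp=False)
    reduced = smf.logit("correct ~ 1", data=trials).fit(disp=False)
    lr_stat = 2 * (full.llf - reduced.llf)  # likelihood-ratio statistic
    p_value = stats.chi2.sf(lr_stat, df=1)  # one parameter difference
    return lr_stat, p_value
```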

Overall performance on AI identification test

Instructors correctly identified which essay was written by the chatbot 70% of the time, which was above chance (chance: 50%, binomial test: p  < .001, 95% CI: [66%, 73%]). Students also performed above chance, with an average score of 60% (binomial test: p  < .001, 95% CI: [57%, 64%]). Instructors performed significantly better than students (Welch’s two-sample t -test: t [283] = 3.30, p  = .001).
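These headline tests are easy to reproduce in form. Below is a sketch with scipy in which the trial counts follow from the sample sizes reported above (140 instructors and 145 students, six trials each) but the per-person scores are random placeholders rather than the real data:

```python
import numpy as np
from scipy import stats

# Binomial test of pooled accuracy against chance (50%): 140 instructors
# x 6 trials = 840 trials, of which 70% (588) were answered correctly.
result = stats.binomtest(k=588, n=840, p=0.5)
print(result.pvalue, result.proportion_ci(confidence_level=0.95))

# Welch's two-sample t-test comparing instructor vs. student accuracy,
# using simulated placeholder scores in place of the real data.
rng = np.random.default_rng(0)
instructor_acc = rng.binomial(6, 0.70, size=140) / 6
student_acc = rng.binomial(6, 0.60, size=145) / 6
print(stats.ttest_ind(instructor_acc, student_acc, equal_var=False))
```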

Familiarity with subject matter

Participants rated how much previous knowledge they had in the essay subject matter (i.e., cognitive psychology and the science of learning). Linear regression models with F- tests indicated that familiarity with the subject did not predict instructors’ or students’ accuracy, F s(1) < 0.49, p s > .486. Psychology instructors did not perform any better than non-psychology instructors, t (130) = 0.18, p  = .860.

Familiarity with ChatGPT

Nearly all participants (94%) said they had heard of ChatGPT before taking the survey, and most instructors (62%) and about half of students (50%) said they had used ChatGPT before. For both groups, participants who used ChatGPT did not perform any better than those who never used it before, F s(1) < 0.77, p s > .383. Instructors’ and students’ experience with ChatGPT (from 0 = not at all experienced to 100 = extremely experienced) also did not predict their performance, F s(1) < 0.77, p s > .383.

Confidence and estimated score

Before they began the Identification Test, both instructors and students expressed low confidence in their abilities to identify the chatbot ( M  = 34.60 on a scale from 0 = not at all confident to 100 = extremely confident). Their confidence was significantly below the midpoint of the scale (midpoint: 50), one-sample t -test: t (282) = 11.46, p  < .001, 95% CI: [31.95, 37.24]. Confidence ratings that were done before the AI Identification test did not predict performance for either group, Pearson’s r s < .12, p s > .171.

Right after they completed the Identification Test, participants guessed how many text pairs they got right. Both instructors and students significantly underestimated their performance by about 15%, 95% CI: [11%, 18%], t (279) = -8.42, p  < .001. Instructors’ estimated scores were positively correlated with their actual scores, Pearson’s r  = .20, t (135) = 2.42, p  = .017. Students’ estimated scores were not related to their actual scores, r  = .03, p  = .731.

Trial-by-trial performance on AI identification test

Participants’ confidence ratings on individual trials were counted as high if they fell above the midpoint (> 50 on a scale from 0 = not at all confident to 100 = extremely confident). For these within-subjects trial-by-trial analyses, we used Generalized Linear Mixed Models (GLMMs, Hox 2010 ) with random intercepts for participants and likelihood ratio tests (difference score reported as D ). Both instructors and students performed better on trials in which they expressed high confidence (instructors: 73%, students: 63%) compared to low confidence (instructors: 65%, students: 56%), D s(1) > 4.59, p s < .032.

Student essay quality

We used two measures to capture the quality of each student-written essay: its assigned grade from 0 to 10 based on the class rubric, and its writing quality score from 0 to 100. Assigned grade was weakly related to instructors’ accuracy, but not to students’ accuracy. The text pairs that instructors got right tended to include student essays that earned slightly lower grades ( M  = 7.89, SD  = 2.22) compared to those they got wrong ( M  = 8.17, SD  = 2.16), D (1) = 3.86, p  = .050. There was no difference for students, D (1) = 2.84, p  = .092. Writing quality score did not differ significantly between correct and incorrect trials for either group, D (1) = 2.12, p  = .146.

AI Attitude Assessment

Concerns and hopes about ChatGPT

Both instructors and students expressed intermediate levels of concern and optimism. Specifically, on a scale from 0 (“not at all”) to 100 (“extremely”), participants expressed intermediate concern about ChatGPT having negative effects on education ( M instructors = 59.82, M students = 55.97) and intermediate optimism about it having positive benefits ( M instructors = 49.86, M students = 54.08). Attitudes did not differ between instructors and students, t s < 1.43, p s > .154. Participants estimated that just over half of college students (instructors: 57%, students: 54%) would use ChatGPT to write an essay for them and submit it. These estimates also did not differ by group, t (278) = 0.90, p  = .370.

Evaluations of ChatGPT uses

Participants evaluated five different uses of ChatGPT in educational settings on a scale from − 10 (“really bad”) to + 10 (“really good”). Both instructors and students rated it very bad for someone to ask ChatGPT to write an essay for them and submit the direct output, but instructors rated it significantly more negatively (instructors: -8.95, students: -7.74), t (280) = 3.59, p  < .001. Attitudes did not differ between groups for any of the other scenarios (Table  2 ), t s < 1.31, p s > .130.

Exploratory analysis of demographic factors

We also conducted exploratory analyses looking at ChatGPT use and attitudes among different demographic groups (gender, race, and native English speakers). We combined instructors and students because their responses to the Attitude Assessment did not differ. In these exploratory analyses, we found that participants who were not native English speakers were more likely to report using ChatGPT and to view it more positively. Specifically, 69% of non-native English speakers had used ChatGPT before, versus 48% of native English speakers, D(1) = 12.00, p < .001. Regardless of native language, the more experience someone had with ChatGPT, the more optimism they reported, F(1) = 18.71, p < .001, r = .37. Non-native speakers rated the scenario where a student writes an essay and asks ChatGPT to improve its vocabulary slightly positively (1.19) whereas native English speakers rated it slightly negatively (-1.43), F(1) = 11.00, p = .001. Asian participants expressed higher optimism (M = 59.14) than non-Asian participants (M = 47.29), F(1) = 10.05, p = .002. We found no other demographic differences.

Study 2: ChatGPT

Study 1 provided data on college instructors’ and students’ ability to recognize ChatGPT-generated writing and on their views of the technology. In Study 2, the primary question was whether ChatGPT itself might perform better at identifying ChatGPT-generated writing. Indeed, the authors have heard this discussed as a possible solution for recognizing AI-generated writing. We addressed this question by repeatedly asking ChatGPT to act as a participant in the AI Identification Task. While doing so, we administered the rest of the assessment given to participants in Study 1. This included our AI Attitude Assessment, which allowed us to examine the extent to which ChatGPT produced attitude responses that were similar to those of the participants in Study 1.

Participants, materials, and procedures

There were no human participants for Study 2. We collected 40 survey responses from ChatGPT, each run in a separate session on the platform ( https://chat.openai.com/ ) between 5/4/2023 and 5/15/2023.

Two research assistants were trained on how to run the survey in the ChatGPT online interface. All prompts from the Study 1 survey were used, with minor modifications to suit the chat format. For example, slider questions were explained in the prompt, so instead of “How confident are you about this answer?” the prompt was “How confident are you about this answer from 0 (not at all confident) to 100 (extremely confident)?”. In pilot testing, we found that ChatGPT sometimes failed to answer the question (e.g., by not providing a number), so we prepared a second prompt for every question that the researcher used whenever the first prompt was not answered (e.g., “Please answer the above question with one number between 0 to 100.”). If ChatGPT still failed on the second prompt, the researcher marked it as a non-response and moved on to the next question in the survey.
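The two-prompt fallback procedure amounts to a simple protocol, and a hypothetical automated version is sketched below. The authors ran each session by hand in the chat interface; the openai client usage, model name, and validity check here are all assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_with_fallback(history: list, question: str, fallback: str):
    """Ask one survey question; if the reply is unusable, retry once with the
    prepared fallback prompt, then record a non-response and move on."""
    for prompt in (question, fallback):
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        if any(ch.isdigit() for ch in reply):  # placeholder validity check
            return reply
    return None  # non-response: skip to the next question
```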

Data analysis

Like Study 1, all analyses were done in R Statistical Software (R Core Team 2021 ). Key analyses first used linear regression models and F -tests to compare all three groups (instructors, students, ChatGPT). When these omnibus tests were significant, we followed up with post-hoc pairwise comparisons using Tukey’s method.

AI identification test

Overall accuracy

ChatGPT generated correct responses on 63% of trials in the AI Identification Test, which was significantly above chance, binomial test p  < .001, 95% CI: [57%, 69%]. Pairwise comparisons found that this performance by ChatGPT was not any different from that of instructors or students, t s(322) < 1.50, p s > .292.

Confidence and estimated performance

Unlike the human participants, ChatGPT produced responses with very high confidence both before the task generally (M = 71.38, median = 70) and during individual trials specifically (M = 89.82, median = 95). General confidence ratings before the test were significantly higher for ChatGPT than for the humans (instructors: 34.35, students: 34.83), ts(320) > 9.47, ps < .001. But, as with the human participants, this confidence did not predict performance on the subsequent identification task, F(1) = 0.94, p = .339. And like the human participants, ChatGPT's reported confidence on individual trials did predict performance: ChatGPT produced higher confidence ratings on correct trials (M = 91.38) than on incorrect trials (M = 87.33), D(1) = 8.74, p = .003.

ChatGPT also produced responses indicating high confidence after the task, typically estimating that it got all six text pairs right (M = 91%, median = 100%). It overestimated its performance by about 28 percentage points, and a paired t-test confirmed that ChatGPT's estimated performance was significantly higher than its actual performance, t(36) = 9.66, p < .001. As inflated as it was, estimated performance still had a small positive correlation with actual performance, Pearson's r = .35, t(35) = 2.21, p = .034.
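These calibration checks, a paired t-test for overestimation and a correlation between estimated and actual accuracy, look like this in code; the per-session values below are made up for the example, not the study data.

```python
from scipy.stats import ttest_rel, pearsonr

estimated = [1.00, 1.00, 0.83, 1.00, 0.83, 0.67, 1.00, 0.83]
actual    = [0.67, 0.83, 0.50, 0.67, 0.50, 0.50, 0.83, 0.67]

t, p = ttest_rel(estimated, actual)     # paired test: systematic overestimation?
r, p_r = pearsonr(estimated, actual)    # are estimates still informative?
print(f"paired t = {t:.2f} (p = {p:.3f}); Pearson r = {r:.2f} (p = {p_r:.3f})")
```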

Essay quality

The quality of the student essays, as indexed by their grade and writing quality score, did not significantly predict performance, Ds < 1.97, ps > .161.

AI Attitude Assessment

Concerns and hopes

ChatGPT usually failed to answer the question, "How concerned are you about ChatGPT having negative effects on education?" (from 0, not at all concerned, to 100, extremely concerned). Across the 40% of cases where ChatGPT did produce an answer, the average concern rating was 64.38, which did not differ significantly from instructors' or students' responses, F(2, 294) = 1.20, p = .304. ChatGPT produced answers much more often for the question, "How optimistic are you about ChatGPT having positive benefits for education?", answering 88% of the time. The average optimism rating produced by ChatGPT was 73.24, significantly higher than that of instructors (49.86) and students (54.08), ts > 4.33, ps < .001. ChatGPT answered only 55% of the time for the question about how many students would use ChatGPT to write an essay for them and submit it; when it declined to give an estimate, it typically generated explanations about its inability to predict human behavior and the fact that it does not condone cheating. When it did provide an estimate (M = 10%), it was far lower than that of instructors (57%) and students (54%), ts > 7.84, ps < .001.

Evaluation of ChatGPT uses

ChatGPT produced ratings of the ChatGPT use scenarios that on average were rank-ordered the same as the human ratings, with direct copying rated the most negatively and generating practice problems rated the most positively (see Fig. 2).

Fig. 2 Average ratings of ChatGPT uses, from −10 = really bad to +10 = really good. Human responses included for comparison (instructors in dark gray and students in light gray bars)

Compared to humans' ratings, the ratings produced by ChatGPT were significantly more positive in most scenarios, ts > 3.09, ps < .006, with two exceptions. There was no significant difference between groups in the "format" scenario (using ChatGPT to format an essay in another style such as APA), F(2, 318) = 2.46, p = .087. And in the "direct" scenario, ChatGPT tended to rate direct copying more negatively than students, t(319) = 4.08, p < .001, but not instructors, t(319) = 1.57, p = .261, perhaps because ratings from ChatGPT and instructors were already so close to the most negative possible rating.

General discussion

In 1950, Alan Turing said he hoped that one day machines would be able to compete with people in all intellectual fields (Turing 1950; see Köbis and Mossink 2021). Today, by many measures, the large language model ChatGPT appears to be getting close to achieving this end. In doing so, it is raising questions about the impact this AI and its successors will have on individuals and on the institutions that shape the societies in which we live. One important set of questions revolves around its use in higher education, which is the focus of the present research.

Empirical contributions

Detecting AI-generated text

Our central research question focused on whether instructors can identify ChatGPT-generated writing, since an inability to do so could threaten the capacity of institutions of higher learning to promote learning and assess competence. To address this question, we developed an AI Identification Test in which the goal was to distinguish between psychology essays written by college students on exams and essays generated by ChatGPT in response to the same prompts. We found that although college instructors performed substantially better than chance, they still found the assessment challenging, scoring an average of only 70%. This relatively poor performance suggests that college instructors have substantial difficulty detecting ChatGPT-generated writing. Interestingly, this was the same average performance that Waltzer et al. (2023a) observed among high school instructors (70%) on a similar test involving English literature essays, suggesting the results generalize across student populations and essay types. We also gave the assessment to college students (Study 1) and to ChatGPT (Study 2) for comparison. On average, students (60%) and ChatGPT (63%) performed even worse than instructors, although the difference reached statistical significance only when comparing students and instructors.

We found that instructors and students who went into the study believing they would be very good at distinguishing essays written by college students from essays generated by ChatGPT were in fact no better at doing so than participants who lacked such confidence. However, item-level confidence did predict performance: when participants rated their confidence after each specific pair (i.e., "How confident are you about this answer?"), they performed significantly better on items for which they reported higher confidence. The same patterns were observed when analyzing the confidence ratings from ChatGPT, though ChatGPT produced much higher confidence ratings than instructors or students, showing overconfidence where instructors and students showed underconfidence.

Attitudes toward AI in education

Instructors and students both thought it was very bad for students to turn in an assignment generated by ChatGPT as their own, and these ratings were especially negative for instructors. Overall, instructors and students looked similar to one another in their evaluations of other uses of ChatGPT in education. For example, both rated submitting an edited version of a ChatGPT-generated essay in a class as bad, but less bad than submitting an unedited version. Interestingly, the rank orderings in evaluations of ChatGPT uses were the same when the responses were generated by ChatGPT as when they were generated by instructors or students. However, ChatGPT produced more favorable ratings of several uses compared to instructors and students (e.g., using the AI tool to enhance the vocabulary in an essay). Overall, both instructors and students reported being about as optimistic as they were concerned about AI in education. Interestingly, ChatGPT produced responses indicative of much more optimism than both human groups of participants.

Many instructors commented on the challenges ChatGPT poses for educators. One noted that “… ChatGPT makes it harder for us to rely on homework assignments to help students to learn. It will also likely be much harder to rely on grading to signal how likely it is for a student to be good at a skill or how creative they are.” Some suggested possible solutions such as coupling writing with oral exams. Others suggested that they would appreciate guidance. For example, one said, “I have told students not to use it, but I feel like I should not be like that. I think some of my reluctance to allow usage comes from not having good guidelines.”

And like the instructors, some students suggested that they want guidance, such as knowing whether using ChatGPT to convert a document to MLA format would count as a violation of academic integrity. They also highlighted many of the same problems as instructors and noted beneficial ways students are finding to use it. One student noted: "I think ChatGPT definitely has the potential to be abused in an educational setting, but I think at its core it can be a very useful tool for students. For example, I've heard of one student giving ChatGPT a rubric for an assignment and asking it to grade their own essay based on the rubric in order to improve their writing on their own."

Theoretical contributions and practical implications

Our findings underscore the fact that AI chatbots can produce confident-sounding responses that are misleading (Chen et al. 2023; Goodwins 2022; Salvi et al. 2024). Interestingly, the underconfidence reported by instructors and students stands in contrast to findings that people often express overconfidence in their ability to detect AI (e.g., deepfake videos, Köbis et al. 2021). Although general confidence before the task did not predict performance, specific confidence on each item of the task did. Taken together, our findings are consistent with other work suggesting that confidence effects are context-dependent and can differ depending on whether they are assessed at the item level or more generally (Gigerenzer et al. 1991).

The fact that college instructors have substantial difficulty differentiating ChatGPT-generated writing from the writing of college students provides evidence that ChatGPT poses a significant threat to academic integrity. Ignoring this threat is likely to undermine central aspects of the mission of higher education, eroding the value of assessments and disincentivizing the kinds of cognitive engagement that promote deep learning (Chi and Wylie 2014). We are skeptical of answers that point to AI detection tools, given that such tools will always be imperfect and false accusations have the potential to cause serious harm (Dalalah and Dalalah 2023; Fowler 2023; Svrluga 2023). Rather, we think the solution will have to involve developing and disseminating best practices for creating assessments and incentivizing cognitive engagement in ways that help students learn to use AI as a problem-solving tool.

Limitations and future directions

Why instructors perform better than students at detecting AI-generated text is unclear. Although we did not find any effect of content-relevant expertise, it still may be the case that experience with evaluating student writing matters, and instructors presumably have more such experience. For example, one non-psychology instructor who got 100% of the pairs correct said, “Experience with grading lower division undergraduate papers indicates that students do not always fully answer the prompt, if the example text did not appear to meet all of the requirements of the prompt or did not provide sufficient information, I tended to assume an actual student wrote it.” To address this possibility, it will be important to compare adults who do have teaching experience with those who do not.

It is somewhat surprising that experience with ChatGPT did not affect the performance of instructors or students on the AI Identification Test. One contributing factor may be that people pick up on some false heuristics from reading the text it generates (see Jakesch et al. 2023 ). It is possible that giving people practice at distinguishing the different forms of writing with feedback could lead to better performance.

Why confidence was predictive of accuracy at the item level is still not clear. One possibility is that there are some specific and valid cues many people were using. One likely cue is grammar. We corrected grammatical errors in the student essays that were flagged by a standard spell checker and had obvious corrections, but we left ungrammatical writing that did not have obvious corrections (e.g., "That is being said, to be able to understand the concepts and materials being learned, and be able to produce comprehension."). Many instructors noted that they used grammatical errors as cues that writing was generated by students. As one instructor remarked, "Undergraduates often have slight errors in grammar and tense or plurality agreement, and I have heard the chat bot works very well as an editor." Similarly, another noted, "I looked for more complete, grammatical sentences. In my experience, Chat-GPT doesn't use fragment sentences and is grammatically correct. Students are more likely to use incomplete sentences or have grammatical errors." This raises methodological questions about the best comparison between AI and human writing; for example, it is unclear which grammatical mistakes should be corrected in student writing. It will also be of interest to examine the detectability of writing that is generated by AI and later edited by students, since many students will undoubtedly use AI in this way to complete their course assignments.

We also found that student-written essays that earned higher grades (based on the scoring rubric for their class exam) were harder for instructors to differentiate from ChatGPT writing. This does not appear to be a simple effect of writing quality, given that a separate writing-quality measure that did not account for content accuracy was not predictive. According to the class instructor, the higher-scoring essays tended to include more specific details, and this might have been what made them less distinguishable. Relatedly, the higher-scoring essays may have been harder to distinguish because they appeared to come from more competent-sounding writers, and it was clear from instructor comments that they generally viewed ChatGPT as highly competent.

The results of the present research validate concerns that have been raised about college instructors having difficulty distinguishing writing generated by ChatGPT from the writing of their students, and document that this is also true when students try to detect writing generated by ChatGPT. The results indicate that this issue is particularly pronounced when instructors evaluate high-scoring student essays. The results also indicate that ChatGPT itself performs no better than instructors at detecting ChatGPT-generated writing even though ChatGPT-reported confidence is much higher. These findings highlight the importance of examining current teaching and assessment practices and the potential challenges AI chatbots pose for academic integrity and ethics in education (Cotton et al. 2023 ; Eke 2023 ; Susnjak 2022 ). Further, the results show that both instructors and students have a mixture of apprehension and optimism about the use of AI in education, and that many are looking for guidance about how to ethically use it in ways that promote learning. Taken together, our findings underscore some of the challenges that need to be carefully navigated in order to minimize the risks and maximize the benefits of AI in education.

Data availability

Supplementary materials, including data, analysis, and survey items, are available on the Open Science Framework: https://osf.io/2c54a/.

Abbreviations

AI: Artificial Intelligence

CI: Confidence Interval

GLMM: Generalized Linear Mixed Model

GPT: Generative Pre-trained Transformer

SD: Standard Deviation

References

Al Darayseh A (2023) Acceptance of artificial intelligence in teaching science: Science teachers' perspective. Comput Educ Artif Intell 4:100132. https://doi.org/10.1016/j.caeai.2023.100132


Bertram Gallant T (2011) Creating the ethical academy. Routledge, New York


Biswas SS (2023) Potential use of Chat GPT in global warming. Ann Biomed Eng 51:1126–1127. https://doi.org/10.1007/s10439-023-03171-8

Borenstein J, Howard A (2021) Emerging challenges in AI and the need for AI ethics education. AI Ethics 1:61–65. https://doi.org/10.1007/s43681-020-00002-7

Bretag T (ed) (2016) Handbook of academic integrity. Springer

Bretag T, Harper R, Burton M, Ellis C, Newton P, Rozenberg P, van Haeringen K (2019) Contract cheating: a survey of Australian university students. Stud High Educ 44(11):1837–1856. https://doi.org/10.1080/03075079.2018.1462788

Brown TB, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D, Wu J, Winter C, Amodei D (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33. https://doi.org/10.48550/arxiv.2005.14165

Chen Y, Andiappan M, Jenkin T, Ovchinnikov A (2023) A manager and an AI walk into a bar: does ChatGPT make biased decisions like we do? SSRN 4380365. https://doi.org/10.2139/ssrn.4380365

Chi MTH, Wylie R (2014) The ICAP framework: linking cognitive engagement to active learning outcomes. Educational Psychol 49(4):219–243. https://doi.org/10.1080/00461520.2014.965823

Chocarro R, Cortiñas M, Marcos-Matás G (2023) Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Stud 49(2):295–313. https://doi.org/10.1080/03055698.2020.1850426

Cizek GJ (1999) Cheating on tests: how to do it, detect it, and prevent it. Routledge

R Core Team (2021) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

Cotton DRE, Cotton PA, Shipway JR (2023) Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innovations Educ Teach Int. https://doi.org/10.1080/14703297.2023.2190148

Curtis GJ, Clare J (2017) How prevalent is contract cheating and to what extent are students repeat offenders? J Acad Ethics 15:115–124. https://doi.org/10.1007/s10805-017-9278-x

Dalalah D, Dalalah OMA (2023) The false positives and false negatives of generative AI detection tools in education and academic research: the case of ChatGPT. Int J Manage Educ 21(2):100822. https://doi.org/10.1016/j.ijme.2023.100822

Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. ArXiv. https://doi.org/10.48550/arxiv.1810.04805

Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Wright R (2023) So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges, and implications of generative conversational AI for research, practice, and policy. Int J Inf Manag 71:102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eke DO (2023) ChatGPT and the rise of generative AI: threat to academic integrity? J Responsible Technol 13:100060. https://doi.org/10.1016/j.jrt.2023.100060

Erickson S, Heit E (2015) Metacognition and confidence: comparing math to other academic subjects. Front Psychol 6:742. https://doi.org/10.3389/fpsyg.2015.00742

Fischer I, Budescu DV (2005) When do those who know more also know more about how much they know? The development of confidence and performance in categorical decision tasks. Organ Behav Hum Decis Process 98:39–53. https://doi.org/10.1016/j.obhdp.2005.04.003

Fleming SM, Weil RS, Nagy Z, Dolan RJ, Rees G (2010) Relating introspective accuracy to individual differences in brain structure. Science 329:1541–1543. https://doi.org/10.1126/science.1191883

Fowler GA (2023, April 14) We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Gigerenzer G (1991) From tools to theories: a heuristic of discovery in cognitive psychology. Psychol Rev 98:254. https://doi.org/10.1037/0033-295X.98.2.254

Gigerenzer G, Hoffrage U, Kleinbölting H (1991) Probabilistic mental models: a brunswikian theory of confidence. Psychol Rev 98(4):506–528. https://doi.org/10.1037/0033-295X.98.4.506

Gilson A, Safranek C, Huang T, Socrates V, Chi L, Taylor RA, Chartash D (2022) How well does ChatGPT do when taking the medical licensing exams? The implications of large language models for medical education and knowledge assessment. MedRxiv. https://doi.org/10.1101/2022.12.23.22283901

Goodwins T (2022, December 12) ChatGPT has mastered the confidence trick, and that's a terrible look for AI. The Register. https://www.theregister.com/2022/12/12/chatgpt_has_mastered_the_confidence/

Gunser VE, Gottschling S, Brucker B, Richter S, Gerjets P (2021) Can users distinguish narrative texts written by an artificial intelligence writing tool from purely human text? In: Stephanidis C, Antona M, Ntoa S (eds) HCI International 2021, Communications in Computer and Information Science, vol 1419. Springer, pp 520–527. https://doi.org/10.1007/978-3-030-78635-9_67

Hartshorne H, May MA (1928) Studies in the nature of character: vol. I. studies in deceit. Macmillan, New York


Hox J (2010) Multilevel analysis: techniques and applications, 2nd edn. Routledge, New York, NY

Jakesch M, Hancock JT, Naaman M (2023) Human heuristics for AI-generated language are flawed. Proceedings of the National Academy of Sciences, 120 (11), e2208839120. https://doi.org/10.1073/pnas.2208839120

Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y (2017) Artificial intelligence in healthcare: past, present and future. Stroke Vascular Neurol 2(4):230–243. https://doi.org/10.1136/svn-2017-000101

Joo YJ, Park S, Lim E (2018) Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. J Educational Technol Soc 21(3):48–59. https://www.jstor.org/stable/26458506

Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Kasneci G (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individual Differences 103:102274. https://doi.org/10.1016/j.lindif.2023.102274

Katz DM, Bommarito MJ, Gao S, Arredondo P (2023) GPT-4 passes the bar exam. SSRN Electron J. https://doi.org/10.2139/ssrn.4389233

Köbis N, Mossink LD (2021) Artificial intelligence versus Maya Angelou: experimental evidence that people cannot differentiate AI-generated from human-written poetry. Comput Hum Behav 114:106553. https://doi.org/10.1016/j.chb.2020.106553

Köbis NC, Doležalová B, Soraperra I (2021) Fooled twice: people cannot detect deepfakes but think they can. iScience 24(11):103364. https://doi.org/10.1016/j.isci.2021.103364

Lo CK (2023) What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci 13(4):410. https://doi.org/10.3390/educsci13040410

McCabe DL, Butterfield KD, Treviño LK (2012) Cheating in college: why students do it and what educators can do about it. Johns Hopkins, Baltimore, MD

Mitchell A (2022, December 26) Professor catches student cheating with ChatGPT: 'I feel abject terror'. New York Post. https://nypost.com/2022/12/26/students-using-chatgpt-to-cheat-professor-warns

Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I (2019) Language models are unsupervised multitask learners. OpenAI https://openai.com/research/better-language-models

Rettinger DA, Bertram Gallant T (eds) (2022) Cheating academic integrity: lessons from 30 years of research. Jossey Bass

Rosenzweig-Ziff D (2023) New York City blocks use of the ChatGPT bot in its schools. Wash Post https://www.washingtonpost.com/education/2023/01/05/nyc-schools-ban-chatgpt/

Salvi F, Ribeiro MH, Gallotti R, West R (2024) On the conversational persuasiveness of large language models: a randomized controlled trial. ArXiv. https://doi.org/10.48550/arXiv.2403.14380

Shynkaruk JM, Thompson VA (2006) Confidence and accuracy in deductive reasoning. Mem Cognit 34(3):619–632. https://doi.org/10.3758/BF03193584

Stokel-Walker C (2022) AI bot ChatGPT writes smart essays — should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7

Susnjak T (2022) ChatGPT: The end of online exam integrity? ArXiv . https://arxiv.org/abs/2212.09292

Svrluga S (2023) Princeton student builds app to detect essays written by a popular AI bot. Wash Post https://www.washingtonpost.com/education/2023/01/12/gptzero-chatgpt-detector-ai/

Terwiesch C (2023) Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the Operations Management course. Mack Institute for Innovation Management at the Wharton School , University of Pennsylvania. https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP-1.24.pdf

Tlili A, Shehata B, Adarkwah MA, Bozkurt A, Hickey DT, Huang R, Agyemang B (2023) What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ 10:15. https://doi.org/10.1186/s40561-023-00237-x

Turing AM (1950) Computing machinery and intelligence. Mind 59(236):433–460

UCSD Academic Integrity Office (2023) GenAI, cheating and reporting to the AI office [Announcement]. https://adminrecords.ucsd.edu/Notices/2023/2023-5-17-1.html

Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30. https://doi.org/10.48550/arxiv.1706.03762

Waltzer T, Dahl A (2023) Why do students cheat? Perceptions, evaluations, and motivations. Ethics Behav 33(2):130–150. https://doi.org/10.1080/10508422.2022.2026775

Waltzer T, Cox RL, Heyman GD (2023a) Testing the ability of teachers and students to differentiate between essays generated by ChatGPT and high school students. Hum Behav Emerg Technol 2023:1923981. https://doi.org/10.1155/2023/1923981

Waltzer T, DeBernardi FC, Dahl A (2023b) Student and teacher views on cheating in high school: perceptions, evaluations, and decisions. J Res Adolescence 33(1):108–126. https://doi.org/10.1111/jora.12784

Weidinger L, Mellor J, Rauh M, Griffin C, Uesato J, Huang PS, Gabriel I (2021) Ethical and social risks of harm from language models. ArXiv. https://doi.org/10.48550/arxiv.2112.04359

Wixted JT, Wells GL (2017) The relationship between eyewitness confidence and identification accuracy: a new synthesis. Psychol Sci Public Interest 18(1):10–65. https://doi.org/10.1177/1529100616686966

Yeadon W, Inyang OO, Mizouri A, Peach A, Testrow C (2023) The death of the short-form physics essay in the coming AI revolution. Phys Educ 58:035027. https://doi.org/10.1088/1361-6552/acc5cf

Zhuo TY, Huang Y, Chen C, Xing Z (2023) Red teaming ChatGPT via jailbreaking: bias, robustness, reliability and toxicity. ArXiv. https://doi.org/10.48550/arxiv.2301.12867


Acknowledgements

We thank Daniel Chen and Riley L. Cox for assistance with study design, stimulus preparation, and pilot testing. We also thank Emma C. Miller for grading the essays and Brian J. Compton for comments on the manuscript.

This work was partly supported by a National Science Foundation Postdoctoral Fellowship for T. Waltzer (NSF SPRF-FR# 2104610).

Author information

Authors and affiliations

Department of Psychology, University of California San Diego, 9500 Gilman Drive, La Jolla, San Diego, CA, 92093-0109, USA

Tal Waltzer, Celeste Pilegard & Gail D. Heyman


Contributions

All authors collaborated in the conceptualization and design of the research. C. Pilegard facilitated recruitment and coding for real class assignments used in the study. T. Waltzer led data collection and analysis. G. Heyman and T. Waltzer wrote and revised the manuscript.

Corresponding author

Correspondence to Tal Waltzer.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Waltzer, T., Pilegard, C. & Heyman, G.D. Can you spot the bot? Identifying AI-generated writing in college essays. Int J Educ Integr 20, 11 (2024). https://doi.org/10.1007/s40979-024-00158-3


Received : 16 February 2024

Accepted : 11 June 2024

Published : 08 July 2024

DOI : https://doi.org/10.1007/s40979-024-00158-3

Keywords

  • Artificial intelligence
  • Academic integrity
  • Higher education

International Journal for Educational Integrity

ISSN: 1833-2595


Key stage 2 attainment: National headlines

Introduction

This publication provides the latest headline statistics on attainment in key stage 2 national curriculum assessments in England. 

These statistics cover attainment in the following assessments taken by pupils at the end of year 6, when most are age 11:

  • Reading test
  • Maths test
  • Grammar, punctuation and spelling test
  • Writing teacher assessment
  • Science teacher assessment

Attainment in 2024 is compared to 2023 and previous years where possible. There were no assessments in 2020 and 2021.

Headline facts and figures - 2023/24


Attainment in reading, writing and maths (combined)

In 2024, 61% of pupils reached the expected standard in all of reading, writing and maths, up from 60% in 2023. This is below 2019 attainment, when 65% of pupils met the standard. Attainment in all of reading, writing and maths is not directly comparable to some earlier years (2016 and 2017) because of changes to writing teacher assessment frameworks in 2018.

Data is not available for 2020 and 2021 as assessments were cancelled in these years due to the COVID-19 pandemic.

Attainment in individual subjects

In reading , 74% of pupils reached the expected standard in 2024, up from 73% in 2023. This figure has fluctuated between 72% and 75% since 2017. 

In writing teacher assessment, 72% of pupils reached the expected standard in 2024, up from 71% in 2023. Before the pandemic, in both 2018 and 2019, this figure was 78%.  Attainment in writing is not directly comparable to some earlier years (2016 and 2017) because of changes to writing teacher assessment frameworks in 2018. 

In maths , 73% of pupils reached the expected standard, unchanged since 2023. Before the pandemic, this figure increased from 70% to 79% between 2016 and 2019. 

Amongst reading, writing and maths, attainment was lowest in writing, as in 2023. Before the pandemic, attainment amongst these three subjects was lowest in reading, except in 2018, when reading and maths were equal lowest.

In grammar, punctuation and spelling, 72% of pupils reached the expected standard in 2024. This figure has remained unchanged since 2022, when it was the lowest since the new assessments were introduced in 2016.

In science teacher assessment, 81% of pupils reached the expected standard in 2024, up from 80% in 2023. Before the pandemic, in 2019, this figure was 83%. Attainment in science is not directly comparable to some earlier years (2016, 2017 and 2018) because of changes to science teacher assessment frameworks in 2019.

Average scaled scores in reading, maths, and grammar, punctuation and spelling

We use scaled scores to report the results of tests so we can make accurate comparisons of performance over time. Scaled scores range from 80 to 120. The total number of marks a pupil achieves in each test subject (raw score) is converted into a scaled score to ensure accurate comparisons can be made over time, even if the difficulty of the test itself varies.
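As a rough illustration of how such a conversion works, consider the sketch below. The table here is invented for the example; the real conversion tables are published separately for each test year and change as test difficulty varies.

```python
# Hypothetical slice of a raw-to-scaled conversion table
RAW_TO_SCALED = {52: 98, 53: 99, 54: 100, 55: 101, 56: 102}

def scaled_score(raw_marks: int) -> int:
    """Convert a raw mark to its scaled score via the year's lookup table."""
    return RAW_TO_SCALED[raw_marks]

raw = 54
print(scaled_score(raw))           # 100
print(scaled_score(raw) >= 100)    # True: meets the expected standard
```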

The average scaled scores in reading, maths, and grammar, punctuation and spelling tests have remained the same as 2023. 

In reading , the average scaled score is 105, unchanged since 2022. 

In maths , the average scaled score is 104, unchanged since 2022.

In grammar, punctuation and spelling , the average scaled score is 105, unchanged since 2022. 

The average scaled score is the mean scaled score of all pupils awarded a scaled score. It only includes pupils who took the test and achieved a scaled score. It gives us a measure of the typical performance of a pupil taking the tests. It is affected by the performance of pupils at all points in the range of scores. By contrast, the percentage of pupils achieving the expected standard focuses on the proportion of pupils above or below one particular score (100). As a consequence, changes in one measure may not be matched by changes in the other measure of the same size and direction.
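The toy example below shows how the two measures can come apart: across these two hypothetical years the mean scaled score is identical, but the percentage reaching the expected standard rises because one pupil crosses the 100 threshold.

```python
# Same mean, different share at the expected standard (>= 100)
year_1 = [92, 99, 100, 104, 110]
year_2 = [95, 100, 100, 102, 108]   # same total, different distribution

for label, scores in (("year 1", year_1), ("year 2", year_2)):
    mean = sum(scores) / len(scores)
    pct = 100 * sum(s >= 100 for s in scores) / len(scores)
    print(f"{label}: mean scaled score = {mean:.0f}, at standard = {pct:.0f}%")
```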

About these statistics

This publication provides headline statistics for attainment in key stage 2 national curriculum assessments for pupils in schools in England. It provides key figures at national level to help schools and parents put results in context.

1. The expected standard

Key stage 2 assessments tell us if pupils have met the expected standard in five subjects by the end of primary school:

  • reading
  • writing
  • maths
  • grammar, punctuation and spelling
  • science

Tests are used to assess pupils in reading, maths and grammar, punctuation and spelling. Teacher assessment is used to assess pupils in writing and science. In addition to the individual subjects we report on pupils who meet the expected standard in reading, writing and maths combined. 

In the tests, pupils meet the expected standard if they achieve a scaled score of 100 or more. The test frameworks provide performance descriptors for the typical characteristics of pupils working at the expected standard.

The teacher assessment frameworks include 'pupil can' statements. For example, 'the pupil can maintain legibility in joined handwriting when writing at speed'. To meet the expected standard, the teacher must judge there to be evidence that the pupil can meet all of the relevant statements.

DfE raised the expected standard in 2016, following the introduction of a new, more challenging national curriculum in 2014.

Pupils not meeting the expected standard

It is incorrect to say that pupils who have not met the expected standard in reading cannot read, or that those who have not met the expected standard in writing cannot write, and so on.

There is a spectrum of attainment among pupils who do not meet the expected standard, with some coming close and others further away.

A pupil who achieves below the expected standard will still be able to read. For example, they may be able to retrieve simple information from a text but be unable to make developed inferences about what they have read.

We also classify pupils as not meeting the expected standard when it has not been possible to assess their ability, for example, because of absence. This is the case for less than 1% of pupils.

2. Technical information

National curriculum assessment figures published here are based on test and teacher assessment data provided to the Department by the Standards and Testing Agency on 6 July 2024. 

This data contained all available marked key stage 2 tests and teacher assessments:

  • Reading test: 99.9%
  • Maths test: 99.9%
  • Grammar, punctuation and spelling test: 99.9%
  • Writing teacher assessment: 99.5%
  • Science teacher assessment: 99.5%

See the methodology for further detail.

Further information will be available

Further provisional statistics will be published on 10 September 2024 in the 'Key stage 2 attainment (provisional)’ publication. 

Revised figures will be published in the 'Key stage 2 attainment (revised)’ publication in December 2024. 

1. National level figures broken down by pupil and school characteristics

National level data with pupil characteristics breakdowns, including data broken down by gender, ethnicity, month of birth, free school meal eligibility, special educational needs provision, disadvantage and the disadvantage gap index, will be published in the provisional publication on 10 September.

School characteristics breakdowns, including school type, phase, cohort size and religious character, will also be published on 10 September.

2. Regional, local authority and local authority district level figures

Regional, local authority and local authority district level data - including data broken down by gender - will be published in the provisional publication on 10 September.

Regional, local authority and local authority district level data with further pupil characteristics breakdowns will be published in the revised publication in December.

3. Progress measures

Progress measures will not be published for the 2023/24 and 2024/25 academic years as KS2 pupils in these years did not have KS1 assessments due to the COVID-19 pandemic.

4. School level figures

School level data will be published on the Find School and College Performance data website in December.

Help and support

Methodology

Find out how and why we collect, process and publish these statistics.

  • Key stage 2 attainment

Accredited official statistics

These accredited official statistics have been independently reviewed by the Office for Statistics Regulation (OSR). They comply with the standards of trustworthiness, quality and value in the Code of Practice for Statistics . Accredited official statistics are called National Statistics in the Statistics and Registration Service Act 2007 .

Accreditation signifies their compliance with the authority's Code of Practice for Statistics which broadly means these statistics are:

  • managed impartially and objectively in the public interest
  • meet identified user needs
  • produced according to sound methods
  • well explained and readily accessible

Our statistical practice is regulated by the Office for Statistics Regulation (OSR).

OSR sets the standards of trustworthiness, quality and value in the Code of Practice for Statistics that all producers of official statistics should adhere to.

You are welcome to contact us directly with any comments about how we meet these standards. Alternatively, you can contact OSR by emailing [email protected] or via the OSR website .

If you have a specific enquiry about Key stage 2 attainment: National headlines statistics and data:

Primary Attainment Statistics

Press office

If you have a media enquiry:

Telephone: 020 7783 8300

Public enquiries

If you have a general enquiry about the Department for Education (DfE) or education:

Telephone: 037 0000 2288

Opening times: Monday to Friday from 9.30am to 5pm (excluding bank holidays)


Should I Take the SAT Essay? How to Decide


New SAT, SAT Essay


The SAT underwent some major revisions in 2016, and one of the biggest changes is that its previously required essay is now optional. This can be confusing for some students and parents. Should you take the essay? Will colleges require the essay or not? Will taking the essay make your application stronger?

Read on for answers to all these questions. This guide will explain what the SAT essay is, what the pros and cons of taking it are, and how you can make the best choice for you.

UPDATE: SAT Essay No Longer Offered


In January 2021, the College Board announced that after June 2021 it would no longer offer the Essay portion of the SAT, except at schools that opt in during SAT School Day testing. Unless your school is one of the small number that choose to offer the essay then, it is no longer possible to take the SAT Essay.

While most colleges had already made SAT Essay scores optional, this move by the College Board means that no colleges now require the SAT Essay. It will also likely lead to further changes in college applications, such as colleges not looking at essay scores at all for the SAT or ACT, as well as potentially requiring additional writing samples for placement.

What does the end of the SAT Essay mean for your college applications? Check out our article on the College Board's SAT Essay decision for everything you need to know.

What Is the SAT Essay?

The SAT essay is one of the sections of the SAT. The essay had been required since the test's inception, but the College Board has now decided to make it optional. This is similar to the ACT, whose essay has always been optional.

During this section, students are given 50 minutes to write an essay. The essay for the new SAT is very different from the essay on the previous version of the SAT. You can read all about the changes to the SAT here, but, as a brief overview, the essay will give you a passage by an author who is taking a stance on an issue. Your job will be to analyze how the author built that argument.

If you choose to take the essay, it will be its own section of the SAT, and the score you get on the essay will be separate from your score on the rest of the exam. Your main SAT score will be out of 1600, while your essay will be graded across three different categories: Reading, Analysis, and Writing. For each category, your essay will be given a score from 2 to 8.

The official practice tests released by the College Board include sample prompts, where you can read an entire prompt, including the passage you would need to analyze.


Do Colleges Require the SAT Essay Now That It's Optional?

So, the College Board has now made the essay an optional part of the SAT, but does that change how colleges view the essay (or if they even view it at all)? Kind of. Some schools that used the essays before no longer require them now that both the ACT and SAT have made the essays optional, but other schools continue to require the SAT essay.

Each school makes this decision individually, so there are no patterns to follow to try and guess who will require the essay and who won’t. Even top schools like the Ivy League are divided on whether to require the essay or not.  

This can make things confusing if you’re applying to college soon and don’t know if you should take the SAT essay or not. The following sections of this guide will explain the benefits and drawbacks of taking the essay and walk you through different scenarios so you can make an informed decision.

The #1 Consideration: Do Any of the Schools You're Interested in Require the Essay?

The absolute most important factor, the factor that matters more than anything else in the rest of this guide, is if any of the schools you’re applying to or thinking of applying to require the SAT essay.

The best way to get this information is to  Google “[school name] SAT essay requirement,” look directly on each school’s admission webpage, or   check out our list of the schools that require the SAT essay.

Find this information for every school you plan on applying to, even schools you’re not sure you want to apply to, but are considering. If even one school you’re interested in requires the SAT essay, then you should take it, regardless of any other factors.  There is no way to take just the SAT essay by itself, so if you take the SAT without the essay and then, later on, realize you need an essay score for a school you’re applying to, you will have to retake the entire test.

So, if a school you’re interested in requires the SAT essay, your choice is clear: take the essay when you take the SAT. However, what if the schools you’re interested in don’t require the essay? If that’s the case, you have some other factors to consider. Read on!

Benefits of Taking the SAT Essay

If none of the schools you’re thinking of applying to require the SAT essay, why would you want to take it? The two main reasons are explained below.

#1: You're Covered for All Schools

Taking the SAT essay means that, no matter which schools you end up applying to, you will absolutely have all their SAT requirements met. If you decide to apply to a new school that requires the SAT essay, that won’t be a problem because you’ll already have taken it.

If you already are absolutely certain about which schools you’re applying to and none of them require the essay, then this may not be a big deal to you. However, if you have a tentative list of schools, and you’ve been adding a school or removing a school from that list occasionally, you may want to be better safe than sorry and take the SAT essay, just in case.


Taking the SAT essay means you have all your bases covered, no matter which schools you end up applying to.

#2: A Good Score May Boost Your Application Slightly

While it’s highly unlikely that your SAT essay will be the deciding factor of your college application, there are some cases where it can give you a small leg up on the competition. This is the case if a school recommends, but doesn’t require the essay, and that school is particularly competitive.

Having a strong SAT essay score to submit may strengthen your application a bit, especially if you are trying to show strong English/writing skills.


Drawbacks to Taking the SAT Essay

There are also costs to taking the SAT essay; here are three of the most common:

#1: It's Another Section to Study For

If you choose to take the essay, that means you have an entire extra SAT section to study and prepare for. If you already feel like you have a ton of SAT prep to do or have doubts about staying motivated, adding on more work can make you feel stressed and end up hurting your scores in the other SAT sections.

#2: It Makes the Exam Longer

Taking the essay will, obviously, increase the total time you spend taking the SAT. You’re given 50 minutes to write the essay, and, including time needed for students not taking the essay to leave and things to get settled, that will add about an hour to the test, increasing your total SAT test time from about three hours to four hours.

If you struggle with keeping focused or staying on your A game during long exams (and, let’s be honest, it’s not hard to lose concentration after several hours of answering SAT questions), adding an additional hour of test time can reduce your test-taking endurance and make you feel tired and distracted during the essay, likely making it hard for you to get your best score.

#3: The Essay Costs Extra

Taking the SAT with the essay will also cost you a bit more money. Taking the SAT without the essay costs $46, but if you choose to take the essay, it costs $14 extra, raising the total cost of the SAT to $60.

However, if you're eligible for an SAT fee waiver, the waiver also applies to this section of the exam, so you still won't have to pay anything if you choose to take the essay.


Taking the essay likely means the cost of taking the SAT will be slightly higher for you.

Should You Take the SAT Essay? Five Scenarios to Help You Decide

Now you know what the SAT essay is and the pros and cons of taking it. So, what should you decide? Five scenarios are listed below; find the one that applies to your situation and follow the advice in order to make the best decision for you.

Scenario 1: You're planning on applying to at least one school that requires the essay

As mentioned above, if even one school you’re thinking about applying to requires the SAT essay, you should take it in order to avoid retaking the entire SAT again at a later date because you need an essay score.

Scenario 2: None of the schools you're applying to look at essay scores

If none of the schools you’re thinking about applying to even look at SAT essay scores, then you shouldn’t take it. Even if you get a perfect score, if the schools don’t consider essay scores, then taking it will have no benefits for you.

Scenario 3: The schools you're applying to don't require the SAT essay and aren't highly competitive

In this case, you don’t need to take the SAT essay, unless you’re trying to make up for weak writing skills in other parts of your application.

Scenario 4: The schools you're applying to recommend the SAT essay and are more competitive

For this scenario, you should take the SAT essay in order to give your application an extra boost, unless you really think you’d perform poorly or preparing for and taking the essay would cause your scores in other sections to decline.

Scenario 5: You aren't sure where you're going to apply yet

If you’re not sure which schools you want to apply to, then you should take the SAT essay, just to be safe. This way you’re covered no matter where you end up applying to college.


Because of the College Board’s recent decision to make the SAT essay optional, students are now faced with the decision of whether they should take it or not.  The best way to decide is to learn the essay policy for each of the colleges you're interested in applying to.  Some schools will still require the essay, some won’t even look at an applicant’s essay scores, and other schools don’t require the essay but will look at your score if you do take it.

Use these school policies to help decide whether you should take the essay. Remember, if you end up needing to submit an essay score, you will have to retake the entire SAT, so make sure you have accurate and up-to-date information for each school you are thinking of applying to.
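If it helps to see the scenarios in one place, here is a small sketch that encodes the advice above as a function. The policy labels are informal stand-ins for what you find on each school's admissions page, not official categories.

```python
# Encodes the article's five scenarios as a simple decision helper
def should_take_essay(policies, schools_undecided=False):
    if schools_undecided:
        return True                                   # Scenario 5: play it safe
    if "required" in policies:
        return True                                   # Scenario 1: any school requires it
    if all(p == "not considered" for p in policies):
        return False                                  # Scenario 2: no one looks at it
    # Scenarios 3-4: no school requires it; take it only where a
    # competitive school recommends it (or to offset weak writing).
    return "recommended" in policies

print(should_take_essay(["optional", "required"]))      # True
print(should_take_essay(["not considered"] * 3))        # False
print(should_take_essay(["recommended", "optional"]))   # True
```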

What's Next?

Have you decided to take the essay and want to know how to start studying? We have a step-by-step guide that explains how to write a great SAT essay.

Want more examples of sample prompts? Here are all of the real SAT essay prompts that have been released by the College Board.

Are you aiming for a perfect SAT essay score?  Check out our guide on how to get a perfect 8/8/8 on the SAT essay.

Disappointed with your scores? Want to improve your SAT score by 160 points? We've written a guide about the top 5 strategies you must use to have a shot at improving your score. Download it for free now.


Christine graduated from Michigan State University with degrees in Environmental Biology and Geography and received her Master's from Duke University. In high school she scored in the 99th percentile on the SAT and was named a National Merit Finalist. She has taught English and biology in several countries.



  18. Moving from Official SAT Practice to Official Digital SAT Prep on Khan

    Digital SAT Reading and Writing: One test for Reading and Writing: While the pencil-and-paper SAT tested reading and writing in separate test sections, the Digital SAT combines these topics. Shorter passages (and more of them): Instead of reading long passages and answering multiple questions on each passage, students taking the Digital SAT ...

  19. SAT Essay Scoring Rubric

    In the middle are "some" and "effective," scores of 3 and 4 respectively, and probably where most students score. More or less the same scale, with different words, also applies to analysis and writing. It's worth reiterating that SAT readers are held exactly to this scale and the specific breakdown under each score.

  20. SAT Essay Tips: 15 Ways to Improve Your Score

    A less effective essay might also try to discuss cheekbones, eyebrows, eyelashes, skin pores, chin clefts, and dimples as well. While all of these things are part of the face, it would be hard to get into detail about each of the parts in just 50 minutes. " The New Dance Craze ." ©2015-2016 by Samantha Lindsay.

  21. 2024 SAT Changes: What You Need To Know

    The SAT has shortened its Reading & Writing passages to 150 words or less; the ACT Reading and English passages can still go up to 800. A shorter pace-per-question. ACT Math gives you one minute per question; the SAT gives you about one and a half.

  22. Illinois schools switch from SAT to ACT for student assessment

    Illinois started using the SAT with Essay as the state assessment for 11 th grade students in spring 2017. Two years later, it began using the PSAT 8/9 exam for 9 th grade students and the PSAT 10 ...

  23. What's on the SAT

    The Math Section: Overview. Types of Math Tested. SAT Calculator Use. Student-Produced Responses. Find out what's going to be on each section of the SAT so you can prepare for test day.

  24. Can you spot the bot? Identifying AI-generated writing in college essays

    The assessment was administered in an online survey and included an AI Identification Test which presented pairs of essays: In each case, one was written by a college student during an in-class exam and the other was generated by ChatGPT. ... (mean: 7.93, SD: 2.29, range: 2-10). Two of the authors also scored the student essays for writing ...

  25. SAT Essay Rubric: Full Analysis and Writing Strategies

    The SAT essay rubric says that the best (that is, 4-scoring) essay uses " relevant, sufficient, and strategically chosen support for claim (s) or point (s) made. " This means you can't just stick to abstract reasoning like this: The author uses analogies to hammer home his point that hot dogs are not sandwiches.

  26. Key stage 2 attainment: National headlines, Academic year 2023/24

    In reading, 74% of pupils reached the expected standard in 2024, up from 73% in 2023. This figure has fluctuated between 72% and 75% since 2017. In writing teacher assessment, 72% of pupils reached the expected standard in 2024, up from 71% in 2023. Before the pandemic, in both 2018 and 2019, this figure was 78%. Attainment in writing is not directly comparable to some earlier years (2016 and ...

  27. Should I Take the SAT Essay? How to Decide

    If you choose to take the essay, it will be its own section of the SAT, and the score you get on the essay will be separate from your score on the rest of the exam. Your main SAT score will be out of 1600 while your essay will be graded across three different categories: Reading, Analysis, and Writing. For each area, your essay will be given a ...