Navigating the August LSAT: Understanding the Curve and its Implications
The August LSAT witnessed a transformation with the removal of the Logic Games section and its replacement by an additional Logical Reasoning section. This format change has spurred diverse reactions among test-takers, especially considering the LSAT's role as a standardized, pattern-based test with scores valid for five years. The Law School Admission Council (LSAC) has strived to maintain score comparability with the previous format, but concerns about the test's difficulty persist.
Perceived Difficulty and Test-Taker Experiences
A common sentiment among those who took the August LSAT was that a particular Reading Comprehension passage was unusually complex, demanding more in-depth analysis. Some students reported encountering tricky questions with deceptive answer choices in the Logical Reasoning section, and some reported scores lower than their practice test results, fueling concerns about the test's overall difficulty. Since practice tests are largely released LSATs, they should be representative of what students can expect to see on test day. So why the mismatch? First, consider self-selection bias. Test-takers whose final scores align with their expected scores are probably less likely to share their experience online. Furthermore, without the Logic Games section, test-takers find it more challenging to assess their performance during the LSAT. In Logic Games, a well-executed setup typically makes the correct answers clear, and any errors in setup or deductions quickly become apparent as you struggle with the questions. In contrast, the Logical Reasoning and Reading Comprehension sections lack this direct feedback loop.
The Impact of Logic Games Removal
When the LSAT included Logic Games, test-takers often saw significant score improvements because they started with minimal prior experience in that section. The reading style required for the Reading Comprehension and Logical Reasoning sections, by contrast, differs significantly from everyday reading habits, so progress there resembles strength training: the initial gains come quickly, but each additional gain takes exponentially more effort and discipline, coupled with a detailed plan and some expert guidance. Registrations for the 2024 August and September LSAT test dates were up significantly from the same time last year. It's possible many test-takers were waiting on the sidelines until Logic Games were removed, perhaps believing the LSAT would be easier without them.
Changes to Reporting of LSAT scores and repeaters
The Law School Admission Council, which administers the LSAT, has long maintained that the most accurate predictor of law school grade point average (LGPA) is the average of all LSAT scores. If a prospective student takes the LSAT twice, scoring a 164 and a 168, the most accurate way of evaluating that student is to conclude that the student received a 166. And schools reported the average score to the American Bar Association.
But in 2006, the ABA changed what schools need to report. It allowed schools to report the highest LSAT score, not the average of LSAT scores. For this student, then, the reported score is no longer a 166 but a 168, even though the higher score is less accurate in terms of the LSAT's predictive value.
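To make the arithmetic concrete, here is a quick sketch using the hypothetical two-test student from above:

```python
# Compare the two ways of summarizing a repeat test-taker's scores:
# LSAC's preferred predictor (the average) vs. what schools have been
# allowed to report since 2006 (the highest).
scores = [164, 168]

average_score = sum(scores) / len(scores)  # the more predictive measure
highest_score = max(scores)                # the reported measure post-2006

print(average_score)  # 166.0
print(highest_score)  # 168
```

The two-point gap between the reported number and the predictive number is exactly the distortion the post describes.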
This change incentivizes repeat test-takers. Back in 2010, about two-thirds of test-takers took the exam only once. More would be incentivized to repeat the exam, but there was an upper limit: LSAC administered the exam only four times a year, and it permitted test-takers to repeat only up to three times in a two-year period.
But in 2017, LSAC lifted that ban and allowed unlimited retakes. It has since brought that number down to five in a "reportable score period" (around six years) and seven overall, but that is still far more than three. It now offers around eight administrations of the LSAT each year, up from four.
Granted, the total number of people inclined to take the exam five times is quite small. But repeaters continue to climb: in 2023-2024, repeat test-takers accounted for a majority of LSATs administered.
Additionally, about 20% of test-takers do not have a “reportable score.” That means, under an option from LSAC developed over the last two decades, you can “peek” at your score and decide to cancel after learning about the score, preventing schools from seeing your score. (This is mostly irrational behavior from prospective students because schools will still report the highest score received, but it further muddies the waters for identifying the difference between the highest score and the average score.)
So, if you are a law school inclined to look at the LSAT as a predictive tool of student success, you would want to use the average score. But now that the ABA permits reporting the highest score-and because the USNWR rankings likewise use that score-all of the incentives are to rely on the highest score in admissions decisions, even if it's less accurate at predicting student success. True, gains among repeaters tend to be modest, about 2 points for a typical test-taker. But, as I'll explain, these changes have cumulative effects.
Importantly, when LSAC measures the validity of the LSAT, it still measures it using the average score. Law schools, however, typically are using the higher score-and therefore opting to use a less valid measure of the LSAT.
(Likewise, the LSAT is more valid at predicting success when combined with UGPA in an "index score," but most law schools do not use it this way either, again choosing a less valid way of relying upon it-more on that later.)
The LSAT as a Predictor of Law School Success
The LSAT is an important predictor of law school success. It does a very good job of predicting who will perform well in law school. The higher your LSAT score, the higher your law school grades are likely to be. It is not perfectly correlated, but it is well correlated. When combined with your undergraduate grade point average (UGPA)-yes, regardless of your major, grade inflation, school disparities, and all that-it can even further predict law school success.
But the LSAT has changed over the years. As has its weight in the USNWR rankings. Many law school admissions practices, however, look at the LSAT like it’s 2005-like the test scores resemble what they did back then, and like the USNWR rankings care about them like they did back then. A lot has changed in a generation.
The Impact of Extra Time on Test Predictiveness
Any mention of accommodations in test-taking is a fraught topic. But I want to set aside whatever preferences you may have about the relationship between accommodations and test-taking. I want to point out what it means-and specifically, extra time on the exam-for using the LSAT as a predictive tool.
Data from LSAC shows that accommodated test-takers receive higher scores than non-accommodated test-takers, by around four to five points. Most accommodations translate into LSAT scores that still predict law school success-for instance, a visually impaired person receiving large-print materials will receive a score that fairly accurately predicts law school success. There is an exception, however, for time-related accommodations: LSAT scores tend to overpredict law school success when extra time is granted. Requests for additional time have increased dramatically over the years, from around 6,000 granted requests in 2018-2019 to around 15,000 in 2022-2023.
My point here is certainly not to debate accommodations in standardized testing, but it is to point out that additional time on the LSAT makes it less predictive, and there has been a dramatic increase in such tests. In 2014, the Department of Justice entered into a consent decree with LSAC to stop “flagging” such LSAT scores. So there remains a cohort of LSAT scores, increasing by the year, that are less predictive of law school success.
The Change in Test Composition
In 1998, a technical study from LSAC looked at each of the three components of the LSAT-the analytical reasoning (sometimes called “logic games”), logical reasoning, and reading comprehension. The LSAT overall predicted first-year LGPA. And each individual component contributed to that overall score. But in 2019, LSAC entered into a consent decree on a challenge that the analytical reasoning section ran afoul of federal and state accommodations laws. And in 2023 it announced the end of that section.
I have not yet seen any subsequent technical report from LSAC (perhaps it's out there) explaining how it concluded that a test without Logic Games could be just as valid a predictive measure, particularly given its 1998 report. But anecdotes, like this one in the Wall Street Journal, certainly suggest some material changes:
Tyla Evans had almost abandoned her law-school ambitions after struggling with the logic games section. "When I found out they were changing the test, I was ecstatic," said Evans, a 2023 George Washington University graduate. Her LSAT score jumped 15 to 20 points on the revised test, enabling a second round of applications. So far, she has received two sizable financial-aid offers and is waiting to hear from a few more schools.

The LSAT has fundamentally changed in its content. That suggests scores today are not truly comparable to scores in previous eras, and that such scores will be less predictive of success.
Opting out of the Test
One more slight confounding variable, although its effect on LSAT scores is more indirect, so I'll mention it only briefly. More students are attending law school without having taken the LSAT. This comes from a variety of sources: growing cohorts of students who come directly from the law school's parent university without an LSAT score, alternative admissions tests like the GRE, and so on. Publicly disclosed LSAT quartiles, then, conceal a cohort of students who have opted out of taking the LSAT. It is hard to know precisely how this affects the overall composition of LSAT test-takers, but it is one more small detail to note.
USNWR and LSAT scores
A run-of-the-mill admissions office should care about LSAT scores as a predictor of law school success. Several developments in the last generation have diluted the power of the LSAT as a predictor-reliance on the highest score for repeat test-takers, unlimited retakes, additional time for some test-takers, a change in content, and a change in the cohort taking the exam. Nevertheless, despite all those changes, it is still a good predictor, or better than alternatives.
But this admissions office probably cares a great deal about something else too-law school rankings. Rightly or wrongly-again, not a debate for this post-law school rankings, particularly the USNWR rankings, exert a great deal of influence on how prospective law students perceive schools. Even marginal changes can significantly influence admissions decisions and financial aid costs, not to mention their effect on students, alumni, faculty, and the university, since the rankings serve (again, rightly or wrongly) as a barometer of the law school's trajectory and overall health.
But USNWR does two important things with LSAT scores, one very public and recent, one more subtle and longstanding but often misunderstood.
First, USNWR has long used the median LSAT of an incoming class as a benchmark of the overall quality of the class (a decision long known to distort how law school admissions offices conduct their admissions practices as the “bottom” credentials of the incoming class looks very different from the “top”-more on that in a bit). But it changed its formula recently to focus more on outputs instead of inputs. That meant the weight it gave to the median LSAT score dropped from 11.25% of the rankings to just 5% of the rankings.
A related, and subtle, change is the significant weight now given to employment outcomes: 33% of the rankings. That is not only a large category but also one with a huge spread from top to bottom, which has the effect of diminishing the value of the other categories. In short, LSAT scores matter far less in the new rankings than they did before.
Second, USNWR has long used the percentile equivalents of LSAT scores, not the raw scores themselves. This can deceive the outsider, because there is such an emphasis on the raw LSAT score. But as scores get higher, each additional point reflects a narrower and narrower improvement in the rankings' perception of the incoming class.
LSAT scores look like a bell curve. Most scores fall in the middle of the distribution, around 150 to 152. At each end of the curve, increasingly small numbers of students receive each score. So the move from 160 to 161 reflects a more significant improvement, relative to others, than the move from 170 to 171.
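The bell-curve intuition can be sketched numerically. The normal parameters below (mean 151, standard deviation 10) are a rough illustrative approximation of the LSAT distribution, not official LSAC figures:

```python
# Illustrate why a one-point gain near the middle of the bell curve
# leapfrogs more test-takers than the same gain near the top.
# Parameters are an assumed approximation, not LSAC data.
from statistics import NormalDist

lsat = NormalDist(mu=151, sigma=10)

def percentile_gain(score: int) -> float:
    """Percentage points of test-takers passed by moving from score to score + 1."""
    return 100 * (lsat.cdf(score + 1) - lsat.cdf(score))

print(f"160 -> 161: {percentile_gain(160):.2f} pp")  # roughly 2.5 pp
print(f"170 -> 171: {percentile_gain(170):.2f} pp")  # roughly 0.6 pp
```

Under this toy distribution, a point gained at 160 passes roughly four times as many test-takers as a point gained at 170, which is the shape of the effect the real LSAC percentile table shows.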
A quick chart illustrates the point using a recent LSAC percentile equivalent table. The gap from 173 to 172 is small: 0.65 percentage points. From 168 to 167, it is larger: 1.55 percentage points. From 164 to 163, larger still: 2.5 percentage points. And from 158 to 157, 3.42 percentage points.
This plays out the same way in the USNWR rankings (roughly, subject to, among other things, the fact that USNWR includes GRE and other scores in some schools’ rankings percentile equivalencies). Using my modeled rankings measure, the scores are scaled against other schools, and in their unweighted terms, the gap between a 172 and 173 is around 0.032 raw points; 168 to 167, 0.077; 164 to 163, 0.123; and 158 to 157, 0.142. (It isn’t so dramatic between 164-163 and 158-157 here, because most law schools cluster higher on the LSAT curve than the overall LSAT curve itself.)
These unweighted numbers are meant to give you some comparisons of raw points compared against each other. But recall that these figures only receive 5% weight in the overall rankings. So that 0.032 actually converts to 0.0016-a fraction of a hundredth of a point, a rounding error in many circumstances. Without getting into all of the details of how USNWR otherwise creates a rankings formula, it is, quite clearly, a very, very small number.
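To see just how small these figures get, here is a quick sketch multiplying the unweighted gaps quoted above by the 5% weight USNWR currently gives the median LSAT (the gap values are this post's modeled figures, not official USNWR data):

```python
# Convert modeled unweighted rankings gaps into their weighted
# contribution under the current 5% LSAT weight.
LSAT_WEIGHT = 0.05  # current USNWR weight for median LSAT

# Modeled unweighted gaps between adjacent median LSAT scores (from the post)
unweighted_gaps = {
    "173 vs 172": 0.032,
    "168 vs 167": 0.077,
    "164 vs 163": 0.123,
    "158 vs 157": 0.142,
}

for pair, gap in unweighted_gaps.items():
    weighted = gap * LSAT_WEIGHT
    print(f"{pair}: unweighted {gap:.3f} -> weighted {weighted:.4f}")
```

Even the largest gap, at the bottom of the table, contributes well under a hundredth of a rankings point once weighted.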
What this means, then, is that law schools perform “better” in the rankings each time the median LSAT of their incoming class increases, but the marginal value of each additional point increase diminishes.
How often do changes to LSAT median scores alter a school's ranking?
So, the big question is this: if many schools (and most schools pursuing a more "elite" ranking) care about median LSAT scores, what tangible difference would a change in a median LSAT score make to a school's ranking?
This should be a very straightforward question for most law schools. That is, most law schools should build a very basic model and then run a cost-benefit analysis: it takes a lot of effort to pursue a median LSAT score of X (including skewed admissions decisions, financial aid costs, etc.). Is it worth it?
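As a purely illustrative sketch, such a model can be as simple as setting the weighted rankings gain against the dollars spent to buy it. Every number below is a hypothetical placeholder except the rankings arithmetic, which reuses the modeled 173-vs-172 gap and 5% weight discussed in this post:

```python
# Toy cost-benefit model for chasing one extra median LSAT point.
# Dollar figures are hypothetical placeholders for a school's own inputs.
extra_aid_cost = 1_500_000    # hypothetical added scholarship spending
demand_value = 400_000        # hypothetical value of added applicant demand
rankings_gain = 0.032 * 0.05  # modeled weighted rankings points at the top of the curve

net_dollars = demand_value - extra_aid_cost
print(f"rankings gain: {rankings_gain:.4f} points")
print(f"net dollars: {net_dollars:,}")
```

The point of even a toy model like this is to force the comparison: a concrete dollar outlay against a rankings gain measured in thousandths of a point.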
For years, the answer has been unhesitatingly and unflinchingly "yes" at many law schools. Or, perhaps, a begrudging and inevitable "yes." When the LSAT was a significant part of the rankings, it could make a big difference, and schools saw those numbers.
But today, the value of the LSAT is dramatically lower than it was in the past. Schools should be reassessing whether this unflinching commitment to LSAT scores is worth it. But it turns out, schools are not reassessing-they are sticking to their old practices.
Median chasing in 2025
Below are a few charts from some top 30-ish schools from LSD.law, which continues a longstanding practice of creating a site for law school applicants to submit their individualized admissions decisions and allow them to be compared in the aggregate. I slightly cropped the charts to focus on the heartland of reported acceptances as of March 21, 2025.
The charts below show the UGPA and LSAT of accepted students. The schools don't particularly matter here. The point is that the admissions practices-or "strategy"-are essentially identical at every school, with some twists at various institutions (e.g., how they handle UGPA). There is a sharp drop-off to the left of some line that represents a school's targeted "median" LSAT. Most accepted students to the left of that score are above the median UGPA line.
In short, just like schools back in 2013 and earlier, schools are “chasing” the median. They are ignoring high-caliber students with numbers that sit just below their medians in exchange for students who can help boost one side or the other of their medians (often, most starkly, as can be seen, the LSAT medians).
Do LSAT median changes affect the rankings?
This behavior might be rational if the LSAT mattered for rankings purposes as it did in the past. But it doesn't; its value has been significantly reduced. Additionally, schools ought to understand that marginal increases in LSAT score (which at the top end are extremely costly: there are fewer of those scores, and more competition for them, so financial aid packages can become quite expensive) are even less valuable the higher the score. That said, a school might simply want to get better, and the better the LSAT score, the more likely it is to help raise the ranking.
That’s true, but how likely is it to happen?
I modeled the USNWR rankings and ran some counterfactuals to assess whether schools in a particular LSAT band would drop in the rankings. I ran estimates for the 43 schools with medi…
tags: #august #LSAT #curve #analysis

