Methodology
How Syllogic classifies LSAT questions
Most analytics tools stop at section-level scores. Syllogic classifies every question along four independent axes—cognitive operation, difficulty, section position, and cross-cutting features—then reports performance at their intersections. The result is a diagnostic that tells a tutor exactly which conditions produce errors, not just which section was weak.
1. A four-axis taxonomy
Each question belongs to one of three reasoning modes—propositional, inductive, or structural—and one of 39 sublabel types within those modes. That classification is then augmented with a 1–4 difficulty rating, section position, and zero or more cross-cutting tags (conditional logic, viewpoint tracking, causal reasoning, etc.).
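The four-axis classification can be pictured as a small per-question record. This is a minimal sketch under stated assumptions: the field and class names are illustrative, not Syllogic's actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical record for one classified question. Field names are
# assumptions for illustration, not Syllogic's data model.
MODES = {"propositional", "inductive", "structural"}

@dataclass(frozen=True)
class QuestionRecord:
    mode: str                      # one of the three reasoning modes
    sublabel: str                  # one of the 39 sublabel types
    difficulty: int                # 1-4 rating
    position: int                  # ordinal position within the section
    tags: frozenset = frozenset()  # zero or more cross-cutting tags

    def __post_init__(self):
        # Basic sanity checks on the taxonomy's value ranges.
        assert self.mode in MODES
        assert 1 <= self.difficulty <= 4

q = QuestionRecord("inductive", "Weaken", 3, 18, frozenset({"conditional-logic"}))
```

Keeping tags as a set rather than a single category is what lets one question sit on several axes at once.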
The taxonomy is calibrated against 59 PrepTest editions. None of this data is published by LSAC; it is independently derived from answer-key metadata and question-type analysis.
2. Position is a variable, not noise
A Weaken question at position 8 and one at position 22 carry different cognitive loads even when the underlying reasoning task is similar. Late-section items layer time pressure, fatigue, and passage complexity on top of the core skill. Syllogic tracks position as a first-class dimension so tutors can distinguish between a student who cannot weaken arguments and one who can weaken arguments perfectly well—until position 18.
3. Intersections over averages
The analytics report on sublabel × position × difficulty × tag intersections. A student might be 80% on Weaken overall but 40% on Weaken + conditional logic + positions 15–25. The aggregate hides the breakdown. Syllogic surfaces it.
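Computing accuracy at one of these intersections is straightforward once attempts carry the axis values. A sketch, assuming a hypothetical list of attempt dicts (the field names are my assumptions, not Syllogic's):

```python
# Illustrative only: accuracy at a sublabel x tag x position-band
# intersection, computed from per-attempt records.
def intersection_accuracy(attempts, sublabel, tag, pos_range):
    lo, hi = pos_range
    hits = total = 0
    for a in attempts:
        if (a["sublabel"] == sublabel
                and tag in a["tags"]
                and lo <= a["position"] <= hi):
            total += 1
            hits += a["correct"]  # 1 if answered correctly, else 0
    return hits / total if total else None  # None: no data at this cell

attempts = [
    {"sublabel": "Weaken", "tags": {"conditional-logic"}, "position": 18, "correct": 0},
    {"sublabel": "Weaken", "tags": {"conditional-logic"}, "position": 22, "correct": 1},
    {"sublabel": "Weaken", "tags": set(),                 "position": 4,  "correct": 1},
]
acc = intersection_accuracy(attempts, "Weaken", "conditional-logic", (15, 25))  # → 0.5
```

Returning `None` for an empty cell matters in practice: many intersections are sparse, and a missing cell is different from a 0% cell.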
Drill generation works the same way. Tutors build practice sets by filtering on any combination of axes: “10 unattempted Weaken questions, positions 13–24, difficulty 2–3, tagged conditional-logic.” The output is a problem set with a specific diagnostic rationale, not a grab bag.
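A drill filter of that kind is just a conjunction of per-axis predicates over a question pool. The sketch below mirrors the quoted example; the pool format and keyword names are assumptions for illustration.

```python
# Hypothetical axis-based drill builder; not Syllogic's actual interface.
def build_drill(pool, n, sublabel=None, positions=None,
                difficulty=None, tags=None, unattempted_only=False):
    def ok(q):
        if sublabel and q["sublabel"] != sublabel:
            return False
        if positions and not positions[0] <= q["position"] <= positions[1]:
            return False
        if difficulty and not difficulty[0] <= q["difficulty"] <= difficulty[1]:
            return False
        if tags and not tags <= q["tags"]:  # required tags must be a subset
            return False
        if unattempted_only and q["attempted"]:
            return False
        return True
    return [q for q in pool if ok(q)][:n]

pool = [
    {"sublabel": "Weaken", "position": 15, "difficulty": 2,
     "tags": {"conditional-logic"}, "attempted": False},
    {"sublabel": "Weaken", "position": 5, "difficulty": 2,
     "tags": {"conditional-logic"}, "attempted": False},   # fails position filter
    {"sublabel": "Weaken", "position": 20, "difficulty": 3,
     "tags": {"conditional-logic"}, "attempted": True},    # fails unattempted filter
]
drill = build_drill(pool, 10, sublabel="Weaken", positions=(13, 24),
                    difficulty=(2, 3), tags={"conditional-logic"},
                    unattempted_only=True)  # → one question
```

Every filter is optional, so the same function covers both a broad review set and a narrow diagnostic drill.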
4. Tutor tags as a parallel axis
The platform taxonomy covers cognitive structure. It does not cover pedagogy. Tutors add their own tags—“day-1-homework,” “conditional-mastery,” “review-before-test”—and those tags participate in analytics and drill generation alongside the built-in axes. A tutor who organizes around skill progressions sees their categories in the same breakdowns as the platform taxonomy, not in a separate silo.
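One way to make tutor tags a genuinely parallel axis is to union them with the platform tags before any filtering or aggregation, so a single tag predicate covers both. A minimal sketch, assuming a hypothetical per-question-id map of tutor tags:

```python
# Illustrative: tutor tags merged with platform tags into one tag axis.
# The data shapes here are assumptions, not Syllogic's schema.
def all_tags(question, tutor_tags_by_id):
    return question["tags"] | tutor_tags_by_id.get(question["id"], set())

question = {"id": "q1", "tags": {"conditional-logic"}}
tutor_tags_by_id = {"q1": {"day-1-homework", "conditional-mastery"}}
merged = all_tags(question, tutor_tags_by_id)
```

Because the merge happens before analytics run, a filter like "tagged conditional-mastery" behaves identically to a built-in tag filter, which is what keeps tutor categories out of a separate silo.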