
Education Mirrored by Exams: What We Really See in the Results

The results of the NMT are always a topic of public discussion. Yet most often, attention centers on just two questions: how many students scored the maximum 200 points (which we celebrate), and how bleak the state of science and mathematics education remains (which we lament). In fact, the test data offers far more grounds for debate and serious conclusions.

The NMT results reveal not only which students gained admission to universities but also what is—or is not—being taught in schools. Yes, it is only a snapshot of the “output,” the knowledge of graduates, and we do not know to what extent the scores reflect the work of tutors, lucky guesses, or stress. Still, the trends are evident. The question is whether we want to face them and, if we do, whether we will act on them. Here are several important trends.

The education divide: a thin layer of strong students

I am not speaking here about the knowledge gap between children from urban and rural areas or between prestigious lyceums and ordinary schools in residential districts—all of that, as always, is part of the picture. Tests in many subjects clearly show a thin layer of strong students (in some subjects, almost transparent), and a much thicker layer of those who produced disappointing results. In other words, mass education is of poor quality. This is most striking in the natural sciences and mathematics, and it has been the case for years. The report of the Ukrainian Center for Educational Quality Assessment (UCEQA) contains a detailed analysis of one test session, which provides a vivid illustration.


Let’s look at mathematics and physics. Imagine the 100–200 point scale as a ladder, each rung representing 10 points. Now let’s use a small human figure to indicate the number of participants whose work was analyzed: one figure corresponds to roughly 10 percent of the participants. The test results would then look like this:

[Chart: distribution of mathematics NMT scores on the 100–200 scale. Source: ZN.UA]

[Chart: distribution of physics NMT scores on the 100–200 scale. Source: ZN.UA]

The NMT scores in mathematics reveal a stark contrast. On the one hand, one in five participants failed to score above 108 points, and half did not surpass 132 points. On the other hand, mathematics produced the most 200-point scorers: 1,177 students. For comparison, there were only 659 such scorers in the Ukrainian language test.

This is a systemic problem observed for years, and not only in the NMT but also in the External Independent Testing (EIT, the national university entrance exam used before Russia’s full-scale invasion in 2022): we see a high level of preparation in schools that adhere to educational traditions, cooperate with universities and involve students in academic competitions, while mainstream schools show a complete inability to provide even a basic level of mathematical literacy.

The gap between strong and weak participants is also evident in physics: one in five scored 108 points or less, and half were unable to rise above 132 points. And this is despite the fact that physics, unlike mathematics, is an elective subject—meaning that those who chose it did so consciously and with preparation (which may explain why the gap is slightly smaller). But in both subjects, the majority clustered at the lower end of the scale.

When such results in natural sciences and mathematics are discussed, people usually point to the humanities, claiming that “things are more or less fine there.” Indeed, the NMT results in the Ukrainian language look better:

[Chart: distribution of Ukrainian language NMT scores on the 100–200 scale. Source: ZN.UA]

But this should not reassure us—the overall level of knowledge here is also a long way from okay, and here’s why.

The Ukrainian language test was the easiest—and this is not a matter of opinion but a fact confirmed by psychometric indicators. The test contained many simple tasks that allowed weaker participants to scrape through and boost their scores. It included neither problem-type tasks, as in physics and mathematics, nor a written essay, as in earlier language tests. Of the 30 tasks, 25 were multiple-choice. To be fair, there were also more challenging tasks, which allowed stronger participants to distinguish themselves. Still, the test masked the truly weak students, so we cannot see the real gap between them and the strong ones.


When a test is filled with simple tasks, it means only one thing: you can train for it, and deep knowledge of the subject is not required. Here is an example: the Ukrainian language section of the NMT included tasks from previous years, and students performed them flawlessly. But when a new task appeared that tested the same rule, success rates dropped immediately.

Another task required inserting the correct letter into words; the test even hinted at where in the word it belonged. This is the level of a routine school quiz. Unsurprisingly, 77 percent of participants succeeded. But as soon as the format changed, performance declined.

Yet another task asked students to read several excerpts and identify the one without mistakes. Only 34.5 percent succeeded. Why? Because this required not inserting letters mechanically but applying the rule independently. Wherever language ceases to be a set of rules and demands comprehensive understanding, problems arise. Should we really celebrate the 77 percent success rate in this context? All this shows that schools and tutors mostly train students to recognize, not to think.

Overall, the NMT tests revealed common problems across subjects. Most students have fragmented knowledge and lack basic skills. They find short tasks—sets of words or formulas—easier than working with texts (since reading books is also a problem). Students may know rules, terms or formulas (and even that is not always the case), but they struggle to understand what lies behind them or how they work.

Here is a telling example. In history, 50–80 percent of participants usually recognized a landmark or biography with ease, and 93 percent had no problem matching a term with its definition. But only 20 percent managed to establish a sequence of events, and nearly 70 percent failed to name the main goal of the Resistance movement during World War II. Students are at a loss when required to think not in clichés but in chronology and logic, when events must be placed in context, especially a global one.

Taking shots at mathematics

Among the current NMT results there is a revealing, almost symptomatic fact: six participants scored the highest marks in the Ukrainian language—190–200 points—but failed mathematics. There were five such cases in Ukrainian history: history high-achievers received zero points in mathematics. Such results, excellent in one subject and a total failure in another, are typical of those who believe they do not need mathematics but are forced to take it as a compulsory subject. They channeled all their energy into preparing for the Ukrainian language test. And this, of course, is far from true learning. An educated person cannot be so mathematically illiterate as to fail to answer correctly even five of 22 questions (the pass/fail threshold), with a reference book of formulas in hand. One could even have left the problems untouched. Incidentally, according to the UCEQA, one-third of those who failed the Ukrainian language also failed mathematics. And this has been the case for years.

There is a heated debate about whether mathematics should remain compulsory for the NMT. Parents often complain: “Because of your mathematics, my child won’t be able to become an actor (musician, artist, designer...).” According to my information, this pressure comes precisely from creative universities, which want to bypass either the NMT entirely or at least its mathematics component.


Yet compulsory mathematics is not unusual. In the US, for example, the EIT’s counterpart—the SAT, which most universities and colleges have traditionally required for admission—consists of two main sections: reading and writing, and mathematics. The test is also recognized in some other English-speaking countries. Poland’s equivalent, the Matura, likewise requires both Polish and mathematics.

I am convinced that the division in schools into “humanities” and “math” students is artificial. It benefits both teachers and students (their parents too). It gives an excuse for problems: a student does not know mathematics not because they were poorly taught or unwilling to learn but because they were born that way. It is an informal exemption from learning. Eric Berne, in Games People Play, describes a game called Wooden Leg. Its essence is that a person cites a supposed flaw—real or imagined—to avoid responsibility. As Berne writes, “What do you expect of a man with a wooden leg?” In a school context, it sounds like: “I’m a humanities student—why expect me to study algebra?” Or: “I can’t teach him math—he’s a humanities student.” Psychologist Pylyp Dukhliy has explained well why mathematics cannot simply be written off in schools and why it is important for everyone.

A test without rough edges

When analyzing the NMT results, we must understand how they are produced. Perhaps the most important element is the pass/fail threshold, which immediately eliminates those with unacceptably low results from the admissions race. The entire 100–200 scale is then built upon this threshold. This year, as before, the threshold was low: 15 percent of test points had to be earned in each subject. At the same time, all tests are full of “warm-up” tasks that allow weaker students to clear the bar—multiple-choice questions with obvious answers.

And even with such a low threshold, natural sciences, mathematics, and literature top the failure rates: about one in ten failed mathematics (11.9%) and physics (10.5%); in chemistry the figure was 6.2 percent, in Ukrainian literature 2.6 percent, and in foreign languages between 4 and 6 percent (depending on the language). In the remaining subjects, the failure rate was minimal—from 0.1 to 0.4 percent. Note that the humanities are also among the weakest performers. In fact, the results could have been even worse if the threshold were higher.


How is the threshold determined? Before the war, the process involved a group of experts—teachers from different regions and types of schools, as well as scholars. After the EIT session ended and papers were submitted, they took the test themselves and judged which tasks a minimally prepared applicant could complete. That is, one whose knowledge was weak but still met minimum educational standards. This data was statistically processed, and a special expert commission then voted on the threshold score. The commission’s meetings were streamed online so that everyone could follow the reasoning.

Now, the NMT is conducted under more difficult conditions, and not within a few days but over a longer period. It is therefore physically hard to organize an expert commission for every test. Thus, the threshold is now set by order of the Ministry of Education and Science. And it is set long before the NMT begins, as early as February. This means the test is no longer designed first and then judged against a threshold; rather, the threshold dictates the design of the test: UCEQA must develop tests that fit the predetermined standard.

Scores are now calculated differently as well. Previously, they were ranked, reflecting not only a participant’s knowledge but also their position relative to others. First, those who failed were eliminated by the threshold, and then the remaining participants were ranked and assigned scores on the 100–200 scale. This gave a more precise measure of knowledge, showing both an individual’s result and the broader context. You may recall calls from UCEQA and the Ministry of Education to tutors: “Don’t take the NMT for fun, you are raising the bar and making your students’ results look weaker.” Now, the NMT score is not a ranking but resembles an ordinary school grade—a fixed list of how many mistakes correspond to which score. The scoring instructions are contained in the same order from the Ministry, issued long before the NMT begins.
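To make the contrast concrete, here is a minimal sketch in code (a hypothetical illustration with invented scores and an invented threshold, not the official UCEQA algorithm): rank-based scaling ties a participant’s scaled score to the rest of the cohort, while a fixed conversion table does not.

```python
def rank_based_scale(raw_scores, threshold):
    """Old EIT-style scaling (simplified): drop failures, then place each
    remaining raw score on the 100-200 scale by its rank among passers."""
    passed = sorted(s for s in raw_scores if s >= threshold)
    n = len(passed)
    return {
        s: 100 + 100 * (sum(1 for p in passed if p <= s) - 1) / max(n - 1, 1)
        for s in set(passed)
    }

def fixed_table_scale(raw, table):
    """Current NMT-style scoring (simplified): a raw-to-scaled conversion
    table fixed before the session; the cohort plays no role."""
    return table.get(raw)  # None means the raw score failed the threshold

# A hypothetical cohort of raw test points, with a threshold of 5 points
cohort = [3, 5, 8, 8, 12, 15, 20, 22]
ranked = rank_based_scale(cohort, threshold=5)
# ranked[5] == 100.0 and ranked[22] == 200.0; in a stronger cohort the
# same raw score of 8 would rank lower and thus earn a lower scaled score.
```

This is why, under the old system, high scorers taking the test “for fun” could depress everyone else’s scaled results: they shifted the ranking, not the raw points.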

Tests in all subjects now lack open-ended questions, in which students must express their own views rather than select from ready-made answers. This not only affects the quality of the results but also signals to schools that the focus should be on drilling, because the school system is oriented entirely toward the NMT.


At the very start of the EIT, open-ended tasks existed, but they were abandoned for lack of funds—since such tasks must be checked by real people, whose work must be paid for. The Ukrainian language part of the exam once had an essay, English had a written response, and natural sciences and mathematics had problems requiring full solutions. Problems are still included in the test, but now NMT participants must only write the final answer on the test form, whereas previously they had to show the entire solution. The elegance and rationality of the solution also influenced the score, and this signaled to schools what to teach. A well-known case illustrates this: one student solved a geometry problem in an unusual, unique way that the examiners initially could not evaluate. Only the appeal commission at UCEQA recognized the validity of the solution. That student received the highest score. Obviously, he had an excellent teacher or tutor.

In the past, science and math tests expected participants to know basic formulas. Now, in the NMT, it is permitted to use a reference book with formulas—another signal to schools and tutors: multiple solution methods do not matter (though this is what develops scientific thinking), and it is not necessary to memorize the fundamental formulas and laws. And here is the outcome: even with a reference book in hand, 38 percent of test-takers failed to correctly square a binomial. One in five marked as correct an answer that contradicted the triangle inequality.
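For context, the two facts in question are elementary; a quick restatement (mine, not part of the UCEQA report):

```latex
% The square of a binomial; the common wrong answer omits the middle term:
\[ (a + b)^2 = a^2 + 2ab + b^2 \neq a^2 + b^2 \quad \text{(when } ab \neq 0\text{)} \]
% The triangle inequality: lengths a, b, c can form a triangle only if
\[ a + b > c, \qquad b + c > a, \qquad a + c > b \]
```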

We see it now. What next?

The problems revealed by this year’s NMT results did not appear overnight; they have been mounting for years. It is good that we have regular annual reports from UCEQA. But these need to be analyzed in depth. Without a systematic understanding of causes and consequences, we remain hostages to illusions. Educational analysts should provide such analysis. Yet despite an army of institutes and academies living off “analytics” and “reforms,” Ukraine does not even have a concept of educational analytics, let alone an open database to be regularly updated and examined.


What is needed, based on research results, is feedback to schools: not pompous speeches at conferences before sponsors but concrete recommendations and support for teachers and those who train them. The NMT itself also needs support so that it does not become a hostage to populism and can continue to develop. But that is another discussion, which will be the focus of the second part of this article, to be published soon.