The student news site of Francis W. Parker School

The Parker Weekly

Interweb Intel

Digitizing the SAT
Photo credit: The Parker Weekly

When I walked in to take the first-ever digital SAT, I thought I had beaten the system. When I walked out, I felt as if the system had beaten me.

I was among the first group of students to take the SAT in its new digital format, and I knew that what I was signing up for was essentially a trial run. Sure, it was an official test, but this was the first time the software would be put through a full-scale run, and as a programmer I know how often those go smoothly. There was also evidence that the software wasn't perfect: during PSAT testing, test takers had reported the program randomly crashing and invalidating their scores. Still, between the benefits of the new format and my interest in the technological space, I decided to take the digital SAT.

For those who aren't familiar, the new SAT consists of two parts, Reading and Writing and Math, each containing two sections. The first section is the same for everyone, and the second is either easier or harder depending on how well you did on the first, which might prompt some to question how standardized the test really is, but oh well. This is the first way the SAT takes advantage of its new digital format: it creates a closed feedback loop that adjusts the difficulty of the test to the test taker.
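To make that concrete, here is a minimal sketch, in Python, of how that kind of two-stage routing could work. The threshold, question counts, and section labels are my own placeholders for illustration, not the College Board's actual rules.

```python
# Illustrative two-stage adaptive routing. The 60% threshold and the
# labels below are hypothetical, not the College Board's real logic.

def pick_second_section(first_correct: int, first_total: int,
                        routing_threshold: float = 0.6) -> str:
    """Route a test taker to an easier or harder second section
    based on their performance on the shared first section."""
    fraction_correct = first_correct / first_total
    if fraction_correct >= routing_threshold:
        return "harder second section"
    return "easier second section"

# Example: 17 of 22 correct (about 77%) routes to the harder section.
print(pick_second_section(17, 22))
```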

The second advantage of the new testing system is that it is short, mainly because you don't have to fill in your information by hand. All you have to do is open your computer, type in your room code, and start testing; your information is loaded automatically. The entire test takes only two hours and 14 minutes, which is perhaps both a good and a bad thing, but more on that later.

The final advantage from a student perspective is that you get access to tools that aren't available on the paper equivalent. And by tools, I mean Desmos. Desmos is included for both math sections of the SAT, and the first thing I did when a math section loaded was open it. Desmos not only matches a graphing calculator with a better interface, it also offers easy equation solving and lines of best fit. I was surprised that, from what I saw, almost everybody else in my room was using their calculator. I didn't even get mine out of my bag.

The digital SAT makes testing easier not only for students but also for proctors. From what I could tell, proctors have an admin panel from which they can manage their assigned room and see the progress of their test takers. Instead of reading out long blocks of instructions, they have far fewer to deliver because the software guides test takers through them. There is also the simple fact of not having to handle, and be responsible for, stacks of paper SATs. All a proctor has to do is make sure every test is completed before we leave.

At this point, the digital SAT seems like the obvious choice, which is exactly what I thought going into the test. I made it through the first three sections with ease and with plenty of time left on the clock to check over my answers. By the time I started the second math section, my mind was more focused on the start of the Saudi Arabian Grand Prix than on the next 22 questions. I started answering and was at a good pace to finish with time to spare until I hit Question 15. Instead of straightforward math problems, Questions 15 through 22 were either long, confusingly worded paragraphs or problems that simply required plugging in every possible answer. Fifteen minutes quickly turned into five, and five quickly turned into one. I answered the last question with four seconds left. As you can imagine, I was by no means confident; walking out of the exam, I had no idea how I had done. And that points to one of the opportunities I see in the digital SAT: in theory, it could be graded instantly and your score returned to you before you leave the room.

Or maybe not. See, unlike the ACT, the SAT is graded ambiguously: there isn't a fixed number of questions per point interval. This approach leaves room for the College Board to adjust the weight of questions depending on how the majority of test takers do. I see why they do it; they want the score to be an accurate comparison against other test takers. But it also makes their tests look better, raising the average score by discounting the questions that even the top test takers can't figure out. As a test taker, I'd rather they fix the test before I take it than after.
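For illustration, here is a toy example of what that kind of form-dependent scaling can mean in practice: the same raw score can map to different scaled scores depending on the conversion the College Board settles on for a given test form. The tables and numbers below are invented for the example, not real conversion data.

```python
# Hypothetical raw-to-scaled conversions for two different test forms.
# All numbers are made up; the College Board does not publish fixed
# tables like these for the digital SAT.

TOUGHER_CURVE = {22: 800, 21: 780, 20: 760}   # form where misses cost more
GENTLER_CURVE = {22: 800, 21: 790, 20: 780}   # form where misses cost less

def scaled_score(raw_correct: int, conversion: dict) -> int:
    # The same raw score lands differently depending on the form's curve.
    return conversion.get(raw_correct, 0)

print(scaled_score(21, TOUGHER_CURVE))  # 780
print(scaled_score(21, GENTLER_CURVE))  # 790
```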

My opinions on the actual test aside, I would like to see them automate their grading scale eventually, providing same-day scores instead of scores two weeks later. Once all the test centers are done, the tests could be run through machine learning algorithms to calculate the appropriate weight of each question, a process that is probably close to what they do now anyway. By shrinking the gap between testing and score release, you remove the waiting and wondering and give students more time to sign up for the next test date.
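As a purely hypothetical sketch, a first pass at that kind of automated weighting could be as simple as measuring how many test takers got each question right and counting the harder questions for more. The function below is my own illustration, not how the College Board actually scores the test.

```python
# Toy question-weighting pass: harder questions (fewer correct answers)
# receive a larger weight. Illustrative only, not the College Board's method.

def question_weights(responses):
    """responses[i][j] is True if test taker i answered question j correctly.
    Returns one weight per question, growing with the question's difficulty."""
    num_takers = len(responses)
    num_questions = len(responses[0])
    weights = []
    for q in range(num_questions):
        correct = sum(1 for taker in responses if taker[q])
        difficulty = 1 - correct / num_takers   # 0 = everyone right, 1 = nobody right
        weights.append(1 + difficulty)          # base weight 1, plus a difficulty bonus
    return weights

# Three test takers, three questions: everyone gets Q1, only one gets Q3.
print(question_weights([[True, True, False],
                        [True, False, False],
                        [True, True, True]]))   # [1.0, 1.33..., 1.67...]
```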

Overall, I enjoyed the testing process; it was both convenient and efficient. Aside from the aforementioned second math section, the testing app, Bluebook, had provided representative practice questions in a similar testing environment. Plus, since the test, I've been told they have released more representative math problems. I'm already signed up to take another one, and instead of dreading it, I'm looking forward to it, in a way.
