
Choosing Not to Persevere

Max Tran '22 

When I was in 7th grade, a rare display shelf opened up in my mother’s elementary school library. The shelf was short enough to be reached by younger students, tall enough to be seen by classroom teachers, and deep enough to hold improvised mechanisms. As a frequent visitor, I knew that this display space was an opportunity that comes around about as often as a winning Mega Millions ticket. A few rolls of tape and cardboard boxes later, I had turned the display into the Readbox, a cardboard vending machine covered in crinkly red butcher paper. What I was most proud of were the paper locks and the cylindrical key, a foot-long cardboard stick that could poke around inside to dispense books.

Putting the Readbox under lock and key was supposed to protect it (and to keep me from worrying about unauthorized use outside of school hours).

What is worry? You might think worry is the thought of an impending deadline, the point at which no more tape can be added and your unfinished contraption is left to stand on its own. But I’ve noticed that my worrying often begins when the deadline ends. When I’m working on a project with things I can see, understand, and control, like writing a program or protecting a server from attacks, there is little to worry about. Instead, it’s often the things I can’t see that spark new worries, such as someone ripping the red butcher paper to get to the books inside the Readbox. Sometimes I worry about the subtle ways I can be changed by others, potentially losing my connection to myself and to the values and culture that make up my identity. While I cannot always control the environments I am in, I still do what I can, making locks and keys to guard the important parts of myself from changing, especially against popular culture or the desire to conform and fit in.

But when the Readbox broke and was decommissioned, it also opened up the opportunity for a new display.

Hidden behind Artificial Intelligence (AI) is the neural network, an electronic model loosely inspired by the human brain. At first, neural networks seem to be solely about computers, especially how to translate human concepts of learning (such as making mistakes) into a language a computer can understand. But in the process of translation, you also realize that you’re learning about yourself. By digitizing human problem-solving, you become more aware of how you learn and approach problems in the world.

The first step in building an Artificial Intelligence is like drawing a blueprint: you design the neural network by creating artificial neurons and organizing them into layers. Then you have to train it. Like teaching a younger cousin to ride a bike, you give the network examples and tips and guide its attempts to learn. This is also the point where a neural network can break down.
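To make the blueprint idea concrete, here is a minimal sketch in plain NumPy. The layer sizes, the tanh activation, and the make_layer/forward names are my own illustrative choices, not a standard recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_inputs, n_neurons):
    """One layer of artificial neurons: a weight for each input, plus a bias."""
    return {"weights": rng.normal(scale=0.1, size=(n_inputs, n_neurons)),
            "biases": np.zeros(n_neurons)}

# The blueprint: 4 input features feed 8 hidden neurons, which feed 1 output.
network = [make_layer(4, 8), make_layer(8, 1)]

def forward(network, x):
    """Pass one example through every layer in order."""
    for layer in network:
        x = np.tanh(x @ layer["weights"] + layer["biases"])
    return x

print(forward(network, rng.normal(size=4)))  # an untrained guess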

Whether it’s due to bad examples or poor planning, breaking a neural network can be frustrating. The feeling is similar to losing an important essay in Microsoft Word, except that machine learning models often take weeks or months of borrowed computer processing time to train. Sometimes destabilizing changes propagate through a neural network so quickly that you don’t have time to pull the power cord.
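As a toy illustration of how fast things can fall apart, suppose the entire “network” is a single number w, the loss is w², and the learning rate is chosen too large. Every detail here is contrived, but the runaway behavior is real:

```python
# Gradient descent on the loss w**2 with an oversized learning rate.
w, learning_rate = 1.0, 1.5   # any rate above 1.0 diverges for this loss

for step in range(10):
    gradient = 2 * w          # derivative of w**2
    w -= learning_rate * gradient
    print(f"step {step}: w = {w}")
# Each update overshoots the minimum and doubles the error, so w flips
# sign and grows without bound -- the numerical version of not reaching
# the power cord in time.
```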

A few years after the Readbox had been taken down, my younger sister and cousins conspired over Zoom to convince me to play Roblox, an online game that we could play together. For months I was surrounded by incessant sister-led sales pitches, messages, and advertisements urging me to create an account. My aversion to Roblox stemmed from a slippery-slope fear: if I gave in to this element of popular culture, what would be next?

One of the strengths of a neural network is its persistence in training. After you design the network architecture and prepare examples for it to study, it can spend weeks, months, or even years teaching itself. Until it’s told to stop, a neural network keeps training, looking for patterns in your examples and attempting to copy them.
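Here is a sketch of that persistence, with made-up data: a single artificial neuron keeps adjusting its weight, epoch after epoch, until an explicit stop condition interrupts it. The loss threshold and epoch cap are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
y = 3.0 * x + rng.normal(scale=0.05, size=(100, 1))  # the hidden pattern: y ≈ 3x

w, epoch = np.zeros((1, 1)), 0
while True:                                   # it would happily run forever...
    error = x @ w - y
    loss = float(np.mean(error ** 2))
    w -= 0.1 * (2 / len(x)) * (x.T @ error)   # one gradient-descent step
    epoch += 1
    if loss < 1e-2 or epoch >= 10_000:        # ...until it's told to stop
        break

print(f"stopped after {epoch} epochs; learned w ≈ {w.item():.2f}")
```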

But what if your examples are flawed, incomplete, or outright incorrect?

For example, imagine you’re training a neural network to recognize pictures of apples. If you only give it pictures of red apples, it might not be able to recognize a green apple. Whether you couldn’t find pictures of green apples or simply haven’t seen a Granny Smith in some time, the network’s perseverance becomes a hindrance in this scenario, making it overly specialized in recognizing the pixels of red apples. This is where the translations fall short. Currently, neural networks see through the eyes of their creators. When a creator expresses biases, neural networks amplify them, voraciously turning the smallest flaws (implicit bias) into learned patterns (explicit bias).
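Here is a hypothetical, stripped-down version of the apple problem: each fruit is reduced to two numbers, (redness, greenness), and a single logistic neuron is trained on a flawed set where every apple is red and every non-apple is a green leaf. The fruit names, ranges, and learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# The flawed examples: every apple the neuron ever sees is red.
red_apples = rng.uniform([0.8, 0.0], [1.0, 0.2], size=(50, 2))  # label 1
leaves     = rng.uniform([0.0, 0.6], [0.2, 1.0], size=(50, 2))  # label 0
x = np.vstack([red_apples, leaves])
y = np.concatenate([np.ones(50), np.zeros(50)])

w, b = np.zeros(2), 0.0
for _ in range(2000):                     # train one logistic neuron
    p = 1 / (1 + np.exp(-(x @ w + b)))    # predicted probability of "apple"
    w -= 0.5 * (x.T @ (p - y)) / len(x)
    b -= 0.5 * np.mean(p - y)

def is_apple(redness, greenness):
    return 1 / (1 + np.exp(-(np.array([redness, greenness]) @ w + b))) > 0.5

print(is_apple(0.9, 0.1))  # red apple    -> True
print(is_apple(0.2, 0.9))  # Granny Smith -> False: "red" became the rule
```

The neuron never saw a green apple, so its learned pattern quietly turned the creator’s incomplete examples into an explicit rule.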

Both humans and their neural networks have persistence. But what currently makes humans different is our haphazard approach to it. Sometimes our willingness to change our goals helps us grow and change in ways we wouldn’t have if we’d followed our original path.

Sometimes, our willingness to give in and play Roblox can lead to enjoyable new surprises. 
