Memory and Neural Networks


It is no secret to data scientists that humans and computers think differently.  While computers are highly adept at arithmetic, humans can hardly calculate the tip on a restaurant bill.  And while humans can learn from their previous mistakes and use logic to make better decisions in the future, computers cannot teach themselves to gain insight from previous actions.  But what if we could teach computers to learn from their previous mistakes, to consider history when making a decision, to think more like humans?  Enter the concept of neural networks.

Neural networks are computing and logic systems that are loosely based on how the human brain functions.  One of the most central aspects of human intelligence, although we often lack it, is logic.  We can deduce that if Sam wears red shirts on Tuesdays, and if yesterday was Monday, then Sam is wearing a red shirt today.  We can also deduce that if a chair is covered by a blanket, the object still exists, even though we cannot see it.  Neural networks seek to implement the same use of logic in computers, with the hope that computers will eventually be able to solve much more complex logic problems.

The examples of Sam’s red shirt and the disappearing chair share an essential similarity: the use of memory as a tool of logic.  Drawing a logical conclusion, or answering a question, from a series of facts requires memorizing and understanding previously given information.  We have to be able to recall the earlier fact that Sam wears red shirts on Tuesdays to deduce that Sam is probably wearing a red shirt today, and we have to be able to remember the properties of a chair to deduce that the chair still exists even when we cannot see it.

Last night, as part of the ongoing Economics and Big Data series hosted by NYU, Sainbayar Sukhbaatar, a current PhD student in NYU’s Computer Science department, gave a two-part lecture titled, “Memory and Communication in Neural Networks.”

Sukhbaatar used two types of examples for the memory portion of his talk: deducing an outcome from a series of previously given information (similar to Sam’s red shirt), and predicting the last word in a sentence.  An example of the latter would be asking a computer to complete the following sentence: “We are out of groceries, so I am going to the (blank).”  Both types of problems use neural networks, and both situations require memorization, with the ability to later recall information.

While the processes of recollection and interpretation are embedded in the code itself, Sukhbaatar’s techniques allow computers to store information and then access that data later when predicting an outcome.  Information is kept in an external source or place, and this aspect is similar to how humans use memory.  We can memorize a fact or an event, store this piece of information in our brain, and then access it later; we do not have to constantly hold a piece of information at the forefront of our consciousness in order to later use it in deducing a logical outcome.  In a similar vein, pieces of information do not need to be embedded in the code; the computer just needs to know where to go and look for the information.
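To make the idea concrete, here is a minimal sketch of the separation described above: the facts live in an external memory store, while the program only knows how to write to it and look things up.  This is an illustration of the store-and-recall idea, not Sukhbaatar’s actual model (which learns to read memory with a neural network rather than a hand-written lookup); the fact strings and function names are invented for the example.

```python
# External memory: a simple list of stored facts, kept outside the "logic".
memory = []

def store(fact):
    """Write a fact into external memory for later recall."""
    memory.append(fact)

def recall(query_word):
    """Retrieve the most recently stored fact mentioning the query word."""
    for fact in reversed(memory):
        if query_word in fact:
            return fact
    return None  # nothing relevant was ever stored

store("Sam wears red shirts on Tuesdays")
store("Yesterday was Monday")

print(recall("Sam"))  # → "Sam wears red shirts on Tuesdays"
```

The point of the separation is that adding a new fact never requires changing the lookup code; the facts and the reasoning live in different places, just as a memorized fact lives apart from the act of recalling it.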

But what is perhaps most interesting about the models that Sukhbaatar presented is the use of “attention” in neural networks.  Returning to the example of going to the grocery store, certain words in that sentence are more important to remember than others, and Sukhbaatar has designed his models to weigh variables differently, so that different pieces of information will be considered more or less essential to deducing an outcome.
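A rough sketch of that weighting: each stored sentence is scored for relevance against a query, and a softmax turns the scores into attention weights that sum to one, so the most relevant memory contributes the most to the answer.  The word-overlap scoring function here is a crude stand-in for the learned embeddings a real model would use, and the example sentences are invented.

```python
import math

def score(query, sentence):
    """Crude relevance score: shared-word count (stand-in for a learned dot product)."""
    return len(set(query.split()) & set(sentence.split()))

def attention_weights(query, memories):
    """Softmax over scores: all weights sum to 1, higher score -> more attention."""
    exps = [math.exp(score(query, m)) for m in memories]
    total = sum(exps)
    return [e / total for e in exps]

memories = [
    "we are out of groceries",
    "the weather is nice today",
]
weights = attention_weights("where am I going for groceries", memories)
# The grocery sentence shares a word with the query, so it receives the larger weight.
```

Because the weights are produced by a differentiable softmax rather than a hard lookup, a full model can learn from data which memories deserve attention, rather than having the rule written in by hand.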

While neural networks are never exact replicas of the human mind, the human mind and its processes can serve as a valuable model for teaching computers to make smarter predictions.  Neural networks are grounded in advances that have been made in the fields of neuroscience and philosophy; it is important to recognize that, once again, data science is highly dependent on interdisciplinary cooperation and research.  Hopefully humans can start to learn a bit from computers, but good luck getting red shirt Sam to calculate his next restaurant bill without a calculator.