Hi there, everyone. This is Dr. Kurtis Cantley from the Department of Electrical and Computer Engineering here at Boise State, and I wanted to record a short video to give you an overview of my research group on both the technical and the professional side.
My lab is called the ENDS Lab, the Electronic and Neuromorphic Device and Systems Lab, and predominantly what we do is neuromorphic engineering, which is building electronics that behave, learn, and adapt the way the brain does. To give you a little background on how the brain works (some of you may already know this): there are tens of billions of little cells in our brains called neurons that transmit charge electrochemically throughout the brain, and that's what allows us to think and to process all kinds of sensory information from the environment.
These neurons have many little branching, finger-like or tree-like structures called dendrites that sum up the current coming from a whole bunch of other input neurons. That current builds up charge on the cell membrane, and if the membrane reaches a certain threshold, the neuron fires its own action potential, which is basically a voltage spike that transmits to a whole bunch of other neurons. So activity goes down the line from one neuron to another, and the connections between all of these neurons are called synapses. When a synapse receives an action potential, it emits a current into the postsynaptic cell, and the amount of that current is proportional to what we refer to as the synaptic weight. The synaptic weights are what allow us to remember things, events, and facts, and they change as we learn and experience new things. The rules for those changes are complex, but mostly they depend on spike timing, that is, the relative timing of action potentials between connected neurons, as well as on firing rates. So that's the basic overview of most of what's happening inside the brain and how we can implement biologically inspired computing.
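To make that integrate-and-fire behavior concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one common simplified model of what was just described. Everything in it, names and parameter values alike, is an illustrative assumption rather than any of our lab's actual models.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# All parameter values below are illustrative assumptions.
DT     = 1e-4   # simulation time step (s)
TAU_M  = 20e-3  # membrane time constant (s)
V_REST = 0.0    # resting membrane potential (arbitrary units)
V_TH   = 1.0    # firing threshold

def step(v, i_syn):
    """Advance the membrane one time step given total synaptic current i_syn.

    Returns the new membrane potential and True if the neuron fired a spike.
    """
    # Leaky integration: charge decays toward rest while input current accumulates.
    v = v + DT * (-(v - V_REST) / TAU_M + i_syn)
    if v >= V_TH:            # threshold crossed: emit an action potential
        return V_REST, True  # reset the membrane after the spike
    return v, False

# Drive the neuron with a constant suprathreshold input and count spikes.
v, spikes = V_REST, 0
for _ in range(int(0.1 / DT)):       # simulate 100 ms
    v, fired = step(v, i_syn=80.0)   # weighted sum of presynaptic currents
    spikes += fired
print(f"spikes in 100 ms: {spikes}")
```

With a constant input above threshold, the model charges up, fires, resets, and repeats, which is the same charge-accumulate-and-fire cycle described above.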
So why would you want to build an electronic system that works like the brain? Well, the brain is extremely efficient. My laptop here is probably using 100 or 200 watts of power, depending on how much work the processor is doing, while our brains typically use on the order of 10 watts. That's an order of magnitude or more difference in power consumption, with a big difference also in the amount of processing being done: the brain is taking in many sensory inputs all at once, making sense of them, processing that data, making judgments, and performing actions based on it. The other thing about the brain is that it's incredibly defect tolerant. We lose neurons and synapses all the time, neurons die and regrow, connections change, and yet the brain is still mostly able to carry out its normal cognitive function. In a computer, by contrast, if you lose one transistor you're often dead in the water and your processor won't work. Those properties are really important. In terms of architecture, how is the brain different? Our digital systems typically have components that connect to only a few other components; an inverter may drive a couple of other inverters or the gates of a few other MOSFETs. In the brain, each cell usually connects to thousands of other neurons, so it's a hugely interconnected network of very simple processing units, a fundamentally different architecture. So how do we build machines that learn like the brain? There are several different ways, typically falling into three main categories.
The first is to write software that runs on a traditional digital computer and captures the operational mechanisms of the brain. The only problem with that is that digital computers aren't really built to do that, so it's incredibly inefficient, and doing anything useful requires a really powerful computer. A little farther down is a hybrid approach, where you run that software on customized hardware built from traditional devices like MOSFETs, so CMOS circuits, which makes it quite a bit more efficient; there's a lot more you can do at that level when the hardware has been custom designed to fit the software. And at the very bottom is the use of non-traditional components such as memristors or memory transistors, using the device physics itself to capture the behavioral mechanisms of the brain. Our group sits toward the bottom end of this spectrum: we use some non-traditional devices and also design custom hardware solutions around some of those devices.
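As a rough illustration of that bottom level, here is a toy model of a memristive synapse, where the device conductance plays the role of the synaptic weight and programming pulses nudge it up or down. This is a generic sketch under simple linear assumptions, not a model of any specific device we work with.

```python
# Toy memristive synapse: conductance acts as the synaptic weight.
# The linear update model and all constants are illustrative assumptions.
class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.01, g_max=1.0, dg=0.05):
        self.g, self.g_min, self.g_max, self.dg = g, g_min, g_max, dg

    def current(self, v):
        """Current injected into the postsynaptic cell for a spike of voltage v."""
        return self.g * v  # Ohm's law: weight scales the injected charge

    def pulse(self, polarity):
        """Apply one programming pulse: +1 potentiates, -1 depresses."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.dg))

syn = MemristiveSynapse()
print(syn.current(1.0))  # current at the initial weight
syn.pulse(+1)            # strengthen the connection
print(syn.current(1.0))  # larger current after potentiation
```

The appeal of this level is that the weight is stored in the physical state of the device itself rather than in digital memory.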
In terms of the learning rules, one of the most interesting ones, which I'll introduce real quick, is called spike-timing-dependent plasticity, or STDP. As I said, the changes in synaptic weights depend on spike timing. So let's make a really simple three-neuron network where two input neurons feed one output, and I've circled the synapses in red. Say these two input neurons generate action potentials that go toward the output but don't cause the output to spike. In that case the synaptic weights actually weaken, because biology is essentially saying that these input cells are not really correlated with the output, so these connections are probably not particularly important.
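A common way to formalize STDP in the literature is the pairwise exponential rule sketched below, where causal pre-before-post spike pairs strengthen a weight and acausal pairs weaken it. The constants here are illustrative assumptions, not values from our work.

```python
import math

# Classic pairwise exponential STDP window (one common form in the
# literature; the exact rule and constants are illustrative assumptions).
A_PLUS, A_MINUS = 0.01, 0.012  # learning-rate amplitudes
TAU = 20e-3                    # plasticity time constant (s)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike (causal),
    the weight is strengthened; if it follows (acausal), it is weakened.
    """
    dt = t_post - t_pre
    if dt > 0:  # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)  # pre after post: depress

print(stdp_dw(t_pre=0.000, t_post=0.005))  # causal pair  -> positive change
print(stdp_dw(t_pre=0.005, t_post=0.000))  # acausal pair -> negative change
```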
On the flip side, say multiple spikes coming from the inputs do cause the output to spike. That will typically increase the weights, because biology sees a causal relationship and decides these synapses and connections are important, so they should be strengthened. How do we represent that in a circuit? Basically you have input neurons and then synapses, and since synapses are really just elements that modulate the amount of charge being injected, they can be represented as resistors, variable resistors, or memristors; something like that is a good way to go. The consequences of STDP in a network are really interesting. Basically, STDP allows you to encode information in spatiotemporal patterns and to train a network to detect those patterns. What I mean by a spatiotemporal pattern is that each neuron fires its own particular spike pattern at the same time as the others; the patterns are different between neurons but the same every time each neuron fires them. In this case we've got a very small network of only about 25 input neurons feeding into one output, where we have defined some spatiotemporal pattern across those 25 inputs. For the top trace, neuron 25's spike pattern is spike, spike, spike. You can see this in the shaded regions where the pattern is presented, and in between each of these presentations it's just random noise. What you do is present this pattern over and over to the network, and through STDP it actually learns to recognize the pattern, so that the output essentially only spikes when the pattern is presented. There's a lot of evidence that this is really useful, and actually occurring in the brain, for performing transformations on information representations and for encoding. It's just a really interesting consequence of this fundamental STDP learning rule.
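To show what such an input encoding could look like, here is a small sketch that builds a spike raster for 25 input channels: a fixed spatiotemporal pattern presented repeatedly, with random noise in between. The segment lengths, rates, and structure are illustrative assumptions, not the actual stimulus from the experiment described above.

```python
import random

# Sketch of the input encoding described above: 25 input channels, a fixed
# spatiotemporal pattern presented repeatedly, and random noise in between.
random.seed(0)
N, PAT_LEN, NOISE_LEN, REPEATS = 25, 30, 30, 2
NOISE_P = 0.05  # per-step spike probability during the noise segments

# The pattern: each neuron's own fixed set of spike times within the window.
# It differs between neurons but is identical at every presentation.
pattern = [set(random.sample(range(PAT_LEN), 3)) for _ in range(N)]

def segment(is_pattern):
    """Spikes for all N inputs over one time segment (per-neuron 0/1 lists)."""
    if is_pattern:
        return [[1 if t in pattern[i] else 0 for t in range(PAT_LEN)]
                for i in range(N)]
    return [[1 if random.random() < NOISE_P else 0 for _ in range(NOISE_LEN)]
            for i in range(N)]

# Interleave noise segments and pattern presentations.
raster = [[] for _ in range(N)]
for _ in range(REPEATS):
    for seg in (segment(False), segment(True)):
        for i in range(N):
            raster[i].extend(seg[i])

# Print the top few traces: '|' marks a spike, '.' marks silence.
for i in range(N - 1, N - 4, -1):
    print(f"neuron {i + 1:2d}: " + "".join("|" if s else "." for s in raster[i]))
```

An STDP-trained output neuron, like the LIF model sketched earlier, would be driven by rasters like this during training.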
The next thing I'll talk about real quick is professional development within my group. One of the big things is that I try to give students every opportunity to make their careers successful and fulfilling in the long term after graduation, through support and mentorship in every way possible, not only from myself but from the broader community here at Boise State and in the department. Then of course there's building technical expertise, which is a huge part of the PhD: you'll get every opportunity to pursue different areas of interest, whether that's circuit design, device fabrication and characterization, system-level design, neural networks and machine learning, or even some cybersecurity. And finally, there's growing your professional portfolio: giving students opportunities to do the things they feel will help them in the future, whether that's building communication skills through something like the Three Minute Thesis competition, substitute teaching a lecture in a class, or being a TA. I think those kinds of things are really important for advancing and moving up the ladder in the future. And of course part of that is building a network of people you know, attending top conferences to present your research, and publishing your research.
So we get every opportunity to go to those conferences, meet new people from other universities and companies, and then sometimes get internships at those companies if that's of interest to the student. That includes places like Micron, and I've had students at the U.S. Army Research Laboratory, ON Semiconductor, and Idaho National Laboratory. With that, I just wanted to say thanks for watching, and if you have any questions about the group, go ahead and send me an email or reach out and I'd be happy to talk to you. Thanks!