The Chancellor’s Undergraduate Research Fellowship was a positive experience for which I am grateful. Research in any field has layers of complexity that cannot be uncovered in a single semester, yet I have learned valuable lessons from my time in the lab over these past weeks. The idealized vision of research leaves out some key aspects of what actually happens. Research in any field, including neuroscience, requires problem-solving skills, but those skills are usually applied to unexpected complications. Most of the everyday work of research is not coming up with the next world-altering idea; it is figuring out how to implement an idea. Coming up with experiments to run is not the difficult part of research. I learned this partly through my work on the data analysis side, but a significant portion of this realization came from helping other researchers implement their ideas in the lab, slowly, each day. In neuroscience, it can take weeks or even months to get an animal ready for behavioral data collection. In this slow process of preparation, countless problems arise, and solving those unexpected problems is where the enjoyment of research lies.
Although my CURF is coming to an end, I will continue to work on my project in the Runyan lab, and I hope to keep expanding my contributions to the lab as a whole. I have gained many valuable insights into my field throughout the semester, and I plan to continue exploring neuroscience by participating in current research. My advice to other undergraduate researchers is simply to get started. Waiting until you have taken the proper classes will not always help, because much of the procedural side of research can only be learned by doing. Classes will help you understand the broader meaning of the experiments you are running, but to learn how to conduct the experiments, and to cope with the problems that will inevitably arise, you must do it yourself.
We conduct experiments using mice as a model species. In these experiments, the mice are trained on a sound-localization T-maze: a sound is played from one of eight distinct locations, and the mouse must decide whether the sound is coming from the left or the right, reporting its decision by turning in the corresponding direction at the end of the maze. The figure below is the output of one of the programs I wrote as part of my CURF. It shows the accuracy of a single mouse throughout a session across several days (one session per day). The black dots represent percent accuracy averaged over 15 trials, and the red line shows the trend of the mouse’s performance over the course of the session. This figure summarizes the overall performance of a specific mouse, which allows us to compare accuracy across mice.


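For readers curious about how a figure like this can be produced, the sketch below shows one way to bin trial outcomes into 15-trial averages and overlay a linear trend line in Python. This is a simplified illustration rather than the lab’s actual code: the function name, the 0/1 trial-outcome array, and the use of a simple least-squares fit for the red trend line are all assumptions made for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_session_accuracy(correct, bin_size=15):
    """Plot percent accuracy per 15-trial bin with a linear trend line.

    correct: 1-D array of 0/1 outcomes, one entry per trial in a session.
    (Hypothetical example; the lab's real analysis code may differ.)
    """
    correct = np.asarray(correct, dtype=float)
    n_bins = len(correct) // bin_size
    binned = correct[:n_bins * bin_size].reshape(n_bins, bin_size)
    accuracy = binned.mean(axis=1) * 100              # percent correct per bin
    bin_centers = (np.arange(n_bins) + 0.5) * bin_size

    # Least-squares linear trend across the session (assumed form of the red line)
    slope, intercept = np.polyfit(bin_centers, accuracy, 1)

    fig, ax = plt.subplots()
    ax.plot(bin_centers, accuracy, "ko")                          # black dots
    ax.plot(bin_centers, slope * bin_centers + intercept, "r-")   # red trend line
    ax.set_xlabel("Trial number")
    ax.set_ylabel("Percent correct")
    ax.set_ylim(0, 100)
    return fig, ax
```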
The following figure compares the accuracy of four mice across days by trial type (i.e., each of the eight locations the sound could be played from). Each colored line is a single day, and the thick black line is the average across all days for a given mouse. The graph plots percent right turns on the y-axis because conditions one through four are signals telling the mouse to turn left (one being the easiest to discriminate and four the hardest), so a very accurate mouse would show a low percentage of right turns for conditions one through four. Conditions five through eight are cues for a right turn. Because conditions four and five are the hardest and one and eight are the easiest, the average performance is expected to follow an S-shaped curve, with high performance at the easiest locations (1 and 8) and chance performance (50% right turns) at the hardest locations (4 and 5). In the mice ej100 and ej2l1 we see this expected result, but in the other two mice the trend is not as clean.

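A figure of this kind can be built by computing the fraction of right turns for each condition, separately for each day and then averaged over all days. The sketch below is a minimal illustration of that idea, not the program I actually used in the lab; the function name and the three input arrays (condition labels, right-turn flags, and day labels) are assumptions for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_percent_right_by_condition(conditions, turned_right, days):
    """Plot percent right turns for each of the 8 sound locations, one line per day.

    conditions:   array of condition numbers (1-8), one per trial
    turned_right: array of 0/1 flags, 1 if the mouse turned right on that trial
    days:         array of day labels, one per trial
    (Hypothetical inputs; the real data format in the lab may differ.)
    """
    conditions = np.asarray(conditions)
    turned_right = np.asarray(turned_right, dtype=float)
    days = np.asarray(days)
    cond_levels = np.arange(1, 9)

    fig, ax = plt.subplots()
    # One lighter colored line per day
    for day in np.unique(days):
        mask = days == day
        pct_right = [100 * turned_right[mask & (conditions == c)].mean()
                     for c in cond_levels]
        ax.plot(cond_levels, pct_right, alpha=0.4)

    # Thick black line: average percent right turns across all days
    overall = [100 * turned_right[conditions == c].mean() for c in cond_levels]
    ax.plot(cond_levels, overall, "k-", linewidth=3)

    ax.set_xlabel("Condition (1 = easiest left cue, 8 = easiest right cue)")
    ax.set_ylabel("Percent right turns")
    ax.set_ylim(0, 100)
    return fig, ax
```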