Alicia Powers is the Senior Vice President of Research at the New York City Economic Development Corporation. She has a degree in Statistics and Cognitive Science from Rice University and a PhD in Public Policy and Management from Carnegie Mellon University.
What impresses you most about modern artificial intelligence, which is often modeled after cognitive processes, your field of study?
There have been great strides in AI since I first studied it a few decades ago, when it was mostly a set of algorithms in a textbook. That being said, I think we may not be as far along as we think we are when it comes to artificial intelligence. There's a level of potentiality and plasticity to the brain, an ability to react to fresh information, that I think computers have yet to replicate.
Sometimes I find myself in a scenario where I’m impressed with one form of machine learning but underwhelmed by another. I was listening to music the other day on a streaming service and thought that the music recommendations were curated wisely, but that the targeted ads in between songs had nothing to do with me.
So there’s room for improvement in AI. I’d say I’m most impressed by the advances in sensory and spatial awareness in machines. It wasn’t so long ago that we were unable to build a robot that could take a few steps without falling over, but they’ve since gotten much more sophisticated.
We use R. But most of my colleagues come in with degrees in economics or policy and not computer science, so they often don't have much programming experience. So we utilize resources like DataCamp, and the team also holds R learning sessions. I teach base R but also tidyverse packages like dplyr; I find using the %>% pipe in your syntax makes it easier to follow the analysis.
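To illustrate the point about the pipe, here is a minimal sketch (using the built-in mtcars dataset and dplyr; the particular summary is purely illustrative) of the same analysis written as nested calls and then with %>%:

```r
library(dplyr)

# Nested calls read inside-out: arrange() wraps summarise() wraps group_by()
nested <- arrange(
  summarise(group_by(mtcars, cyl), avg_mpg = mean(mpg)),
  desc(avg_mpg)
)

# The %>% pipe reads top to bottom, in the order the analysis actually happens
piped <- mtcars %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  arrange(desc(avg_mpg))

# Both produce the same result; the piped version is easier to follow
identical(nested, piped)
```

The design point is readability: each step of the analysis appears on its own line, in execution order, rather than buried inside parentheses.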
I've been using R since I was a grad student in the early 2000s. But for personal projects I prefer Python; Jupyter notebooks have made it much easier to use libraries like pandas for data analysis. Plus it makes me feel more like a software engineer! I can tell my programmer friends that I've been coding in Python.
I was at an edtech hackathon when an acquaintance introduced me to Noodle Education, founded by John Katzman of the Princeton Review. It was a young company aiming to serve as a knowledge base for people trying to get into school at all levels. A few months later I joined their team.
I worked mostly on what could be considered a search engine for education. We wanted you to be able to search for schools, courses, and degrees from kindergarten all the way through to the university level. We had moms searching for local preschools, 8th graders in New York City trying to find the right high school, and students in China researching their college applications.
Much of my job was combining disparate data sources, often in the form of Excel spreadsheets with pretty complex schemas, into a single database. This role skewed more toward aggregation than analysis, so it was more of a data engineering position than a data science one. We went with MongoDB for our database needs, which at the time was just getting started.
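The aggregation step she describes might look something like the following sketch in R. The mini tables, school names, and column names here are made up stand-ins for the Excel sources; the point is that dplyr's bind_rows() can union records with different schemas into one consistent table before loading it into a database:

```r
library(dplyr)

# Two hypothetical sources describing the same kind of record,
# but with different columns (as real spreadsheets often do)
source_a <- data.frame(
  school  = c("PS 11", "PS 40"),
  borough = c("Manhattan", "Manhattan"),
  grades  = c("K-5", "K-5")
)
source_b <- data.frame(
  school = "Stuyvesant HS",
  level  = "high school"
)

# bind_rows() takes the union of the columns, filling gaps with NA,
# and .id records which source each row came from
combined <- bind_rows(a = source_a, b = source_b, .id = "source")
```

From here, each row of `combined` could be serialized as a document and inserted into MongoDB, whose schemaless document model suits this kind of column-union result.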
As someone who’s experienced in edtech and worked in academia, would you recommend traditional education or modern techniques, like bootcamps, to aspiring data scientists?
I’m still in favor of the more traditional route if you can find a way to afford it. Everyone thought that MOOCs were going to completely disrupt academia and change the world…but no one ever finishes them! So I still think that the structure and network that in-person learning at a university can provide is worthwhile.