From May 2017

Blog Intro and Information

Welcome to “From the Director’s Desk,” a blog about data science education and curriculum. If you are interested in receiving regular updates when new posts appear, you can use the RSS feed link above or subscribe to the Google group (read more for the link; you don’t need a Gmail account to subscribe!). You can find a bit more about me, Karl Schmitt, on the About page. If you are looking for full degree curriculum development materials, I’ve created a resource page and tracked posts with a Program Development category. Individual course materials are tracked either generally with the “Course Development” category or individually by each course the post relates to. Please feel free to email me or leave comments if you have questions, thoughts, or something to share!

The original blog introduction, with a bit of why the blog exists and what it seeks to cover, is here.

Math (Courses) for Data Science

I want to share some thoughts on the math required for a data scientist (or at least, for a data science undergraduate degree). The discussion can really be boiled down to one question: “Discrete Mathematics or Calculus 2?” Let’s first take a look at the outcomes from one in-progress and two completed working groups on outlining data science education.

An ACM-organized workshop in 2015 included participants from ACM, ASA, IEEE-CS, AMS, and more. That workshop’s report does not explicitly state any math requirements, but it does make clear the need for sufficient supporting statistics courses. The clearest recommendations come from a group of faculty at the Park City Mathematics Institute in the summer of 2016. Their report gives suggestions on how to build a data science degree from existing courses and ideas for new integrated courses (this is the real gold in the report). If constructing a curriculum from existing courses, the group recommends three mathematics courses: Calculus 1, 2, and Linear Algebra. Last, the National Academies of Sciences is currently holding a series of round-table discussions on Data Science Post-Secondary Education. While all three NAS round tables are interesting, only the first is relevant to this discussion. At that meeting, there was a presentation on the underlying mathematics of data science. The presenters’ list of mathematics supporting data science included linear algebra, numerical analysis, and graph theory.

In summary, all three groups clearly support the need for linear algebra to be a part of any data science curriculum. I doubt you’ll find many objections to this idea, since linear algebra forms the mathematical foundation for manipulating data contained in tables or arrays as rows/columns. If nothing else, simply learning the notation is vitally important for anyone wanting to extend algorithms for data science. All three also clearly support at least two traditional statistics courses, up through regression analysis. A little less clearly, I would argue that all three support the requirement of a Calc 1 course. The NAS round table discussed needing numerical analysis, which is traditionally based on calculus concepts. The ACM workshop supported disciplinary knowledge, and just about all science disciplines require at least one semester of calculus.
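To make the “data lives in linear algebra” point a bit more concrete, here is a minimal sketch (my own illustration with NumPy and made-up numbers, not anything from the reports above) of a data table treated as a matrix, with a least-squares fit computed through the same matrix machinery a linear algebra course covers:

    import numpy as np

    # A tiny made-up "data table": rows are observations, columns are features.
    X = np.array([[1.0, 2.0],
                  [2.0, 0.5],
                  [3.0, 1.5],
                  [4.0, 3.0]])
    y = np.array([3.1, 2.6, 4.8, 7.2])   # a response column

    # Add an intercept column, then solve the least-squares regression
    # min ||Xb - y||^2; lstsq wraps the matrix decompositions students
    # meet in a linear algebra course.
    X1 = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    print(coeffs)   # intercept plus one weight per feature column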

Alright, on to the differences. The PCMI group included Calculus 2 in their “minimum” courses needed for data science. In my opinion, the suggestion that Calc 2 be included in the bare minimum courses for data science is indicative of the dominance of mathematicians (many applied) and statisticians in the group (there were a FEW computer scientists). While I think their recommendations are quite good overall, the inclusion of Calc 2 over discrete mathematics (as well as the odd placement of data mining) clearly reflects this make-up. The presentation on mathematics (from two mathematicians) at the first NAS round table, however, included graph theory as one of the three main supporting mathematical areas. So, perhaps the question from these two groups is: “Calculus 2 or Discrete Mathematics?”

Here’s an alternative way to build an answer to this question. Instead of just focusing on the topics covered, what about the requirements of the other supporting disciplines that make up data science? Computer Science is pretty easy. Almost all programs require Calculus 1 and discrete mathematics, and the ACM 2013 guidelines include a list of core topics (set theory, graph theory, and logic) that are traditionally covered in either a discrete mathematics course or a combination of several mathematics courses. They also articulate very clearly that some areas of computer science (like visualization or data science) will require linear algebra and statistics. We can contrast this with the typical mathematics requirements for a statistics curriculum. Many statistics programs require a minimum of Calc 2 to support advanced probability courses (with a preference for multivariable calculus). The ASA 2014 guidelines specify that statistics majors should have both differentiation and integration (typically covered by Calc 1 and 2), plus linear algebra.

Drawing requirements from the supporting disciplines can leave us just as confused about what to require. I think there is an answer, but it requires taking off the mathematician glasses and thinking about jobs, applications, and where a student might be headed. First, a good portion of researchers and practitioners doing data science use graphs and networks, often mining those graphs for information. It turns out graphs (the node/edge type, not the line/bar plot type) are also a great way to visualize a lot of information. Another key skill when doing data science is the ability to partition data: that is, to think of data as either meeting or not meeting specific criteria. This is encompassed by set theory in mathematics and is sometimes partially covered as part of logic. Together these topics provide two new ways of thinking about data that aren’t included in other mathematics courses. The need for this sort of knowledge, along with a basic introduction to proofs, is why discrete mathematics courses came into existence: to let CS majors get these topics without taking another three or four mathematics courses. To me, this is a far stronger case for including discrete mathematics than the (possible) need of Calculus 2 for advanced statistics courses. If you are requiring four math courses, by all means include Calculus 2 next. Or, if a student is particularly interested in understanding the theoretical underpinnings of data science (by taking more statistics courses), then they should take Calc 2. But if we are really thinking about an undergraduate degree as a stand-alone, ready-to-enter-the-workforce degree, Calc 2 does not seem to add a lot of direct value for the student.
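For readers who haven’t taken discrete mathematics, here is a tiny sketch (my own, with made-up data and only the Python standard library) of the two habits of mind I mean: partitioning records with sets, and representing relationships as a node/edge graph.

    # Made-up records: did a user click or not?
    records = [
        {"id": 1, "age": 34, "clicked": True},
        {"id": 2, "age": 19, "clicked": False},
        {"id": 3, "age": 27, "clicked": True},
        {"id": 4, "age": 45, "clicked": False},
    ]

    # Partitioning: which records meet a criterion, and which do not
    # (set theory and logic in action).
    clicked = {r["id"] for r in records if r["clicked"]}
    everyone = {r["id"] for r in records}
    did_not_click = everyone - clicked          # set difference
    print(clicked, did_not_click)

    # A small graph: nodes and edges stored as an adjacency dict
    # (e.g., "which users interact with which").
    edges = [(1, 2), (1, 3), (3, 4)]
    adjacency = {r["id"]: set() for r in records}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    print(adjacency[1])                          # neighbors of node 1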

Intro to Data Science Books

Friday morning I had the opportunity to chat with Kathryn (Katie) Kinnaird, currently an Applied Mathematics Post-Doc at Brown University and former director of the Data Science TRAIn Lab at Macalester College. Originally I had called to ask her about the methodology behind the TRAIn Lab and the materials related to it (it’s a neat approach and something I may adopt in part for my own research students), but we got to talking about the introduction to data science courses we taught this year and the textbooks we used. As a bit of a preview, I used two textbooks, Doing Data Science by Cathy O’Neil and Rachel Schutt, and Data Science from Scratch by Joel Grus. Katie used Analyzing Data with GraphPad Prism by Harvey Motulsky. I’m going to provide a short discussion of what I was looking for in my textbook(s), what I ended up getting, and ask for a bit of reader input.

Criteria:

When I was reviewing textbooks for Data 151, I had some fairly specific constraints in mind. First, I wanted a book that dealt with data science, not just data mining, machine learning, or statistics. That ruled out a lot of books, but there were still a few left to pick from. I also wanted a book that used either Python or R as its primary coding language, since I believe (based on conferences and the literature) that these will be the languages of the realm long-term (more on this in another post). This excluded a few more books, including the book Katie used, since it focused on the Prism software package. It also excluded the textbook I previously used for a data mining class, Data Mining: Practical Machine Learning Tools and Techniques by Witten, Frank, Hall, and Pal, which uses Weka.

Alright, so general data science, R or Python. The last requirement I had was that the book didn’t assume the reader knew a lot of (or any) coding. Wait…no coding experience? Yep. Data 151 is targeted at freshmen and interdisciplinary students who may well have zero coding experience. Katie’s class was even less likely to have coding experience (she had a fall class instead of my spring class). There are a couple of books out there that seem to be targeted at upper-tier undergraduates and early graduate students, for example Introduction to Data Science by Igual and Seguí, but very few that are technical while still aimed at novices. That’s how I landed on Doing Data Science and Data Science from Scratch. They were basically the only books I could find that even sort of fit my criteria. And even Doing Data Science assumes a bit of programming. So how did they work out?

Doing Data Science – Straight Talk from the Frontline:

I really like the description that becomingadatascientist.com used (read the whole review here):
“To make a metaphor, Rachel Schutt and Cathy O’Neil tell you about a great dish someone cooked, and give some general info about the process of making the dish, and what to watch out for when you attempt it yourself. They even include some quotes from the chef about the art of making this particular dish, and tips on preparing and presenting it.”

It’s true, they cover everything you might want to touch on in an intro class and generally do so in a very high-level, newbie-friendly way. There are a few chapters that get way too technical (for my purposes), but those can be glossed over. So, what’s the downside? Sadly, something written in 2013 has every potential to be out of date in the data science world unless very carefully written. Read carefully and you’ll find several comments that I don’t feel are entirely true anymore. Second, the code in the book is all in R, and I had really planned to exclusively use Python. Third, while I wanted the book not to assume programming knowledge, I also didn’t want it to ignore acquiring (some) programming knowledge (it does). Last, while generally accessible, it is still clearly written for graduate students or independent learners. That is to say, not really for brand-new students who aren’t comfortable pulling out the key information they need. It also was not written as a textbook per se. It doesn’t really have any true exercises included (a few are scattered throughout from the class the book was based on).

Data Science from Scratch – First Principles with Python:

Great, first principles, using Python, everything I wanted, right? Yep, it was, if I were only teaching data science majors or computer science majors. I could probably get away with students having no programming background (though probably not, or so my business students said). Really, if you are looking at implementing an Intro to Data Science course as a sophomore/junior-level elective in a computer science program, or possibly a Math/Stat program with a programming prerequisite, this is a reasonable choice. It covers the minimum statistics and mathematics you need (including linear algebra!) and doesn’t really assume you know tons of Python.

So what’s wrong with this text? For a general course, it’s simply too technical. While it does cover several important ideas at a high level, the students lost those few comments in the nitty-gritty of code implementation and examples. Furthermore, while I really liked the idea behind the text, that you implement everything you need to do data science from scratch (or from things built earlier in the book), in practice it felt like a giant step backwards. There are quite a few fantastic packages for data science, from the really well known, like pandas and scikit-learn, to the lesser known, like Bokeh. I understand wanting students to know from the ground up how things work, but you can understand things while still using well-tested packages (and avoid all your own errors).
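To illustrate what I mean by leaning on well-tested packages (this is my own sketch, not code from either book, and it assumes scikit-learn is installed), here is a k-nearest-neighbors classifier in a handful of lines, versus the page or so of code the from-scratch version takes:

    # Fit and evaluate a k-NN classifier with scikit-learn instead of
    # re-implementing the algorithm by hand.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = KNeighborsClassifier(n_neighbors=5)
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))   # accuracy on the held-out split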

Something New?

So, bringing it back around to my conversation with Katie on Friday. She had also expressed frustration with her textbook in terms of reaching the intended audience (freshmen, new students). I wasn’t happy in the end with either textbook. Next year I’ll probably cut Data Science from Scratch, as it really didn’t do what I needed. A lot of the other intro courses I know about don’t even use a textbook. So, here are a few questions I’d like to pose to my readers:

  • Are there other textbooks that you are using, or know of, that fit these needs?
  • What are your “must-haves” or “must-avoids” for a textbook for freshman-targeted courses in data science?

Student Learning Objectives – Part 4

This post is part of a series on student learning objectives (SLO’s) for both curriculum and courses. The SLO’s in this post are course-level, specifically topical objectives for an “Introduction to Data Science” (Data 151) class for new students. Love them or hate them, student learning objectives are a part of higher education (I for one appreciate how they provide focus for curriculum and courses).

The last post focused on high-level learning objectives for the course “Introduction to Data Science” (I’ve repeated them below for reference). Those are certainly the big picture, but those four objectives are hardly enough to really design day-to-day lessons around. Data 151 also has seven topical objectives tied directly to those general objectives and modeled after Paul Anderson’s DISC 101 course objectives. I’ll tie each topical objective back to the course’s overall goals.

General Course Objectives:

A. Students understand the fundamental concepts of data science and knowledge discovery
B. Students can apply and perform the basic algorithmic and computational tasks for data science
C. Students develop and improve analytical thinking for problem formulation and solution validation, especially using technology
D. Students prepare for success in a world overflowing with data.

Topical Objectives:

  1. gain an overview of the field of knowledge discovery (A)
  2. learn introductory and state-of-the-art data mining algorithms (A,B)
  3. be able to distinguish and translate between data, information, and knowledge (A, C)
  4. apply algorithms for inductive and deductive reasoning (B,C)
  5. apply information filtering and validation on real world datasets (B,C)
  6. understand the social, ethical, and legal issues of informatics and data science (A,D)
  7. apply data mining, statistical inference, and machine learning algorithms to a variety of datasets including text, image, biological, and health (B,D)

Four of the topical objectives (1, 2, 3 & 6) focus on guiding students towards understanding the fundamental concepts behind data science. One can hardly call a course an “introduction” without giving an overall picture of the field (Obj. 1) or spending time understanding the key tools that practitioners use (Obj. 2). While I fully anticipate that the state-of-the-art algorithms will change, the basics like k-Nearest Neighbor, k-Means, and Decision Trees certainly will not. These algorithms provide a nice gateway into understanding the ideas of learning from a collection of data (Obj. A).
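As a flavor of what “learning from a collection of data” looks like at this level, here is a minimal k-means sketch (my own illustration with made-up points, not course material): the two groups emerge purely from the coordinates, with no labels supplied.

    import numpy as np

    rng = np.random.default_rng(0)
    points = np.vstack([rng.normal(0, 1, (50, 2)),    # made-up cluster near (0, 0)
                        rng.normal(5, 1, (50, 2))])   # made-up cluster near (5, 5)

    k = 2
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(10):                               # a few fixed iterations
        # Assign each point to its nearest center...
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])

    print(centers)   # should land near (0, 0) and (5, 5)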

We also know in data science that what you can learn from a dataset is limited by the quality of the input data (like a lot of other things in life, garbage in = garbage out). Objectives 5 & 7 articulate the sorts of data that will be used in the course, both real-world data and a mix of prepared/known datasets. These datasets provide a way to actually practice Objectives 2 & 4 in more than just an abstract way. I want students to walk away from this class knowing how practitioners actually make use of algorithms. Students need to get their hands dirty putting some of those algorithms to work (Obj. B/C).

Now, I think it’s important to note here that in their projects and general work, I’m not expecting a really deep understanding or application of the algorithms. That’s saved for two later courses, one explicitly on data mining and the other their capstone sequence. In Data 151 they should be learning enough to continue learning on their own, to understand and interact with people who are really doing this work, and to grasp how the ideas can shape, and are shaping, the evolution of various disciplines and industries.

While Objectives 2, 4 & 5 articulate using data science skills, Objectives 2-5 have a second layer as well. These objectives aim to have students think about the implications and knowledge that come from the data science process. This course is about more than just data engineering or data mining; it’s really about the questions and, well, science that can be done with data. It is only when students can understand the processes of both inductive and deductive reasoning for science, or transform raw data into actionable knowledge, that they become aware of the true power of the field (Obj. B/C).

Last, but certainly not least, Objective 6. As we know from Spider-Man (and some other great speeches), “With great power comes great responsibility.” If you believe, like I do, that data science could dramatically change what we know and how industries and society are run… then I hope you are also a little nervous, perhaps occasionally terrified. Because if we DON’T talk about the social, ethical, and legal issues surrounding informatics and data science, we might well end up with something like Ultron (the artificial intelligence gone bad in Marvel’s “Avengers: Age of Ultron”). More likely, we’ll end up with biased learning algorithms that perpetuate injustices or inequality. Making sure students have at least started to think about these sorts of issues may not prevent them from happening, but it is one (in my mind necessary) step towards that goal (Obj. D).

Together this is a pretty hefty set of things to accomplish in a semester. All in all, though, I think they serve as a great lead-in to the entire field and the overall goals of Valpo’s Data Science program (described in previous posts). Even if a student only takes Data 151 (as some certainly will), they will leave with a broad understanding of the field, enough knowledge to interact successfully with experts, and enough insight to see the real value that the effective and intelligent use of data can provide. I hope my business students are now prepared to be the “data-savvy business managers” that McKinsey & Co. described a few years ago, and that the rest (CS, Math, and Stats) can go on to work with, or become, true data scientists, engineers, or creators.

Student Learning Objectives – Part 3

This post is part of a series on student learning objectives (SLO’s) both for curriculum and courses. The SLO’s in this post are course level, specifically for an “Introduction to Data Science” (Data 151) class for new students. Love them or hate them, student learning objectives are a part of higher education (I for one appreciate how they provide focus for curriculum and courses).

In many ways, the general course SLO’s for Data 151 mirror the SLO’s for the program as a whole. Students need to leave with an understanding of what data science is, know about the basic algorithms, and be made aware of the ethical and moral issues surrounding the use of data. Data 151 is intended to be a hook that draws in students from across our university to learn about data and then consider adding a major in Data Science. It also draws in juniors and seniors in less technical disciplines like business. This may in turn make Data 151 the only course where a student explicitly thinks about data. The major difference between the curricular and course SLO’s is the depth of understanding I expect students to leave the course with (as opposed to the program). This is most clear in the first two SLO’s below.

  1. Students understand the fundamental concepts of data science and knowledge discovery
  2. Students can apply and perform the basic algorithmic and computational tasks for data science

As said, these are very close to the first two SLO’s for the whole curriculum; they relate to students’ ability both to communicate data science concepts and to implement solutions, though in each case with lower levels of expertise. Data 151 has two additional SLO’s that target the broader (potential) audience for the course (in addition to continuing majors). These are:

  3. Students develop and improve analytical thinking for problem formulation and solution validation, especially using technology
  4. Students prepare for success in a world overflowing with data.

In many cases, students in Intro to Data Science are still gaining experience (aren’t we all?) with general problem-solving skills. Perhaps (to my mind) one of the most under-taught skills in STEM courses is how to actually formulate and structure the process of solving a problem. In many, many cases, a significant amount of time can be saved simply by carefully planning how you are going to explore or solve a problem before diving in. Data science even has this explicitly built into several places in a typical workflow, specifically performing exploratory data analysis and planning for solution validation.
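Here is a small sketch of what I mean by building that planning into the workflow (my own illustration, with hypothetical column names, assuming pandas and scikit-learn are available): look at the data first, and set aside a held-out split before any modeling decisions get made.

    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Made-up data for illustration.
    df = pd.DataFrame({
        "hours_studied": [1, 2, 3, 4, 5, 6, 7, 8],
        "passed":        [0, 0, 0, 1, 0, 1, 1, 1],
    })

    # Exploratory data analysis: basic summaries before choosing any model.
    print(df.describe())
    print(df.groupby("passed")["hours_studied"].mean())

    # Plan for validation: lock away a test set now, and evaluate on it only once.
    train, test = train_test_split(df, test_size=0.25, random_state=0)
    print(len(train), len(test))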

Meanwhile, the final objective is meant to really be a catch-all. The field of data science is changing incredibly rapidly, as are the ways data is generated and used. I wanted Data 151 to be something that is capable of covering current, bleeding-edge topics. This SLO also nicely encompasses my plans to bring in alumni and current practitioners as speakers to give the students insight into what future jobs might look like. Bringing in these speakers also provides a chance for students to get an industry perspective on workflows and processes, something that can be very different from academia’s problem-solving process.

These SLO’s are pretty high-level, but intentionally so. At Valpo, we’ve got both “course objectives” and topical objectives. My next post will take a look at the specific topical objectives for Data 151, which deal with the nitty-gritty of what will actually get covered.