By Karl Schmitt

Python or R?

This week I want to discuss a potentially divisive issue: should a program (or course, etc.) be taught in Python or R? I think a reasonable case could be made for teaching either language. Pragmatically, if you want your program's graduates to be truly competitive for the largest variety of jobs in the current market, students need to be at least familiar with both (and possibly SAS or SPSS). There is already a lot of information and many blog posts addressing this question, and I've provided links to a few of my favorites at the end of this post. Rather than re-hashing those posts' pros and cons, I'm going to focus on aspects of each language related to teaching (and learning).

Before considering each language, I want to frame the discussion by (re)stating a program-level student learning objective (SLO). In my first post about SLO's, objective 2 states: "Students will be able to implement solutions to mathematical and analytical questions in language(s) and tools appropriate for computer-based solutions, and do so with awareness of performance and design considerations". Based on this objective, I'll state three specific criteria for selecting a programming language:

  • A language which can implement (complete) solutions to data science questions
  • A language which allows good programming practices in terms of design
  • A language which supports implementing solutions with awareness of, and room to improve, performance

Why Choose R?

As a programming language that originated in academia, particularly within the statistics community, R seems like a very natural choice for teaching data science. Much of the syntax, function naming, and even the thinking about how to construct a data pipeline/workflow comes naturally from a statistical analysis perspective. This makes it very easy to convert knowledge of statistical processes into code and analysis within R. The easy conversion between notation and code becomes even more valuable when trying to work with advanced/obscure statistical techniques. With R's origins in academic statistics, there is a much broader range of packages for uncommon techniques than in most other languages. This makes R a strong candidate for the first requirement when working in statistical domains.

Other software/packages that make R appealing to teach with are RStudio, Jupyter Notebooks and R Markdown. RStudio provides a clean, user-friendly interface for R that makes interacting with plots and data easy. It even aids the transition from spreadsheet software (like Excel) by providing a similar, GUI-driven interaction with (simple) data-frames. With Jupyter Notebooks’ recent addition of an R kernel option, it is also easy to transition from mathematics focused software like Maple and Mathematica. See this DataCamp blog-post for more information on using Jupyter Notebooks (or similar software) with R. Notebooks also facilitate teaching good practices such as code-blocks and code annotation. Finally, R Markdown provides a (reasonably) simple way to convert executable code directly into final reports/outputs. That functionality further supports the teaching of (some) good programming and design practices.

Why Choose Python?

Python was originally developed to be an easy-to-learn programming language (see Wikipedia's history of Python). This means the whole language's syntax and styling are easier to learn from scratch than most other languages (notably R). The basic Python list data structure naturally works like a mathematical set, while dictionaries closely match logical constructions for unstructured data. Together with the use of indentation to indicate control flow, it is natural to show, in any introduction to the language, how to make Python code (human) readable. These traits speak directly to teaching/achieving our second language-related objective, "allows good programming practices/design".
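To make those claims concrete, here is a minimal sketch (the names and values are made up for illustration) of set-style reasoning with lists, a dictionary holding a loosely structured record, and indentation carrying the control flow:

```python
# Hypothetical example: set-style membership reasoning, a dictionary as a
# loosely structured record, and indentation that makes control flow readable.
enrolled = ["ana", "bo", "carla"]        # made-up student names
completed = ["bo", "carla", "dev"]

# Set-like reasoning: who is enrolled but has not completed the prerequisite?
missing_prereq = set(enrolled) - set(completed)

# A dictionary mirrors an unstructured/semi-structured record (e.g. JSON).
student = {"name": "ana", "major": "data science", "courses": ["DATA 151"]}

for name in sorted(missing_prereq):
    # The indented block is the loop body; no braces or 'end' keywords needed.
    print(f"{name} still needs the prerequisite")

if student["major"] == "data science":
    print(student["name"], "is a data science major")
```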

For teaching, Python starts with many of the same advantages as R. There is a long-standing Python kernel for Jupyter Notebooks and several markdown packages available for turning code directly into html-styled reports. What makes Python noticeably different from R is that it is a general-purpose programming language. In terms of teaching, this opens up some interesting options related to the first and third goals above. In terms of developing solutions to data science problems, Python easily allows a very broad range of both input and output. Specifically, it has high-quality packages designed to deal with streaming data and better techniques for unstructured or big data. Also, because Python is regularly used to develop full programs and deployed software solutions, the methods available to study and improve performance are already well developed.


But What are People Actually Using?

There are way, way more Python users than R users (and probably will be for the foreseeable future) simply because Python is a general-purpose programming language. However, we are more concerned with users within the data science communities. That focus, however, doesn't make the answer to our question any clearer. 2016 data from O'Reilly's Data Science Salary Survey places R (57%) slightly ahead of Python (54%), which matches KDnuggets' rankings of R being slightly ahead in 2016. However, the 2017 KDnuggets survey results now place Python slightly ahead. Burtch Works' 2017 survey data, however, still has R significantly ahead, and in fact still gives a very large market share to SAS, which didn't even make KDnuggets' list. But Burtch also notes that Python has been gaining share each year. Remember, when considering these results, that these are all self-reported and self-selecting surveys! It is hard to tell if these changes are actual changes in use, or just a changing definition/reach of who is responding to the surveys. For example, when Burtch Works breaks down their results, at least one sub-group rarely used SAS and, similar to O'Reilly and KDnuggets, had Python ahead. More and more people are identifying with doing data science each year, but many of them have been doing similar things for a long time.

Some Undisguised Opinions

There is obviously value in either programming language, but from my perspective there is a really strong winner in Python. From a curriculum/planning perspective, since Python is a general-purpose language it is entirely feasible to have standard, introductory programming courses from a computer science department taught in Python. This reduces (potentially wasteful) duplication of similar courses (does every discipline really need its own intro programming?). It also lets computer scientists take advantage of years of educational research into how to better teach programming! Not to mention that Python was intentionally designed to be easier to learn programming in.

Add to this that data science students don't really experience any major disadvantages from having Python as the primary curricular language, but do gain several benefits, including longer-term skill viability and increased versatility in job options. This versatility even plays out when considering including advanced CS courses in a data science curriculum. Most data science curriculums are already going to struggle to incorporate all the necessary foundational skills in a reasonable-length undergraduate (or graduate) program. So why add programming courses beyond those already needed to meet typical CS prerequisites?

Finally, looking at the trends in language/tool use in data science just adds more validation to this idea. As companies move to working with unstructured or streaming data, Python becomes even more natural. All the surveys report increasing use of Python, without any signs of that increase slowing down. It is important for academic programs to not just react to, but even anticipate, trends and needs in the job market and industry.

Additional Resources

While I didn't go into lots of detail on the pros and cons of R or Python (and didn't even talk about SAS/SPSS), I have collected a few links that you might find valuable in making your own decision.

R vs. Python for Data Science: Summary of Modern Advances — EliteDataScience Dec 2016 — Does a nice job of highlighting the new things that make the languages pretty equal.


Python & R vs. SPSS & SAS — The Analytics Lab — 2017 — This is nice because it also puts into perspective how SPSS and SAS play into the landscape, as well as providing additional historical perspective.

Python vs. R: The battle for data scientist mind share — InfoWorld, 2017 — a fairly balanced perspective on the value of both

R vs. Python for Data Science — KDNuggets 2015 — A bit dated, but still provides some good comparisons.

(Other) Official Curriculum Guides

Last week I discussed several places from which you could pull curriculum planning materials. This week will continue that theme, but with a bit more of an 'official' flavor, by discussing several professional societies' curricular guides. While there is no (clear) leading data science professional society (and none with curricular guidelines to my knowledge), there are a few closely related societies with official guidelines. Depending on what path you took into data science, you may be more or less familiar with the following societies: the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the Mathematical Association of America (MAA), and the American Statistical Association (ASA). There are several other societies relevant to data science, but not as vital in terms of official curricular guidelines (SIAM, INFORMS, AMS, ASEE). All four major societies (ACM, IEEE, MAA, and ASA) have released curricular guidelines relevant to data science. This post will give a very high-level overview of those guidelines and why you might care about what's in them.

ACM and IEEE jointly released Curriculum Guidelines for Undergraduate Programs in Computer Science in 2013 (CS2013). The most valuable component of CS2013 for me is the specification of ‘Knowledge Areas’ that are obviously related to Data Science, and being able to see the professional community’s consensus on central learning objectives in these areas. Some clearly important/relevant areas are:

  • Computational Science
  • Discrete Structures
  • Graphics and Visualization
  • Information Management
  • Parallel and Distributed Computing

Other areas such as Algorithms and Complexity, Information Assurance and Security, or Programming Languages probably include specific learning objectives that are relevant to data science, but may not be needed in their entirety. Additionally, CS2013 allows you to examine the suggested course hours expected to be devoted to these topics. From an industry perspective, this can provide valuable insight into whether a data scientist or computer scientist might be more knowledgeable about a particular subject. This differentiation in knowledge is important as data science strives to define itself independently of its founding disciplines. If you are interested in throwing your net a bit wider, ACM also has guides for other programs like Computer Engineering and Information Technology (coming in 2017) on their guidelines site.

The MAA's 2015 Committee on the Undergraduate Program in Mathematics (CUPM) Curriculum Guide to Majors in the Mathematical Sciences — CUPM Guide for short — can serve in largely the same way the CS2013 guide does, but from a mathematical/statistical approach. With more detailed reports on Applied Mathematics, Computational Science, Operations Research, and other areas of mathematics that data science often operates in, the CUPM Guide makes it possible to understand what exactly (from a mathematician's or computational mathematician's perspective) are the most relevant areas of mathematics to understand for success. This guide can also help clarify exactly what sorts of mathematics courses a data science curriculum should require, by explaining where in the course structure specific topics like sets, relations, and functions, or other ideas get covered. In addition to their extensive undergraduate guide, the MAA also provides a lot of interesting materials related to masters/Ph.D. preparation. These might be particularly interesting as you consider what sorts of students to recruit or include in a master's program.

Finally, the ASA has perhaps the most relevant and diverse, but in many ways least detailed, set of curriculum guides. The set of undergraduate guidelines and reports includes how to assess instruction, program guidelines for statistical sciences, and even the Park City 2016 data science guidelines (which I have commented on in other posts). They also have two sets of graduate guidelines, from 2009 and 2012, for statistics masters/Ph.D. programs. What the ASA guidelines provide are much bigger, sweeping statements about the sorts of skills and knowledge that a statistics major should have. They include side notes that give more details, such as encouraged programming languages and even file formats. In many ways, I think the majority of the ASA guidelines could just replace "Statistics Major" with "Data Science Major" and remain nearly as applicable. The biggest difference might be in the level/depth required in "Statistical Methods and Theory" (less) and "Data Manipulation and Computation" (more). In a sense, this is at the heart of many statisticians' argument that "Data Science" isn't really its own field. In practice though, I think the final implementation and mindset behind a statistics major and a data science major will be very different, and certainly heavily influenced by the 'host' department.

That covers the breadth of the major professional societies’ curricular recommendations. I wasn’t able to find any (official) guidelines for a “business analytics” major from a professional society (see my resource page for a few unofficial documents), so if you know of one, please let me know.

Course/Curriculum Resource Sites

Last week I posted about specific websites you might use to host or pull assignments from. This week I want to take a broader look at overall curriculum design. This is by no means a comprehensive list of sites that have curriculum available; instead, it's intended to help reduce your search time for this kind of material.

If you are looking to find wholesale curriculums, including course materials, there are a few options available to start the creative juices flowing. The first, and probably most academic, is the European Data Science Academy (EDSA). The EDSA is grant funded with a large number of academic (university) and research institute partners from across Europe. The thing I like best about this work is that they started with a demand analysis study of the skills needed and current jobs in data science across the EU. Furthermore, from the start the project built in a feedback and revision cycle to improve and enhance the topics, delivery, etc. To understand their vision, see the image below.

This idea of continual improvement was more than just a grant-seeking ploy, as shown by their list of releases, revisions, and project deliverables. While the current site still lists four learning modules as unreleased, they are expected in July 2017.

Overall, their curriculum structure (I haven't evaluated their deeper content) has a fairly high emphasis on computational topics, with less statistics/mathematical underpinning. You can experience their curriculum directly (it's free/open access) through their online course portal. What might be far more valuable, though, are their actual grant deliverables. These deliverables include details on the overall design principles behind their structure with learning objectives, individual courses with their own learning objectives, descriptions of lesson topics/content, and more. Using their outlines and ideas to guide your own construction of a curriculum is both reasonable and a great way to make sure you aren't missing any major, important topic; however, this should be done with proper attribution and license checking (of course).

The other two places to look for curricular inspiration are also in the 'open source' category, but not funded by grants or (traditional) academic institutions. The Open Source Data Science Masters was constructed by Clare Corthell, who has gone on to found his own data science consulting firm and other initiatives. While not every link on the site is actually to a free resource (there are several books to buy, etc.), it does a pretty nice job of highlighting the topics that will need to be covered (if possible), and provides lots of places to start pulling course materials from (or getting inspiration/ideas for content). The primary curriculum is Python-focused; however, he also has a collection of R resources.

Corthell isn't the only one, though, with an "open source" or "free" data science (masters) degree. Another collection of relatively similar material was put together by David Venturi, who's now a content developer at Udacity (writing data science curriculum, of course). For those designing curriculums, both Corthell and Venturi provide excellent resources and places to frame your learning. However, if you landed on this page while trying to get into data science, read this Quora post, which I think accurately highlights the challenges of learning from/with these open source programs.

Another similar alternative, one that I'd peg closer to an undergraduate degree, is the Open Source Society University's data science curriculum. Their curriculum assumes a lot less prior knowledge in mathematics and statistics, providing links for Calculus, Intro Statistics, etc. This content is probably more in line with the curriculum recommendations from the Park City paper (see my Curriculum Resources page). What I particularly like about this (from a learning perspective) is that it actually details the amount of work per week required to learn from each course. You'll see a large repetition of topics, but the OSS-Univ's curriculum has a lot less advanced material, with only a few courses in big data, wrangling, etc.

At the end of the day, if you are looking to implement an undergraduate or graduate degree in data science, your university is going to have to offer duplicates of a significant subset of classes from these curriculums. While emulation might be the highest form of praise, we’ll each need our own, unique take on these courses while striving for sufficient similarity to have a semi-standardized knowledge base for practitioners. Good luck!


Intro to DS Assignment Sites

As an instructor, I want to provide high-quality assignments that are focused (so they achieve the learning objective), engaging (so they aren’t bored), and well supported (so they don’t end up frustrated). In an ideal world, I’d have time to write, test, debug, and administer all my own, course-tailored assignments that meet these goals. I, however, do not live in an ideal world, nor have enough graduate/undergraduate minions to mimic this ideal world. Instead, I’ve turned to using a few sites that already host assignments, resources, and even include auto-grading (without me needing to learn/setup the system).

Learn2Mine (L2M) is the first site I used in conjunction with my Data Mining course, and more recently my Introduction to Data Science course. Learn2Mine is a free, open source platform developed at the College of Charleston (CoC). While I have only really made use of the contents already there and CoC’s hosted site, you can contribute, or host your own version by getting the source directly from github. Dr. Anderson is fairly responsive about keeping the site running and grading.

The positive features for L2M (beyond being totally free/open source) are that it includes a mix of both introductory programming assignments and several more advanced machine learning/data mining lessons. It even has several search algorithm lessons (which I tend not to use). All of the lessons include auto-graded response boxes which also provide limited feedback of the errors generated when comparing submitted work to answers. There is also an interface for instructors to create their own ‘courses’ which consist of a series of the lessons on L2M. This allows the instructor to see student progress through lessons and download a grade-book in spreadsheet format.

Downsides for L2M are in line with what you pay for it (or invest in, time-wise). Even though there is feedback when students get answers wrong, this often just consists of identifying mismatched output lines (so, pretty sparse). Students often get very frustrated trying to figure out what they are missing. This is exacerbated by the fact that the instructions are often unclear or insufficient for students to simply do the lessons. Also, as might be expected from a locally built/maintained project, a lot of "polish" features are missing, such as being able to reorder assignments in a course or associate a name with an account. Students have an account associated with the email they log in with, so it can sometimes be challenging to connect records with students. Overall, I've been considering phasing L2M out of my normal assignment structure, though the possibility of hosting my own local version and implementing different, better-explained lessons has also been tempting.

The prime contender to replace L2M for me has been DataCamp. I've known about DataCamp for a while now, but had my first chance to actually use it and build assignments from it this spring when I was looking for data visualization lessons (see the visualization resources post). I've gone through a few lessons myself and found DataCamp to be basically exactly what I'd want/envision online course-work to be. Most courses consist of short videos (a best practice) followed by several guided coding exercises. DataCamp is (sort of) not free, which turns out to be both a pro and a con.

If it's not free, why is DataCamp going to replace L2M for me? Great question. Because, for academic purposes, DataCamp IS free. If you are an instructor at an academic institution teaching a course with 10+ students in it, you can request free, premium access for students enrolled in your course(s). That access is limited (they give you 6 months), but hey, it's free. What else makes DataCamp a nicer replacement? First, the coding exercises are scaffolded; that is, early exercises have more prewritten code while later exercises require you to remember and use what you've already learned. In addition, the coding exercises have reasonably helpful error messages and help, often allowing students to debug their code more accurately. They've also got built-in hints/help available, so a student can't get permanently stuck. Using those, however, decreases the "exp" they gain, so you can still track how successful a student has been without help. The other major advantage is that DataCamp has a SIGNIFICANTLY larger set of lessons/courses available to pull from.

There is no free lunch in data/computer science, though. DataCamp does have a few downsides. Perhaps the biggest is the granularity available in assignments. You have three choices: "collect xp", "complete chapter", or "complete course". Given that a chapter is really the smallest cohesive learning unit on DataCamp, this makes a lot of sense educationally. However, it also means DataCamp isn't exactly an alternative for giving individual lab/homework assignments. Instead, it serves best as a resource/major assignment related to learning how to program in Python/R, or a bigger topic.

Finally, I want to mention Gradescope. Gradescope isn't a data science educational site. Instead, it's a jack-of-all-trades that can help ease the burden of assignments and grading. If DataCamp took L2M and removed granularity/options, Gradescope (in this context) goes the other direction. Lots of faculty use it for all kinds of courses, from computer science or mathematics to writing. Given its purpose, Gradescope doesn't have any specific assignments (maybe that was obvious). Instead, it can serve as an autograder or collection site for your own assignments. I've included it here for those who might already have assignments (or who get them from others) but still want a speedy, simple way to get feedback to students.

I'd be remiss if I didn't point out that there are some alternatives to DataCamp, depending on your goals. If all you need students to do is learn to program (not necessarily in a data-centric style), try Codecademy or explore Code.org. I also know there is an alternative to Gradescope, but I couldn't track down the name/site (if someone knows, please email me or leave a comment). What I recall is that the alternative is NOT free, but does provide better support and scaling. You might also consider what options are available or integrable with your learning management system (DataCamp IS…but maybe not by you…).

Hopefully you found this post informative. If you've got other suggestions of websites with assignments (particularly data-science related), please let me know or leave a comment.


Version Control and Reproducible Research/Data Science

A current hot topic in research, especially within statistically driven or statistically based research, is "reproducible research". In academia, the process of peer-reviewed publication is meant to assure that any findings are reproducible by other scientists. But those of us in the trenches, and especially on the data side of things, know that reproducibility is a theoretical outcome and far more rarely something actually tested. While academia is rightly under fire for this lack of actual, reproducible research (see this great example from epidemiology), this is even more of a problem in industry. If an analysis can't be reproduced, then it can't be applied to a new client base.

So why bring this up on an educational blog? I think it's important to embed the idea of reproducible work deep inside our teaching and assignment practices. While the idea of repeating a specific analysis once the data has changed isn't really novel, it becomes far more relevant when we begin talking about filtering or cleaning the input data. Just think about searching for outliers in a data-set. First, we might plot a histogram of values/categories; then we go back, remove the data points that we want ignored, and replot the histogram. BAM! There we have a perfect opportunity to teach the value of reproducible work! We used exactly the same visualization technique (a histogram) on practically the same data (with outliers and without outliers).

Where does the reproduction of the work fit in, though? Python and R both have histogram functions, so this is definitely a toy example (but the whole idea of functions can serve to emphasize the idea of reproducible/reusable work). Instead, I think this is where the instructor has an opportunity. This idea of cleaning outliers could easily be demonstrated in the command-line window of R or an interactive Python shell. And then you've lost your teaching moment. Instead, if this is embedded in an R script or a Python/R notebook, you can reuse the code, retrace whatever removal process you used, and so on. In the courses I've taught, I've seen student after student complete these sorts of tasks in the command-line window, especially when told to do so as part of an active, in-class demo. But they never move the code into a script, so when they are left to their own devices they flounder and have to go look for help.
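As a minimal sketch of what "moving it into a script" might look like, assuming a hypothetical CSV file (measurements.csv) with a numeric column named value, the whole plot/filter/replot cycle can live in one small, rerunnable Python file:

```python
# Minimal sketch of the histogram/outlier workflow as a reusable script.
# "measurements.csv" and its "value" column are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

def plot_value_histogram(df, column="value", title="Histogram"):
    """Plot a histogram of one column; reusing this function guarantees the
    'before' and 'after' plots use exactly the same technique."""
    df[column].hist(bins=30)
    plt.title(title)
    plt.xlabel(column)
    plt.ylabel("count")
    plt.show()

def remove_outliers(df, column="value", k=3.0):
    """Drop rows more than k standard deviations from the mean; the filtering
    rule is recorded in code instead of lost in an interactive session."""
    mean, std = df[column].mean(), df[column].std()
    return df[(df[column] - mean).abs() <= k * std]

if __name__ == "__main__":
    data = pd.read_csv("measurements.csv")
    plot_value_histogram(data, title="With outliers")
    cleaned = remove_outliers(data)
    plot_value_histogram(cleaned, title="Outliers removed")
```

Rerunning the script reproduces both figures exactly, and the outlier-removal rule is documented in the code rather than scattered across an interactive session.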

I titled this post "Version Control and Reproducible Research" … you might be wondering what version control has to do with this topic. The ideas described above are great if you are the sole purveyor of your code/project. But if you have your students working in teams, or are trying to collaborate yourself, this might not be exactly ideal. But it's getting pretty close! Here's the last nugget you need to make this work: version control. Or in this case, I'm specifically talking about using GitHub. The short version of what could be an entire separate post (I'll probably try to do one eventually) is that git (and the cloud repository GitHub) is the tool that software developers designed to facilitate collaborative software development without the desire to kill each other over broken code. It stores versions of code (or really any file) that can be jointly contributed to without breaking each other's work. For now, I'll point you to a few resources on this…

First, a bit more from an industry blog on workflows to promote reproduction using github — Stripe’s Notebooks and Github Post

Second, for using Git/GitHub with R — Jenny Bryan, Prof. University of British Columbia — Note that this is a really long, complete webpage/workshop resource!

Third, a template/package for Python to help structure your reproducible git-hub work — Cookiecutter Data Science —  (heck, this could be an entire lesson itself in how to manage a project– more on that later)

Fourth, a template/package for R to help structure your reproducible git-hub/R work — ProjectTemplate


Blog Intro and Information

Welcome to "From the Director's Desk", a blog about data science education and curriculum. If you are interested in receiving regular updates when new posts appear, you can use the RSS feed link above, or subscribe to the Google group (read more for the link; you don't need a Gmail account to subscribe!). You can find a bit more about me, Karl Schmitt, on the About page. If you are looking for full degree curriculum development materials, I've created a resource page and tracked posts with a Program Development category. Individual course materials are tracked either generally with the "Course Development" category, or individually by each course the post relates to. Please feel free to email me or leave comments if you have questions, thoughts, or something to share!

The original blog introduction, with a bit of why the blog exists and what it seeks to cover is here.

Math (Courses) for Data Science

I want to share some thoughts on the math required for a data scientist (or at least, a data science undergraduate degree). The discussion can really be boiled down to one question: “Discrete Mathematics or Calculus 2?” Let’s first take a look at the outcomes from an in-progress and two completed working groups on outlining data science education.

An ACM-organized workshop in 2015 included participants from the ACM, ASA, IEEE-CS, AMS, and more. That workshop's report does not explicitly state any math requirements, but does make clear the need for sufficient supporting statistics courses. The clearest recommendations come from a group of faculty at the Park City Mathematics Institute in the summer of 2016. Their report gives suggestions on how to build a data science degree from existing courses and ideas for new integrated courses (this is the real gold in the report). If constructing a curriculum from existing courses, the group recommends three mathematics courses: Calculus 1, Calculus 2, and Linear Algebra. Last, a series of round-table discussions on Data Science Post-Secondary Education is currently underway at the National Academies of Sciences, Engineering, and Medicine. While all three NAS round tables are interesting, only the first is relevant to this discussion. At that meeting, there was a presentation on the underlying mathematics of data science. The list of mathematics supporting data science included linear algebra, numerical analysis, and graph theory.

In summary, all three groups clearly support the need for linear algebra to be a part of any data science curriculum. I doubt you’ll find many objections to this idea since linear algebra forms the mathematical foundation for manipulating data contained in tables or arrays as rows/columns. If nothing else, simply learning the notation is vitally important for anyone wanting to extend algorithms for data science. All three also clearly support at least two traditional statistics courses, up through regression analysis. A little less clearly, I would argue that all three support the requirement of a Calc 1 course. The NAS round-table discussed needing numerical analysis, which is traditionally based on calculus concepts. The ACM workshop supported disciplinary knowledge and just about all science disciplines require at least one semester of calculus.

Alright, on to the differences. The PCMI group included Calculus 2 in their "minimum" courses needed for data science. In my opinion, the suggestion that Calc 2 be included in the bare minimum courses for data science is indicative of the dominance of mathematicians (many applied) and statisticians in the group (there were a FEW computer scientists). While I think the recommendations overall are quite good, the inclusion of Calc 2 over discrete mathematics (as well as the odd location of data mining) clearly reflects this make-up. The presentation on mathematics (from two mathematicians) at the first NAS round table, however, included graph theory as one of the three main supporting mathematical areas. So, perhaps the question from these two groups is: "Calculus 2 or Discrete Mathematics?"

Here's an alternative way to build an answer to this question. Instead of just focusing on the topics covered, what about the requirements of the other supporting disciplines that make up data science? Computer Science is pretty easy. Almost all programs require Calculus 1 and discrete mathematics, and the ACM 2013 guidelines include a list of core topics (set theory, graph theory, and logic) that are traditionally covered in either a discrete mathematics course or a combination of several mathematics courses. They also articulate very clearly that for some areas of computer science (like visualization or data science) linear algebra and statistics will be required. We can contrast this with the typical mathematics requirements of a statistics curriculum. Many statistics programs require a minimum of Calc 2 to support advanced probability courses (with a preference for multivariable calculus). The ASA 2014 guidelines specify that statistics majors should have both differentiation and integration (typically covered by Calc 1 and 2), and linear algebra.

Looking to the supporting disciplines can leave us just as confused about what to require. I think there is an answer, but it requires taking off the mathematician glasses and thinking about jobs, applications, and where a student might be headed. First, a good portion of researchers and practitioners doing data science use graphs and networks, often mining those graphs for information. It turns out graphs (the node/edge type, not the line/bar plot type) are also a great way to visualize a lot of information. Another key skill when doing data science is the ability to partition data, that is, to think of data as either meeting or not meeting specific criteria. This is encompassed in set theory in mathematics, and is sometimes partially covered as part of logic. Together these topics provide two new ways of thinking about data that aren't included in other mathematics courses. The need for this sort of knowledge, and for a basic introduction to proofs, is why discrete mathematics courses came into existence: to let CS majors get these topics without taking another 3 or 4 mathematics courses. To me, this is a far stronger case for including discrete mathematics than the (possible) need of Calculus 2 for advanced statistics courses. If you are requiring 4 math courses, by all means, include Calculus 2 next. Or, if a student is particularly interested in understanding the theoretical underpinnings of data science (by taking more statistics courses), then they should take Calc 2. But if we are really thinking about an undergraduate degree as a stand-alone credential that prepares students to enter the workforce, Calc 2 does not seem to add a lot of direct value to the student's degree.

Intro to Data Science Books

Friday morning I had the opportunity to chat with Kathryn (Katie) Kinnaird, currently an Applied Mathematics post-doc at Brown University and former director of the Data Science TRAIn Lab at Macalester College. Originally I had called to ask her about the methodology behind the TRAIn Lab and materials related to it (it's a neat approach and something I may adopt for my own research students), but we got to talking about the introduction to data science courses we taught this year and the textbooks we used. As a bit of a preview, I used two textbooks, Doing Data Science by Cathy O'Neil and Rachel Schutt, and Data Science from Scratch by Joel Grus. Katie used Analyzing Data with GraphPad Prism by Harvey Motulsky. I'm going to provide a short discussion of what I was looking for in my textbook(s), what I ended up getting, and ask for a bit of reader input.

Criteria:

When I was reviewing textbooks for Data 151, I had some fairly specific constraints in mind. First, I wanted a book that dealt with data science, not just data mining, machine learning, or statistics. That ruled out a lot of books, but there were still a few left to pick from. I also wanted a book that used either Python or R as its primary coding language, since I believe (based on conferences and literature) that these will be the languages of the Realm long-term (more on this in another post). This excluded a few more books, including the book Katie used since it focused on the Prism software package. It also excluded the textbook I previously used for a data mining class, Data Mining: Practical Machine Learning Tools and Techniques  by Witten, Frank, Hall, and Pal which uses Weka.

Alright, so general data science, R or Python. The last requirement I had was that the book didn't assume the reader knew a lot of (or any) coding. Wait…no coding experience? Yep. Data 151 is targeted at freshmen and interdisciplinary students who may well have zero coding experience. Katie's class was even less likely to have coding experience (she had a fall class instead of my spring class). There are a couple of books out there that seem to be targeted at upper-tier undergraduates and early graduate students, for example Introduction to Data Science by Igual and Seguí, but very few that are technical while still aimed at novices. That's how I landed on Doing Data Science and Data Science from Scratch. They were basically the only books I could find that even sort of fit my criteria. And even Doing Data Science assumes a bit of programming. So how did they work out?

Doing Data Science – Straight Talk from the Frontline:

I really like the description that becomingadatascientist.com used (read the whole review here):
“To make a metaphor, Rachel Schutt and Cathy O’Neil tell you about a great dish someone cooked, and give some general info about the process of making the dish, and what to watch out for when you attempt it yourself. They even include some quotes from the chef about the art of making this particular dish, and tips on preparing and presenting it.”

It's true, they cover everything you might want to touch on in an intro class and generally do so in a very high-level, newbie-friendly way. There are a few chapters that get way too technical (for my purposes), but those can be glossed over. So, what's the downside? Sadly, something written in 2013 has every potential to be out of date in the data science world unless very carefully written. Read carefully, and you'll find several comments that I don't feel are entirely true anymore. Second, the code in the book is all in R, and I had really planned to exclusively use Python. Third, while I wanted the book to not assume programming knowledge, I also didn't want it to ignore acquiring (some) programming knowledge (it does). Last, while generally accessible, it is still clearly written for graduate students or independent learners. That is to say, not really for brand-new students who aren't comfortable pulling out the key information they need. It also was not written as a textbook per se. It doesn't really have any true exercises included (a few are scattered throughout from the class the book was based on).

Data Science from Scratch – First Principles with Python:

Great, first principles, using Python, everything I wanted, right? Yep, it was, if I were only teaching data science majors or computer science majors. I could probably get away with no programming background (though probably not, or so my business students said). Really, if you are looking at implementing an Intro to Data Science course as a sophomore/junior-level elective in a computer science program, or possibly a Math/Stat program with a programming prerequisite, this is a reasonable choice. It covers the minimum statistics and mathematics you need (including linear algebra!) and doesn't really assume you know tons of Python.

So what's wrong with this text? For a general course, it's simply too technical. While it does cover several important ideas at a high level, the students lost those few comments in the nitty-gritty of code implementation and examples. Furthermore, while I really liked the idea behind the text, that you will implement everything you need to do data science from scratch (or from things earlier in the book), in practice that felt like a giant step backwards. There are quite a few fantastic packages for data science, from the really well known like Pandas and scikit-learn to lesser known like Bokeh. I understand wanting students to know from the ground up how things work, but you can understand things while still using well-tested packages (and avoid all your own errors).
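To make the contrast concrete, here is a minimal sketch (not from the book) of the package-based route: a k-nearest neighbors classifier, the kind of algorithm the from-scratch approach builds up over a chapter, fit in a few lines using scikit-learn and its bundled iris data.

```python
# A minimal sketch of the "use well-tested packages" argument: k-nearest
# neighbors with scikit-learn, versus the page or two of from-scratch code
# needed to implement the same algorithm by hand.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Students still need to understand what k-NN is doing, but leaning on a well-tested implementation frees class time for interpreting the results rather than debugging the algorithm itself.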

Something New?

So, bringing it back around to my conversation with Katie on Friday: she had also expressed frustration with her textbook in terms of reaching the intended audience (freshmen, new students). I wasn't happy in the end with either textbook. Next year I'll probably cut Data Science from Scratch, as it really didn't do what I needed. A lot of the other intro courses I know about don't even use a textbook. So, here are a few questions I'd like to pose to my readers:

  • Are there other textbooks that you are using, or know of that fit these needs?
  • What are your “must-haves” or “must-avoids” for a textbook for freshman-targeted courses in data science?

Student Learning Objectives – Part 4

This post is part of a series on student learning objectives (SLO’s) for both curriculum and courses. The SLO’s in this post are course level, specifically topical objectives for an “Introduction to Data Science” (Data 151) class for new students. Love them or hate them, student learning objectives are a part of higher education (I for one appreciate how they provide focus for curriculum and courses).

The last post focused on high-level learning objectives for the course “Introduction to Data Science” (I’ve repeated them below for reference). Those are certainly the big picture, but those four objectives are hardly enough to really design day-to-day lessons around. Data 151 also has seven topical objectives tied directly to those general objectives and modeled after Paul Anderson’s DISC 101 course objectives. I’ll tie each topical objective back to the course’s overall goals.

General Course Objectives:

A. Students understand the fundamental concepts of data science and knowledge discovery
B. Students can apply and perform the basic algorithmic and computational tasks for data science
C. Students develop and improve analytical thinking for problem formulation and solution validation, especially using technology
D. Students prepare for success in a world overflowing with data.

Topical Objectives:

  1. gain an overview of the field of knowledge discovery (A)
  2. learn introductory and state-of-the-art data mining algorithms (A,B)
  3. be able to distinguish and translate between data, information, and knowledge (A, C)
  4. apply algorithms for inductive and deductive reasoning (B,C)
  5. apply information filtering and validation on real world datasets (B,C)
  6. understand the social, ethical, and legal issues of informatics and data science (A,D)
  7. apply data mining, statistical inference, and machine learning algorithms to a variety of datasets including text, image, biological, and health (B,D)

Four of the topical objectives (1,2, 3 & 6) focus on guiding students towards understanding the fundamental concepts behind data science. One can hardly call a course an “introduction” without giving an overall picture of the field (Obj. 1) or spending time understanding key tools that practitioners use (Obj. 2). While I fully anticipate that the state-of-the-art algorithms will change, the basics like k-Nearest Neighbor, k-Means, and Decision Trees will certainly not. These algorithms provide a nice gateway into understanding the ideas of learning from a collection of data (Obj. A).

We also know in data science that what you can learn from a data-set is limited by the quality of the input data (like a lot of other things in life, garbage-in = garbage-out). Objectives 5 & 7 articulate the sorts of data that will be used in the course, both real-world data and a mix of prepared/known data sets. These data sets provide a way to actually practice Objectives 2 & 4 in more than just an abstract way. I want students to walk away from this class knowing how practitioners actually make use of algorithms. Students need to get their hands dirty putting some of those algorithms to work (Obj. B/C).

Now, I think it’s important to note here that in their projects and general work, I’m not expecting a really deep understanding or application of the algorithms. That’s saved for two later courses, one explicitly on data mining and the other their capstone sequence. In Data 151 they should be learning enough to continue learning on their own, understand and interact with people who are really doing this work, and to grasp how the ideas can and are shaping the evolution of various disciplines or industries.

While Objectives 2, 4 & 5 articulate using data science skills, Objectives 2-5 have a second layer as well. These objectives aim to have students think about the implications and knowledge that come from the data science process. This course is about more than just data engineering or data mining; it's really about the questions and, well, science that can be done with data. It is only when students can understand the processes of both inductive and deductive reasoning for science, or transform raw data into actionable knowledge, that they become aware of the true power of the field (Obj. B/C).

Last, but certainly not least, Objective 6. As we know from Spider-Man (and some other great speeches), "With great power comes great responsibility." If you believe, like I do, that data science could dramatically change what we know and how industries and society are run… then I hope you are also a little nervous, perhaps occasionally terrified. Because if we DON'T talk about the social, ethical, and legal issues surrounding informatics and data science, we might well end up with something like Ultron (the artificial intelligence gone bad in Marvel's "Avengers: Age of Ultron"). More likely, we'll end up with biased learning algorithms that perpetuate injustices or inequality. Making sure students have at least started to think about these sorts of issues may not prevent them from happening, but it is one (in my mind necessary) step towards that goal (Obj. D).

Together this is a pretty hefty set of things to accomplish in a semester. All in all though, I think they serve as a great lead into the entire field, and the overall goals of Valpo’s Data Science program (described in previous posts). Even if a student only takes Data 151 (as some certainly will), they will leave with a broad understanding of the field, enough knowledge to interact successfully with experts, and enough insight to see the real value that the effective and intelligent use of data can provide. I hope my business students are now prepared to be the “data-savvy business managers” that McKinsey & Co. described a few years ago and that the rest (C.S., Math and Stats) can work with, or become true data scientists, engineers, or creators.

Student Learning Objectives – Part 3

This post is part of a series on student learning objectives (SLO’s) both for curriculum and courses. The SLO’s in this post are course level, specifically for an “Introduction to Data Science” (Data 151) class for new students. Love them or hate them, student learning objectives are a part of higher education (I for one appreciate how they provide focus for curriculum and courses).

In many ways, the general course SLO's for Data 151 mirror the SLO's for the program as a whole. Students need to leave with an understanding of what data science is, know about the basic algorithms, and be made aware of the ethical and moral issues surrounding the use of data. Data 151 is intended to be a hook that draws in students from across our university to learn about data and then consider adding a major in Data Science. It also draws in juniors and seniors in less technical disciplines like business. This may in turn make Data 151 the only course where a student explicitly thinks about data. The major difference between the curricular and course SLO's is the depth of understanding I expect students to leave the course with (as opposed to the program). This is most clear in the first two SLO's below.

  1. Students understand the fundamental concepts of data science and knowledge discovery
  2. Students can apply and perform the basic algorithmic and computational tasks for data science

As said, these are very close to the first two SLO’s for the whole curriculum and related to both their ability to communicate data science concepts and also their ability to implement solutions, though in both cases with lower levels of expertise. Data 151 has two additional SLO’s that target the broader (potential) audience for the course (in addition to continuing majors). These are:

  3. Students develop and improve analytical thinking for problem formulation and solution validation, especially using technology
  4. Students prepare for success in a world overflowing with data.

In many cases, students in Intro to Data Science are still gaining experience (aren’t we all?) with general problem solving skills. Perhaps (to my mind) one of the most under-taught skills in STEM courses is how to actually formulate and structure the process of solving a problem. In many, many cases, a significant amount of time can be saved in the execution of problem solving by carefully planning out how you are going to explore or solve a problem. Data science even has this explicitly built into several locations in a typical workflow, specifically performing exploratory data analysis and planning for solution validation.

Meanwhile, the final objective is meant to really be a catch-all. The field of data science is changing incredibly rapidly, as are the ways data is generated and used. I wanted Data 151 to be something that is capable of covering current, bleeding-edge topics. This SLO also nicely encompasses my plans to bring in alumni and current practitioners as speakers to give the students insight into what future jobs might look like. Bringing in these speakers also provides a chance for students to get an industry perspective on workflows and processes, something that can be very different from academia’s problem solving process.

These SLO’s are pretty high-level, but intentionally so. At Valpo, we’ve got both “course objectives” and also topical objectives. My next post will take a look at the specific, topical objectives for Data 151, which deal with the more nitty-gritty topics of what will actually get covered in Data 151.