Pros and Cons of Top Data Science Online Courses

There are a variety of data science courses online, but which one is the best? Find out the pros and cons of each!

Coursera, edX, etc.

These MOOCs have been around for several years now and continue to grow. But are they really the best option for learning online?

Pros:

  • Lots of topics, including R and Python
  • Affordable, with some free options
  • Well-thought-out curricula from professors at top schools

Cons:

  • Not easily translatable to industry
  • Taught by academics rather than current industry professionals

Now, these MOOCs are still worth checking out to see if they work for you, but be aware that you may get tired of analyzing the iris data set.

PluralSight

Pros:

  • Lots of topics covering R, Python, and databases
  • Easy to skip around through the user interface instead of going in order
  • Taught by industry veterans at top companies who know current trends and expectations
  • You can use your own tools (Anaconda and RStudio) on your computer rather than in the website itself

Cons:

  • Data courses are still a bit limited, but the catalog is growing quickly

DataCamp

Pros:

  • Great options for beginner to intermediate learners
  • Courses build on each other, with fairly good examples
  • Most instructors have spent time in the industry

Cons:

  • You have to use their in-browser coding tool
  • Exercises are not always that clear
  • You never know if the code will work the same way on your own computer

So that’s a quick overview of options for learning online. Of course, blogs are fantastic too, and Stack Overflow can really be helpful!

Feel free to add your recommendations, too!

Check out PluralSight’s great offer today!

Shapiro-Wilk Test for Normality in R

I think the Shapiro-Wilk test is a great way to see if a variable is normally distributed. Normality is an important assumption both when building and when evaluating many kinds of models.

Let’s look at how to do this in R!

shapiro.test(data$CreditScore)

And here is the output:

Shapiro-Wilk normality test
data:  data$CreditScore
W = 0.96945, p-value = 0.2198

So how do we read this? At first glance the p-value may look too high, but here that is actually what we want. With the usual 0.05 threshold, a p-value of 0.2198 means we fail to reject the null hypothesis: we don’t have enough evidence to say the population is not normally distributed.
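If you want to see both outcomes without a dataset, here is a minimal sketch using simulated data (the variable names and distributions are just for illustration):

#simulated data: one normal variable, one clearly non-normal
set.seed(42)
normal_var <- rnorm(100)
skewed_var <- rexp(100)
shapiro.test(normal_var)   #large p-value: fail to reject normality
shapiro.test(skewed_var)   #tiny p-value: reject normality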

Let’s make a histogram to take a look using base R graphics:

hist(data$CreditScore, 
     main="Credit Score", 
     xlab="Credit Score", 
     border="light blue", 
     col="blue", 
     las=1, 
     breaks=5)

Our distribution looks nice here:

Great! I would feel comfortable making more assumptions and performing some tests.

Dollar Signs and Percentages: 3 Different Ways to Convert Data Types in R

Working with percentages in R can be a little tricky, but it’s easy to convert them to integer or numeric values so you can run the right statistics on them, such as quartiles and means rather than frequencies.

data$column = as.integer(sub("%", "", data$column))

Essentially you are using the sub function to substitute the “%” with an empty string. One caveat: as.integer truncates decimals, so if your percentages have decimal places, use as.numeric instead. Either way, just remember that the results are still percentage amounts.
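A quick illustration with a made-up vector:

utilization <- c("25%", "70%", "12.5%")
as.integer(sub("%", "", utilization))   #25 70 12 (decimal truncated)
as.numeric(sub("%", "", utilization))   #25.0 70.0 12.5 (decimal kept)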

The next example is converting to a factor:

data$column = as.factor(data$column)

Now R treats the data as discrete, which is great for categorical and nominal-level variables.
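For instance, summarizing a factor gives counts per level instead of a character summary (the vector here is made up):

grade <- c("A", "B", "B", "C")
summary(as.factor(grade))   #counts per level: A=1, B=2, C=1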

The last example is converting to numeric. If a variable has a dollar sign (and thousands separators), use this to change it to a number:

data$balance = gsub(",", "", data$balance)
data$balance = as.numeric(gsub("\\$", "", data$balance))

Check out the before:

Balance   : Factor w/ 40 levels "$1,000","$10,000",..: 
Utilization  : Factor w/ 31 levels "100%","11%","12%",

And after:

Balance      : num  11320 7200 20000 12800 5700 ...
Utilization  : int  25 70 55 65 75 
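As a side note, you can strip both the dollar sign and the commas in one pass with a small character class (a sketch with made-up values):

balance <- c("$1,000", "$10,000", "$5,700")
as.numeric(gsub("[$,]", "", balance))   #1000 10000 5700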

I hope this helps with your formatting tasks! So simple and easy, and you’ll be able to summarize your data properly!

Unsupervised Machine Learning in R: K-Means

K-means clustering is unsupervised machine learning because there is no target variable. Clustering can be used to create a target variable, or simply to group data by certain characteristics.

Here’s a great and simple way to use R to find clusters, visualize them, and then tie them back to the data source to implement a marketing strategy.

#set the working directory to the folder that holds the CSV,
#e.g. setwd("~/data")
#import dataset
ABC <- read.table("AbcBank.csv", header=TRUE,
                  sep=",")

#choose variables to be clustered
#make sure to exclude ID fields or dates
ABC_num <- ABC[, 2:5]
#scale the data so the variables are all normalized
ABC_scaled <- as.data.frame(scale(ABC_num))

#kmeans function: 3 clusters, 25 random starts
k3 <- kmeans(ABC_scaled, centers=3, nstart=25)
#library with the visualization
library(factoextra)
fviz_cluster(k3, data=ABC_scaled,
             ellipse.type="convex",
             axes=c(1,2),
             geom="point",
             ggtheme=theme_classic())
#check out the centers
#remember these are normalized, but higher values
#still mean higher values in the original data
k3$centers
#add the cluster to the original dataset!
ABC$Cluster <- as.numeric(k3$cluster)
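One judgment call in the script above is centers=3. A common way to sanity-check that choice is the elbow plot, which comes with the same factoextra package (a quick sketch, meant to run after the code above):

#elbow plot: look for the bend in total within-cluster sum of squares
fviz_nbclust(ABC_scaled, kmeans, method="wss")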
    

Check out our awesome clusters:
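To tie the segments back to a marketing strategy, one simple sketch is to profile each cluster’s averages on the original, unscaled variables (aggregate is base R; ABC_num and k3 come from the script above):

#average of each original variable per cluster
aggregate(ABC_num, by=list(Cluster=k3$cluster), FUN=mean)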

Repo here with dataset: https://github.com/emileemc/kmeans