Choropleth Map in ggplot2

Creating a map in ggplot2 can be surprisingly easy! This tutorial maps the US by state, using a 1970s dataset of state statistics including population, income, life expectancy, and illiteracy.

I love making maps. While predictive statistics provide great insight, map making is one of the things that really got me interested in data science. I’m also glad R provides a great way to make them.

I’d also recommend the plotly package, which lets you make the map interactive as you hover over it. All within R!
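For instance, a ggplot map can be wrapped in plotly's ggplotly() to get hover tooltips. This is just a sketch, not part of the tutorial's code; it assumes the ggplot2, maps, and plotly packages are installed:

```r
# Hedged sketch: an interactive state map via plotly's ggplotly()
library(ggplot2)
library(plotly)

states_map <- map_data("state")          # state outlines from the maps package
p <- ggplot(states_map, aes(long, lat, group = group, text = region)) +
  geom_polygon(fill = "grey90", color = "white")

ggplotly(p, tooltip = "text")            # hover shows the state name
```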

Here is the first map we will make:

This is population by state in the 1970s US.


library(ggplot2)
library(dplyr)
library(maps)

# states comes from the built-in state.x77 dataset (1970s state statistics)
states <- as.data.frame(state.x77)
states$region <- tolower(rownames(states))
states_map <- map_data("state")
fact_join <- left_join(states_map, states, by = "region")

ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = Population), color = "white") +
  scale_fill_viridis_c(option = "C")

For the next graph the code is mostly the same; only the fill = aesthetic changes.

Let’s try per capita income:

This is great. We’re able to see the income range through the color fill of each state.

ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = Income), color = "white") +
  scale_fill_viridis_c(option = "C")

Last one we’ll make is life expectancy:

Great info here! Life expectancy by state, in the 1970s! That particular variable needed a little extra coding; see below:

fact_join$`Life Exp` <- as.numeric(fact_join$`Life Exp`)

ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = `Life Exp`), color = "white") +
  scale_fill_viridis_c(option = "C")

Enjoy your maps! Also this dataset is publicly available so feel free to recreate.

Pros and Cons of Top Data Science Online Courses

There are a variety of data science courses online, but which one is the best? Find out the pros and cons of each!

Coursera, edX, etc.

These MOOCs have been around for several years now and continue to grow. But are they really the best option for learning online?


Pros:

  • Lots of topics, including R and Python
  • Affordable, with even a free option
  • Well-thought-out curricula from professors at great schools


Cons:

  • Not easily translatable to industry
  • Taught by academics rather than current industry professionals

Now, these MOOCs are still worth checking out to see if they work for you, but beware that you may get tired of analyzing the iris dataset.



Pros:

  • Lots of topics in R, Python, and databases
  • Easy to skip around through the user interface instead of going in order
  • Taught by industry veterans at top companies who know current trends and expectations
  • You can use your own apps (Anaconda and RStudio) on your computer rather than coding in the website itself


Cons:

  • Data courses are still a bit limited, but the catalog is growing quickly



Pros:

  • Great options for beginners through intermediate learners
  • Courses build on each other, with fairly good examples
  • Most instructors have spent time in industry


Cons:

  • You have to use their in-browser coding tool
  • Exercises are not always that clear
  • You never know if your code will work the same way on your own computer

So that’s a quick overview of options for learning online. Of course blogs are fantastic, too, and Stack Overflow can really be helpful!

Feel free to add your recommendations, too!

Check out PluralSight’s great offer today!

Shapiro-Wilk Test for Normality in R

I think the Shapiro-Wilk test is a great way to see if a variable is normally distributed. This is an important assumption in creating any sort of model and also evaluating models.

Let’s look at how to do this in R!

shapiro.test(data$CreditScore)
And here is the output:

Shapiro-Wilk normality test
data:  data$CreditScore
W = 0.96945, p-value = 0.2198

So how do we read this? At first glance the p-value might look high, but that is the point: the threshold is 0.05, and 0.2198 is well above it. So here we fail to reject the null hypothesis. We don’t have enough evidence to say the population is not normally distributed.
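To build intuition, here is a quick base-R check on simulated data (not the credit data) showing how the test behaves on a clearly normal sample versus a clearly skewed one:

```r
set.seed(42)
normal_sample <- rnorm(100, mean = 600, sd = 50)  # bell-shaped data
skewed_sample <- rexp(100, rate = 1)              # right-skewed data

# normal data: expect a large p-value (no evidence against normality)
shapiro.test(normal_sample)$p.value
# skewed data: expect a tiny p-value (reject normality)
shapiro.test(skewed_sample)$p.value
```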

Let’s make a histogram to take a look using base R graphics:

hist(data$CreditScore,
     main = "Credit Score",
     xlab = "Credit Score",
     border = "light blue")

Our distribution looks nice here:

Great! I would feel comfortable making more assumptions and performing some tests.

Dollar Signs and Percentages: 3 Different Ways to Convert Data Types in R

Working with percentages in R can be a little tricky, but it’s easy to convert them to numeric and run the right statistics on them, such as quartiles and means rather than frequencies.

data$column <- as.numeric(sub("%", "", data$column))

Essentially you are using the sub function to substitute the “%” with nothing. Using as.numeric (rather than as.integer, which truncates) means you don’t lose any decimals either! So in the end just remember that those are percentage amounts.
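As a quick illustration with made-up values:

```r
# hypothetical percentage strings
utilization <- c("25%", "70.5%", "3%")
# strip the % and convert; decimals survive
as.numeric(sub("%", "", utilization))   # 25.0 70.5 3.0
```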

The next example is converting to a factor:

data$column = as.factor(data$column)

Now you can read the data as discrete. This is great for categorical and nominal level variables.

The last example is converting to numeric. If you have a variable that has a dollar sign, use this to change it to a number.

data$balance <- gsub(",", "", data$balance)                 # remove the commas
data$balance <- as.numeric(gsub("\\$", "", data$balance))   # remove the $ and convert

Check out the before

Balance   : Factor w/ 40 levels "$1,000","$10,000",..: 
Utilization  : Factor w/ 31 levels "100%","11%","12%",

And after

Balance      : num  11320 7200 20000 12800 5700 ...
Utilization  : int  25 70 55 65 75 

I hope this helps with your formatting tasks! So simple and easy, and you’ll be able to summarize your data!
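Putting it all together on a small made-up vector:

```r
balance <- c("$1,000", "$11,320", "$7,200")       # hypothetical dollar strings
balance <- gsub(",", "", balance)                 # strip the commas
balance <- as.numeric(gsub("\\$", "", balance))   # strip the $ and convert
balance                                           # 1000 11320 7200
mean(balance)                                     # now summary statistics work
```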

Unsupervised Machine Learning in R: K-Means

K-Means clustering is unsupervised machine learning because there is no target variable. Clustering can be used to create a target variable, or simply to group data by certain characteristics.

Here’s a great and simple way to use R to find clusters, visualize them, and then tie them back to the data source to implement a marketing strategy.

#import dataset
ABC <- read.table("AbcBank.csv", header = TRUE, sep = ",")

#choose variables to be clustered
#make sure to exclude ID fields or dates
ABC_num <- ABC[, 2:5]

#scale the data! so they are all normalized
ABC_scaled <- scale(ABC_num)

#kmeans function
k3 <- kmeans(ABC_scaled, centers = 3, nstart = 25)

#library with the visualization
library(factoextra)
fviz_cluster(k3, data = ABC_scaled, axes = c(1, 2))

#check out the centers
#remember these are normalized but
#higher values are higher values for the original data
k3$centers

#add the cluster to the original dataset!
ABC$cluster <- k3$cluster

Check out our awesome clusters:

Repo here with dataset:
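If the repo’s CSV isn’t handy, the same pipeline can be sketched with a built-in dataset; iris stands in for AbcBank.csv here purely for illustration:

```r
# Self-contained k-means sketch using base R's built-in iris data
num <- scale(iris[, 1:4])                 # numeric columns, normalized
set.seed(25)
k3 <- kmeans(num, centers = 3, nstart = 25)

k3$centers                                # centers on the scaled variables
table(k3$cluster)                         # how many rows fell in each cluster
iris$cluster <- k3$cluster                # tie the clusters back to the data
```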

Easy R: Summary statistics grouping by a categorical variable

Once I found this great R approach that really improves on running base R’s summary() function over a whole data frame, it was a game changer.

Combining base R’s split() with the map() function from the purrr package gives summary statistics for every variable, grouped by a categorical variable. The result can also be saved as a list with an assignment.

library(purrr)
credit %>% split(credit$Date) %>% map(summary)

Simply split the data frame on the categorical column (here credit$Date), then use the map function to run summary on each group. And that’s it! All set to produce results like these:

   Homeowner       Credit.Score   Years.of.Credit.History
 Min.   :0.0000   Min.   :485.0   Min.   : 2.00          
 1st Qu.:0.0000   1st Qu.:545.5   1st Qu.: 5.50          
 Median :0.0000   Median :591.0   Median : 9.00          
 Mean   :0.3704   Mean   :601.6   Mean   :10.33          
 3rd Qu.:1.0000   3rd Qu.:630.0   3rd Qu.:14.50          
 Max.   :1.0000   Max.   :811.0   Max.   :22.00          
 Revolving.Balance Revolving.Utilization    Approval        Loan.Amount
 $2,000  : 2       100%   : 3            Min.   :0.0000   $11,855 : 1  
 $27,000 : 2       65%    : 2            1st Qu.:0.0000   $12,150 : 1  
 $29,100 : 2       70%    : 2            Median :0.0000   $13,054 : 1  
 $1,000  : 1       78%    : 2            Mean   :0.1481   $15,451 : 1  
 $10,500 : 1       79%    : 2            3rd Qu.:0.0000   $16,218 : 1  
 $12,050 : 1       85%    : 2            Max.   :1.0000   $17,189 : 1  
 (Other) :18       (Other):14                             (Other) :21  
   Date    Default
 Aug :27   0:14   
 July: 0   1:13   
   Homeowner       Credit.Score   Years.of.Credit.History
 Min.   :0.0000   Min.   :620.0   Min.   : 2.0           
 1st Qu.:0.5000   1st Qu.:682.5   1st Qu.: 8.0           
 Median :1.0000   Median :701.0   Median :12.0           
 Mean   :0.7391   Mean   :711.8   Mean   :12.3           
 3rd Qu.:1.0000   3rd Qu.:746.5   3rd Qu.:16.5           
 Max.   :1.0000   Max.   :802.0   Max.   :24.0           
 Revolving.Balance Revolving.Utilization    Approval        Loan.Amount
 $11,200 : 2       11%    : 2            Min.   :0.0000   $3,614  : 2  
 $11,700 : 2       15%    : 2            1st Qu.:1.0000   $12,303 : 1  
 $6,100  : 2       20%    : 2            Median :1.0000   $12,338 : 1  
 $10,000 : 1       5%     : 2            Mean   :0.8261   $12,712 : 1  
 $10,500 : 1       7%     : 2            3rd Qu.:1.0000   $13,020 : 1  
 $11,320 : 1       70%    : 2            Max.   :1.0000   $17,697 : 1  
 (Other) :14       (Other):11                             (Other) :16  
   Date    Default
 Aug : 0   0:10   
 July:23   1:13   

You’ll have to do some formatting, or export to Excel! So fast and easy with this one.
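If you’d rather stay in dplyr, a similar grouped summary can be sketched with group_by() and summarise(); the miniature credit data frame below is made up just to keep the example self-contained:

```r
library(dplyr)

# hypothetical miniature of the credit data
credit <- data.frame(
  Date = c("Aug", "Aug", "July", "July"),
  Credit.Score = c(601, 630, 701, 746)
)

credit %>%
  group_by(Date) %>%
  summarise(mean_score = mean(Credit.Score), n = n())
```

The trade-off: split() + map(summary) gives the full six-number summary of every column per group, while group_by() + summarise() lets you pick exactly which statistics to compute.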

R Weekly on Github

Check out this repo on GitHub:

This is a great place to start with some coding fun and contribute to the community. It also holds you accountable.

Here’s an excerpt from their page:

R Weekly

R weekly provides weekly updates from the R community. You are welcome to contribute as long as you follow our code of conduct and our contributing guide.

How to contribute by using this repo

Update the draft post, and create a pull request.

Please respect the categories indicated in the contributing guide. The contributing guide also explains how to add images if necessary and when the weekly newsletter is frozen.

How to contribute without using Github

Submit your links or feeds for R Weekly posts and podcasts by going to

Note: Please use the W3C Feed Validation Service to check the syntax of Atom or RSS feeds.


Talk with us!

Have a question or great idea about this website?

Talk with us on Twitter or Google Groups or via opening an issue.