# Choropleth Map in ggplot2

Creating a map in ggplot2 can be surprisingly easy! This tutorial will map the US by state using a dataset of 1970s statistics, including population, income, life expectancy, and illiteracy.

I love making maps. While predictive statistics provide great insight, map making was one of the things that really got me interested in data science, and I’m glad R provides a great way to make them.

I’d also recommend the plotly package, which lets you make the map interactive with hover tooltips. All within R!

Here is the first map we will make:

This first map shows US population by state in 1970.

```
library(ggplot2)
library(dplyr)
library(maps)   # supplies the state polygons used by map_data()

states <- as.data.frame(state.x77)
states$region <- tolower(rownames(states))
states_map <- map_data("state")
fact_join <- left_join(states_map, states, by = "region")

ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = Population), color = "white") +
  scale_fill_viridis_c(option = "C") +
  theme_classic()
```

For the next graph the code will be mostly the same; I will just change the variable mapped to `fill`.

Let’s try per capita income:

This is great. We’re able to see the income range through the color fill of each state.

```
ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = Income), color = "white") +
  scale_fill_viridis_c(option = "C") +
  theme_classic()
```

Last one we’ll make is life expectancy:

Great info here: life expectancy in the 1970s, by state! That particular variable needed a little extra coding to make sure it’s numeric; see below:

```
fact_join$`Life Exp` <- as.numeric(fact_join$`Life Exp`)

ggplot(fact_join, aes(long, lat, group = group)) +
  geom_polygon(aes(fill = `Life Exp`), color = "white") +
  scale_fill_viridis_c(option = "C") +
  theme_classic()
```

Enjoy your maps! Also, this dataset is publicly available (it ships with base R as `state.x77`), so feel free to recreate them.
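As mentioned above, the plotly package can turn a map like this into an interactive one. Here is a minimal sketch, assuming the `fact_join` data frame built in the code above; the `text` aesthetic is just one way to control what the hover tooltip shows:

```r
library(ggplot2)
library(plotly)

# reuse fact_join from the choropleth code above
p <- ggplot(fact_join,
            aes(long, lat, group = group,
                text = paste0(region, ": ", Population))) +
  geom_polygon(aes(fill = Population), color = "white") +
  scale_fill_viridis_c(option = "C") +
  theme_classic()

# wrap the ggplot object; hovering now shows state name and population
ggplotly(p, tooltip = "text")
```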

# Shapiro-Wilk Test for Normality in R

I think the Shapiro-Wilk test is a great way to check whether a variable is normally distributed. Normality is an important assumption both in building many kinds of models and in evaluating them.

Let’s look at how to do this in R!

```
shapiro.test(data$CreditScore)
```

And here is the output:

```
	Shapiro-Wilk normality test

data:  data$CreditScore
W = 0.96945, p-value = 0.2198
```

So how do we read this? The threshold for the p-value is 0.05, and here the p-value is 0.2198, well above it. So we fail to reject the null hypothesis: we don’t have enough evidence to say the population is not normally distributed. Note that a high p-value doesn’t prove normality; it just means the test found no evidence against it.
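To build intuition for how the test behaves, here is a small demonstration on simulated (hypothetical) data, one sample that really is normal and one that isn’t:

```r
set.seed(42)
normal_sample <- rnorm(100)  # drawn from a normal distribution
skewed_sample <- rexp(100)   # drawn from a right-skewed distribution

# large p-value expected: fail to reject normality
shapiro.test(normal_sample)

# tiny p-value expected: reject normality
shapiro.test(skewed_sample)
```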

Let’s make a histogram to take a look using base R graphics:

```
hist(data$CreditScore,
     main = "Credit Score",
     xlab = "Credit Score",
     border = "light blue",
     col = "blue",
     las = 1,
     breaks = 5)
```

Our distribution looks nice here:

Great! I would feel comfortable making more assumptions and performing some tests.

# Dollar Signs and Percentages: 3 Different Ways to Convert Data Types in R

Working with percentages in R can be a little tricky, but it’s easy to convert them to a numeric type so you can run the right statistics on them, such as quartiles and means, instead of frequencies.

```
data$column <- as.numeric(sub("%", "", data$column))
```

Essentially you are using the sub function to substitute the “%” with an empty string. Converting with as.numeric keeps any decimals; just remember that in the end those values are still percentage amounts.
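For instance, with a few hypothetical values:

```r
# strip the "%" and convert; the decimals survive
pct <- c("12.5%", "7%", "99.9%")
as.numeric(sub("%", "", pct))
# → 12.5  7.0  99.9
```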

The next example is converting to a factor:

```
data$column <- as.factor(data$column)
```

Now the data will be read as discrete. This is great for categorical and nominal-level variables.

The last example is converting to numeric. If you have a variable that has a dollar sign (and comma separators), use this to change it to a number:

```
data$balance <- gsub(",", "", data$balance)
data$balance <- as.numeric(gsub("\\$", "", data$balance))
```

Check out the before

```
Balance      : Factor w/ 40 levels "$1,000","$10,000",..
Utilization  : Factor w/ 31 levels "100%","11%","12%",..
```

And after

```
Balance      : num  11320 7200 20000 12800 5700 ...
Utilization  : int  25 70 55 65 75 ...
```

I hope this helps you with your formatting tasks! So simple and easy, and you’ll be able to summarize your data properly.

# Unsupervised Machine Learning in R: K-Means

K-Means clustering is unsupervised machine learning because there is no target variable. Clustering can be used to create a target variable, or simply to group data by certain characteristics.

Here’s a great and simple way to use R to find clusters, visualize them, and then tie the results back to the data source to inform a marketing strategy.

```
# set your working directory (path below is a placeholder)
setwd("~/your/project/folder")
# import the dataset (file name below is a placeholder)
ABC <- read.csv("ABC.csv", sep = ",")

# choose the variables to be clustered
# make sure to exclude ID fields or dates
ABC_num <- ABC[, 2:5]
# scale the data so the variables are all normalized
ABC_scaled <- as.data.frame(scale(ABC_num))

# kmeans function
k3 <- kmeans(ABC_scaled, centers = 3, nstart = 25)

# library with the visualization
library(factoextra)
fviz_cluster(k3, data = ABC_scaled,
             ellipse.type = "convex",
             axes = c(1, 2),
             geom = "point",
             label = "none",
             ggtheme = theme_classic())

# check out the centers
# remember these are scaled values, but higher
# still means higher in the original data
k3$centers

# add the cluster assignment back to the original dataset!
ABC$Cluster <- as.numeric(k3$cluster)
```
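One question the code above doesn’t answer is why three centers. A common way to pick the number of clusters is the elbow method: run k-means for a range of k and plot the total within-cluster sum of squares. Below is a minimal sketch on simulated data (the ABC dataset itself isn’t reproduced here):

```r
set.seed(123)
# simulated stand-in for the scaled ABC data (4 numeric variables)
fake_scaled <- as.data.frame(scale(matrix(rnorm(200 * 4), ncol = 4)))

# total within-cluster sum of squares for k = 1..8
wss <- sapply(1:8, function(k) {
  kmeans(fake_scaled, centers = k, nstart = 25)$tot.withinss
})

plot(1:8, wss, type = "b",
     xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares")
# pick k near the "elbow", where adding clusters stops helping much
```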

Check out our awesome clusters:
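With the cluster labels joined back on, a natural next step for a marketing strategy is to profile each segment by averaging the original variables per cluster. A minimal sketch on simulated data (the column names here are hypothetical stand-ins for the ABC variables):

```r
set.seed(123)
# hypothetical customer data: an ID column plus 4 numeric variables
ABC <- data.frame(id      = 1:60,
                  spend   = rnorm(60, 100, 20),
                  visits  = rnorm(60, 10, 3),
                  tenure  = rnorm(60, 24, 6),
                  returns = rnorm(60, 2, 1))

k3 <- kmeans(scale(ABC[, 2:5]), centers = 3, nstart = 25)
ABC$Cluster <- as.numeric(k3$cluster)

# mean of each original (unscaled) variable per cluster:
# these per-segment profiles are what you'd hand to marketing
aggregate(ABC[, 2:5], by = list(Cluster = ABC$Cluster), FUN = mean)
```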

Repo here with dataset: https://github.com/emileemc/kmeans