CORRELATION


Pearson correlation
A Pearson correlation measures the strength of the linear relationship between two continuous variables. A linear relationship is one that can be captured by drawing a straight line on a scatterplot between the two variables of interest. The value of the correlation provides information both about the nature and the strength of the relationship.

Correlations range between -1.0 and 1.0.
The sign of the correlation describes the direction of the relationship. A positive sign indicates that as one variable gets larger the other also tends to get larger, while a negative sign indicates that as one variable gets larger the other tends to get smaller.
The magnitude of the correlation describes the strength of the relationship. The further that a correlation is from zero, the stronger the relationship is between the two variables. A zero correlation would indicate that the two variables aren’t related to each other at all.

Correlations only measure the strength of the linear relationship between the two variables. Sometimes you have a relationship that would be better measured by a curve of some sort rather than a straight line. In this case the correlation coefficient would not provide a very accurate measure of the strength of the relationship. If a line accurately describes the relationship between your two variables, your ability to predict the value of one variable from the value of the other is directly related to the correlation between them. When the points in your scatterplot are all clustered closely about a line your correlation will be large and the accuracy of the predictions will be high. If the points tend to be widely spread your correlation will be small and the accuracy of your predictions will be low.
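This limitation to linear relationships is easy to demonstrate numerically. The original text describes the SPSS workflow only; as an illustrative sketch in pure Python (computing the correlation directly from its definition), a perfectly U-shaped relationship yields a Pearson correlation of exactly zero even though the two variables are completely dependent:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation, computed directly from its definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [-2, -1, 0, 1, 2]
y_line = [2 * a for a in x]    # perfect straight-line relationship
y_curve = [a * a for a in x]   # perfect U-shaped relationship

print(pearson_r(x, y_line))    # essentially 1.0: a line captures this perfectly
print(pearson_r(x, y_curve))   # 0.0: the curve is invisible to Pearson
```

The second result is the cautionary case: the correlation is zero, yet knowing x tells you y exactly.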

The Pearson correlation assumes that both of your variables have normal distributions. If this is not the case then you might consider performing a Spearman rank-order correlation instead (described below).

To perform a Pearson correlation in SPSS
Choose Analyze, then Correlate, then Bivariate.
Move the variables you want to correlate to the Variables box.
Click the OK button.
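Outside SPSS, the same analysis is a one-liner in most statistics packages. As an illustrative sketch (scipy, not part of the SPSS workflow described above, with made-up data), `scipy.stats.pearsonr` returns the same two key numbers SPSS reports: the correlation and the p-value testing whether it differs from zero:

```python
from scipy.stats import pearsonr

# Small made-up sample, chosen so the correlation works out to 0.9.
x = [1, 2, 3, 4, 5]
y = [1, 2, 3, 5, 4]

r, p = pearsonr(x, y)
print(f"r = {r:.3f}")  # r = 0.900; p is the two-sided test against zero
```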

The output of this analysis will contain the following section.
Correlations. This section contains the correlation matrix of the variables you selected. A variable always has a perfect correlation with itself, so the diagonals of this matrix will always have values of 1. The other cells in the table provide you with the correlation between the variable listed at the top of the column and the variable listed to the left of the row. Below this is a p-value testing whether the correlation differs significantly from zero. Finally, the bottom value in each box is the sample size used to compute the correlation.

Point-biserial correlation
The point-biserial correlation captures the relationship between a dichotomous (two-value) variable and a continuous variable. If the analyst codes the dichotomous variable with values of 0 and 1, and then computes a standard Pearson correlation using this variable, the result is mathematically equivalent to the point-biserial correlation. The interpretation is similar to that of the Pearson correlation. A positive correlation indicates that the group associated with the value of 1 has larger values than the group associated with the value of 0. A negative correlation indicates that the group associated with the value of 1 has smaller values than the group associated with the value of 0. A value near zero indicates no relationship between the two variables.

To perform a point-biserial correlation in SPSS
Make sure your categories are indicated by values of 0 and 1.
Obtain the Pearson correlation between the categorical variable and the continuous variable, as discussed above.
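The mathematical equivalence described above is easy to check numerically. In this sketch (scipy-based, with made-up data, not from the original), `pointbiserialr` and `pearsonr` applied to a 0/1-coded group variable agree to machine precision:

```python
from scipy.stats import pearsonr, pointbiserialr

group = [0, 0, 0, 1, 1, 1]   # dichotomous variable coded 0/1
score = [1, 2, 3, 4, 5, 6]   # continuous variable

r_pb, p_pb = pointbiserialr(group, score)
r_pearson, p_pearson = pearsonr(group, score)

# The two procedures are mathematically the same calculation;
# the positive sign means the group coded 1 has the larger scores.
print(abs(r_pb - r_pearson) < 1e-12)
```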

The result of this analysis will include the same sections as discussed in the Pearson correlation section.

Spearman rank correlation
The Spearman rank correlation is a nonparametric equivalent to the Pearson correlation. The Pearson correlation assumes that both of your variables have normal distributions. If this assumption is violated for either of your variables then you may choose to perform a Spearman rank correlation instead. However, the Spearman rank correlation is a less powerful measure of association, so analysts commonly use the standard Pearson correlation even when the variables are moderately nonnormal. The Spearman rank correlation is typically preferred over Kendall's tau, another nonparametric correlation measure, because its scaling is more consistent with the standard Pearson correlation.

To perform a Spearman rank correlation in SPSS
Choose Analyze, then Correlate, then Bivariate.
Move the variables you want to correlate to the Variables box.
Check the box next to Spearman.
Click the OK button.
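As with the Pearson case, the same statistic is available outside SPSS. In this illustrative sketch (scipy, made-up data, not part of the original), `spearmanr` returns a perfect correlation of 1 for a monotonic but nonlinear relationship, while the Pearson correlation falls short of 1 because no straight line fits the curve exactly:

```python
from scipy.stats import pearsonr, spearmanr

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]      # monotonic but curved (y = x**2)

rho, p = spearmanr(x, y)   # computed on ranks, so the curvature doesn't matter
r, _ = pearsonr(x, y)

print(f"Spearman rho = {rho:.3f}")  # 1.000: the ranks agree perfectly
print(f"Pearson  r   = {r:.3f}")    # below 1: a line misses the curve
```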

The output of this analysis will contain the following section.
Correlations. This section contains the correlation matrix of the variables you selected. The Spearman rank correlations can be interpreted in exactly the same way as you interpret a standard Pearson correlation. Below each correlation SPSS provides a p-value testing whether the correlation is significantly different from zero, and the sample size used to compute the correlation.
