Niche Best Athletics Ranking Methodology

The Best Athletics ranking provides a comprehensive assessment of the quality of the athletic programs at traditional four-year colleges and universities in the United States. It uses data sourced from the U.S. Department of Education, NCAA Attendance Records, and 204,756 opinion-based survey responses about athletics from 74,957 current students and recent alumni.

A high ranking in Athletics generally indicates that:

  • Students report athletics playing a significant, positive role in student life, including varsity sports, intramural sports, and athletic facilities;
  • The college competes for National Championships across various sports, and games are well attended;
  • The college competes in top conferences, whose members win National Championships across various sports, with high average game attendance.

Colleges Assessed by this Ranking

At the time of calculation, our database contained records for 2,245 public and private, traditional four-year colleges and universities across the United States. For the purposes of this ranking, a “traditional” college is considered to be any accredited, non-profit post-secondary institution that primarily offers four-year degree programs (as opposed to two-year or shorter programs). Colleges were not included in this ranking if: (1) they were not located in one of the 50 U.S. states, Puerto Rico, or the District of Columbia; (2) they had fewer than 100 full-time undergraduate students; or (3) they had insufficient data (see below). The final ranking results in 1,410 colleges receiving a grade, with 274 of those also receiving a numerical ranking.

Factors Considered

Factor | Description | Source | Weight
Student Survey Responses | Student opinions about the quality of athletics at the college they currently attend or recently attended. Includes 204,756 opinions from 74,957 unique students; a minimum of 10 unique students was required at each college. | Niche users | 32%
NCAA Championship Score | Number of NCAA Championships won since 2000 across Divisions I, II, and III. Eight sports are measured: football, men's and women's basketball, baseball, softball, men's and women's soccer, and hockey. Championships were weighted by division (3x for Division I, 2x for Division II) and by sport (3x for football and men's basketball, 2x for baseball and hockey). FCS football was also included but did not receive a multiplier. | Various sources | 15%
Average Home Football Attendance | Average attendance of home football games. | NCAA Attendance Records | 14%
Average Home Men's Basketball Attendance | Average attendance of home men's basketball games. | NCAA Attendance Records | 11%
Conference Average Football Attendance | Average attendance of home football games across the college's conference. | NCAA Attendance Records | 7%
Conference Average Men's Basketball Attendance | Average attendance of home men's basketball games across the college's conference. | NCAA Attendance Records | 6%
NCAA Championship Score for Conference | Number of NCAA Championships won within the same conference across Divisions I, II, and III, for the same eight sports. Championships by conference were weighted by division (3x for Division I, 2x for Division II). | Various sources | 5%
Men in Varsity Sports | Percentage of male students participating in varsity sports. | U.S. Department of Education | 5%
Women in Varsity Sports | Percentage of female students participating in varsity sports. | U.S. Department of Education | 5%

Statistics obtained from the U.S. Department of Education represent the most recent data available, usually from either 2012–2013 or 2013–2014, as self-reported by the colleges.
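The championship weighting described in the factor table can be sketched as a simple calculation. The multiplier values (3x for Division I, 2x for Division II, 3x for football and men's basketball, 2x for baseball and hockey, and no multiplier for FCS football) come from the table; the function name and data layout are illustrative assumptions, not Niche's actual implementation.

```python
# Illustrative sketch of the NCAA Championship Score weighting.
# Multiplier values come from the factor table; everything else is assumed.

DIVISION_MULTIPLIER = {"D1": 3, "D2": 2, "D3": 1}
SPORT_MULTIPLIER = {
    "football": 3, "mens_basketball": 3,  # 3x sports
    "baseball": 2, "hockey": 2,           # 2x sports
    # the remaining measured sports default to 1x
}

def championship_score(titles):
    """titles: list of (division, sport) tuples for championships won since 2000."""
    total = 0
    for division, sport in titles:
        if division == "FCS":
            total += 1  # FCS football is included but receives no multiplier
        else:
            total += DIVISION_MULTIPLIER[division] * SPORT_MULTIPLIER.get(sport, 1)
    return total
```

Under this scheme a Division I football title counts 3 x 3 = 9, while a Division III softball title counts 1.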

Computation

The process used to compute this ranking was as follows:

  1. First, we carefully selected the factors listed above to represent a healthy balance between statistical rigor and practical relevance in the ranking.
  2. Next, we evaluated the data for each factor to ensure that it provided value for the ranking. (The factor needed to help distinguish colleges from each other and accurately represent each college.) Because there are different factor types, we processed them differently:
    • Factors built from student-submitted survey responses were individually analyzed to determine a required minimum number of responses. After this, responses were aggregated. Since we naturally have more confidence in the aggregated score for colleges with more responses, a Bayesian method was applied to reflect this confidence.
    • Factors built from factual information were inspected for bad data, including outliers or inaccurate values. Where applicable, this data was either adjusted or completely excluded depending on the specific data.
  3. After each factor was processed, we produced a standardized score (called a z-score) for each factor at each college. This score evaluates distance from the average using standard deviations and allows each college's score to be compared against others in a statistically sound manner.
  4. With clean and comparable data, we then assigned weights for each factor. The goal of the weighting process was to ensure that no one factor could have a dramatic positive or negative impact on a particular college's final score and that each college's final score was a fair representation of the college's performance. Weights were carefully determined by analyzing:
    • How different weights impacted the distribution of ranked colleges;
    • Niche student user preferences and industry research;
    • Each factor's contribution to our intended goal of the ranking described in the introduction above.
  5. After assigning weights, an overall score was calculated for each college by applying the assigned weights to each college's individual factor scores. This overall score was then assigned a new standardized score (again a z-score, as described in step 3). This is the final score for the ranking.
  6. With finalized scores, we then evaluated the completeness of the data for each individual college. Depending on how much data the college had, we might disqualify it from the numerical ranking or from the grading process. Here is how we distinguished these groups using the weights described in step 4:
    • Colleges missing the data for 50 percent or more of the factors (by weight) were completely excluded. They did not qualify for the numerical ranking or a grade. Note: This exclusion occurred before calculation of the final z-score.
    • Colleges that had all of the factors and more than 1,500 full-time undergraduate students were deemed eligible for both a grade and a numerical ranking. Colleges that did not have all of the factors or did not meet enrollment requirements were not included in the numerical ranking and received a grade only.
  7. Lastly, we created a numerical ranking and assigned grades (based on qualifications discussed in step 6). Here is how we produced these values:
    • The numerical ranking was created by ordering each college (when qualified) based on the final z-score discussed in step 5.
    • Grades were determined for each college (when qualified) by taking the ordered z-scores (which generally follow a normal distribution) and then assigning grades according to the process below.
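Steps 3 through 5 above can be sketched in a few lines: standardize each factor into z-scores, combine them with the assigned weights, and re-standardize the weighted sum to get the final score. This is a minimal illustration of the described pipeline under assumed data shapes, not Niche's actual code; it omits the Bayesian adjustment, missing-data handling, and outlier cleaning described in steps 2 and 6.

```python
# Minimal sketch of steps 3-5: per-factor z-scores, a weighted sum,
# and re-standardization of the overall score. Data shapes are assumed.
from statistics import mean, pstdev

def z_scores(values):
    """Standardize raw factor values (step 3): distance from the mean in SDs."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def final_scores(factor_columns, weights):
    """factor_columns: {factor_name: [value per college]}; weights sum to 1."""
    standardized = {name: z_scores(col) for name, col in factor_columns.items()}
    n = len(next(iter(factor_columns.values())))
    # Step 4/5: apply the assigned weight to each factor's z-score and sum.
    overall = [
        sum(weights[name] * standardized[name][i] for name in weights)
        for i in range(n)
    ]
    # Step 5: the overall score is itself assigned a new z-score.
    return z_scores(overall)
```

The numerical ranking in step 7 is then simply a descending sort of qualified colleges on these final z-scores.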

Grading Process for This Ranking

While our ranking shows the Top 100 colleges, we use grades to give users context for those rankings and to provide insight into colleges that did not make the Top 100. It's important to focus on more than just the number in the ranking. Given the high number of colleges included in this ranking, there may not be a large gap between the 15th and 30th ranked colleges. In reality, both are exceptional colleges when compared to the total population of all colleges nationwide. Grades are assigned based on how each college performs compared to all other colleges included in the ranking, using the following distribution of grades and z-scores:

Grade | Final Z-Score | Count | Distribution
A+ | 1.96 ≤ z | 63 | 4.47%
A | 1.28 ≤ z < 1.96 | 25 | 1.77%
A- | 0.84 ≤ z < 1.28 | 41 | 2.91%
B+ | 0.44 ≤ z < 0.84 | 128 | 9.08%
B | 0.00 ≤ z < 0.44 | 369 | 26.17%
B- | -0.44 ≤ z < 0.00 | 414 | 29.36%
C+ | -0.84 ≤ z < -0.44 | 205 | 14.54%
C | -1.28 ≤ z < -0.84 | 81 | 5.74%
C- | -1.96 ≤ z < -1.28 | 63 | 4.47%
D+ | -2.25 ≤ z < -1.96 | 10 | 0.71%
D | -2.50 ≤ z < -2.25 | 5 | 0.35%
D- | z < -2.50 | 6 | 0.43%

Note that we intentionally did not assign a grade below D- to any colleges.
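The grade assignment amounts to mapping each final z-score through the published cutoffs. A minimal sketch, using the cutoff values from the table above (the function itself is illustrative):

```python
# Sketch of the grade assignment: map a final z-score to a letter grade
# using the cutoffs from the grade table. Checked from the top down.
GRADE_CUTOFFS = [
    (1.96, "A+"), (1.28, "A"), (0.84, "A-"),
    (0.44, "B+"), (0.00, "B"), (-0.44, "B-"),
    (-0.84, "C+"), (-1.28, "C"), (-1.96, "C-"),
    (-2.25, "D+"), (-2.50, "D"),
]

def grade(z):
    for cutoff, letter in GRADE_CUTOFFS:
        if z >= cutoff:
            return letter
    return "D-"  # no grade below D- is assigned
```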

The Outcome

Of the 2,245 colleges analyzed, 1,410 received a grade, with 274 of those also receiving a numerical ranking. The top-ranked college was the University of Florida, which ranked very highly in most factors analyzed and earned a final score more than six standard deviations above the mean, an exceptionally high result. The next three colleges (the University of Alabama, the University of North Carolina at Chapel Hill, and the University of Connecticut) also had exceptionally high scores, each more than five standard deviations above the mean. All four colleges scored very highly in NCAA championships won, with each winning several championships across multiple sports since 2000.

It's important to note that several colleges scored very well but did not qualify for the numerical ranking due to insufficient data, or ranked lower due to their status as an FCS football college or a Division II or III college. In particular, Amherst College, Appalachian State University, Grand Valley State University, Messiah College, Middlebury College, North Dakota State University, St. Norbert College, University of Mount Union, University of St. Thomas - Minnesota, University of Wisconsin - Stevens Point, University of Wisconsin - Whitewater, and Washington University in St. Louis all won at least three NCAA Championships since 2000.

For questions or media inquiries, please contact us.