Analytics

Throwing by the Numbers: Data-based pitching success stories by Guest User

As we’ve seen, there are many kinds of adjustments that pitchers can make to improve their performance. Some pitchers may change their arm slot, find new grips, or alter other elements of their delivery. In the end, there is a good chance that every pitcher can find subtle ways to improve, no matter how good they already are. In the final instalment of The Art of Pitching, QSAO analyst Nic Osanic investigates examples of pitchers decreasing the usage of their sinker and two-seamer in favour of other pitches.

Read More

Throwing by the Numbers: An analytical take on the art of pitching by Guest User

Pitching philosophy in baseball has long been a game of conventional wisdom: keep the ball down. That advice always seemed to make sense, as the lower the pitch, the easier it is for a batter to swing over the top of it and hit it on the ground. As more data has become available in recent years, however, batters have started to adjust. As the MLB’s data-driven pitcher/batter pendulum swings in response, one must wonder how traditional attitudes towards fastball location affect the modern-day MLB pitcher. In QSAO’s latest series of articles, analyst Nic Osanic takes a deeper look into the art of pitching.

Read More

In search of similarity: Finding comparable NHL players by Guest User

AnzeKopitar.jpeg

By Owen Kewell

The following is a detailed explanation of the work done to produce my public player comparison data visualization tool. If you wish to see the visualization in action it can be found at the following link, but I wholeheartedly encourage you to continue reading to understand exactly what you’re looking at:

https://public.tableau.com/profile/owen.kewell#!/vizhome/PlayerSimilarityTool/PlayerSimilarityTool

NHL players are in direct competition with hundreds of their peers. The game-after-game grind of professional hockey tests these individuals on their ability to both generate and suppress offense. As a player, it’s almost guaranteed that some of your competitors will be better than you on one or both sides of the puck. Similarly, you’re likely to be better than plenty of others. It’s also likely that there are a handful of players league-wide whose talent levels are right around your own.

The NHL is a big league. In the 2017-18 season, 759 different skaters suited up for at least 10 games, including 492 forwards and 267 defensemen. In such a deep league, each player should be statistically similar to at least a handful of their peers. But how to find these league-wide comparables?

Enter a bit of helpful data science. Thanks to something called Euclidean distance, we can systemically identify a player’s closest comparables around the league. Let’s start with a look at Anze Kopitar.

Anze Kopitar's closest offensive and defensive comparables around the league

The above graphic is a screenshot of my visualization tool.

With the single input of a player’s name, the tool displays the NHL players who represent the five closest offensive and defensive comparables. It also shows an estimate of the strength of this relationship in the form of a similarity percentage.

The visualization is intuitive to read. Kopitar’s closest offensive comparable is Voracek, followed by Backstrom, Kane, Granlund and Bailey. His closest defensive comparables are Couturier, Frolik, Backlund, Wheeler, and Jordan Staal. All relevant similarity percentages are included as well.

The skeptics among you might be asking where these results come from. Great question.

A Brief Word on Distance

The idea of distance, specifically Euclidean distance, is crucial to the analysis that I’ve done. Euclidean distance is a fancy name for the length of the straight line that connects two different points of data. You may not have known its name, but you likely used Euclidean distance in high school math to find the distance between two points in (X,Y) Cartesian space.

Now think of any two points existing in three-dimensional space. If we know the details of these points then we’re able to calculate the length of the theoretical line that would connect them, or their Euclidean distance. Essentially, we can measure how close the data points are to each other.

Thanks to the power of mathematics, we’re not constrained to using data points with three or fewer dimensions. Despite being unable to picture the higher dimensions, we've developed techniques for measuring distance even as we increase the complexity of the input data. 
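
To make this idea concrete, here is a minimal sketch of the calculation for two stat lines of any length; the numbers are invented purely for illustration:

```python
import numpy as np

# Two hypothetical player stat vectors, already scaled to comparable ranges.
player_a = np.array([0.62, 0.45, 0.71, 0.30])
player_b = np.array([0.58, 0.50, 0.66, 0.41])

# Euclidean distance: the straight-line distance between the two points,
# no matter how many dimensions (statistics) each profile contains.
distance = np.sqrt(np.sum((player_a - player_b) ** 2))
print(round(distance, 3))
```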

Applying Distance to Hockey

Hockey is excellent at producing complex data points. Each NHL game produces an abundance of data for all players involved. This data can, in turn, be used to construct a robust statistical profile for each player.

As you might have guessed, we can calculate the distance between any two of these players. A relatively short distance between a pair would tell us that the players are similar, while a relatively long distance would indicate that they are not similar at all. We can use these distance measures to identify meaningful player comparables, thereby answering our original question.

I set out to do this for the NHL in its current state.

Data

First, I had to determine which player statistics to include in my analysis. Fortunately, the excellent Rob Vollman publishes a data set on his website that features hundreds of statistics combed from multiple sources, including Corsica Hockey (http://corsica.hockey/), Natural Stat Trick (https://naturalstattrick.com) and NHL.com. The downloadable data set can be found here: http://www.hockeyabstract.com/testimonials. From this set, I identified the statistics that I considered to be most important in measuring a player’s offensive and defensive impacts. Let’s talk about offense first.

List of offensive similarity input statistics

I decided to base offensive similarity on the above 27 statistics. I’ve grouped them into five categories for illustrative purposes. The profile includes 15 even-strength stats, 7 power-play stats, and 3 short-handed stats, plus 2 qualifiers. This 15-7-3 distribution across game states reflects my view of the relative importance of each state in assessing offensive competence. Thanks to the scope of these statistical measures, we can construct a sophisticated profile for each player detailing exactly how they produce offense. I consider this offensive sophistication to be a strength of the model.

While most of the above statistics should be self-explanatory, some clarification is needed for others. ‘Pass’ is an estimate of a player’s passes that lead to a teammate’s shot attempt. ‘IPP%’ is short for ‘Individual Points Percentage’, which refers to the proportion of a team’s goals scored with a player on the ice where that player registers a point. Most stats are expressed as /60 rates to provide more meaningful comparisons.
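
For anyone unfamiliar with per-60 rates, the conversion is simple: divide the raw total by the player's minutes of ice time and multiply by 60. A quick hypothetical example:

```python
def per_60(stat_total: float, toi_minutes: float) -> float:
    """Convert a raw counting stat into a per-60-minutes rate."""
    return stat_total / toi_minutes * 60

# Hypothetical player: 14 even-strength goals in 1,150 even-strength minutes.
print(round(per_60(14, 1150), 2))  # roughly 0.73 goals per 60 minutes
```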

You might have noticed that I double-counted production at even-strength by including both raw scoring counts and their /60 equivalent. This was done intentionally to give more weight to offensive production, as I believe these metrics to be more important than most, if not all, of the other statistics that I included. I wanted my model to reflect this belief. Double-counting provides a practical way to accomplish this without skewing the model’s results too heavily, as production statistics still represent less than 40% of the model’s input data.

Now, let's look at defense.

List of defensive similarity input statistics

Defensive statistical profiles were built using the above 19 statistics. This includes 15 even-strength stats, 2 short-handed stats, and the same 2 qualifiers. Once again, even-strength defensive results are given greater weight than their special teams equivalents.

Sadly, hockey remains limited in its ability to produce statistical measurements of individual defensive talent. It’s hard to quantify events that don’t happen, and even harder to properly identify the individuals responsible for the lack of these events. Despite this, we still have access to a number of useful statistics. We can measure the rates at which opposing players record offensive events, such as shot attempts and scoring chances. We can also examine expected goals against, which gives us a sense of a player’s ability to suppress quality scoring chances. Additionally, we can measure the rates at which a player records defense-focused micro-events like shot blocks and giveaways. The defensive profile built by combining these stats is less sophisticated than its offensive counterpart due to the limited scope of its components, but the profile remains at least somewhat useful for comparison purposes.

Methodology

For every NHLer to play 10 or more games in 2017-18, I took a weighted average of their statistics across the past two seasons. I decided to weight the 2017-18 season at 60% and the 2016-17 season at 40%. If the player did not play in 2016-17, then their 2017-18 statistics were given a weight of 100%. These weights represent a subjective choice made to increase the relative importance of the data set’s more recent season.
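
In code, that weighting scheme might look something like the sketch below; the column names and values are hypothetical, not the actual data set:

```python
import pandas as pd

def blend_seasons(row: pd.Series) -> float:
    """Weight 2017-18 at 60% and 2016-17 at 40%; if a player has no
    2016-17 data, fall back to 100% of their 2017-18 value."""
    if pd.isna(row["stat_2016_17"]):
        return row["stat_2017_18"]
    return 0.6 * row["stat_2017_18"] + 0.4 * row["stat_2016_17"]

# Hypothetical two-season values for a single statistic.
df = pd.DataFrame({
    "player": ["Player A", "Player B"],
    "stat_2017_18": [2.1, 1.4],
    "stat_2016_17": [1.8, None],
})
df["stat_weighted"] = df.apply(blend_seasons, axis=1)
print(df)
```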

Having taken this weighted average, I constructed two data sets: one for offense and the other for defense. I imported these spreadsheets into Pandas, a Python library for data analysis. I then faced a dilemma. Distance is a raw quantitative measure and is therefore sensitive to the magnitude of its data. For example, ‘Games Played’ ranges from 10 to 82, but Individual Points Percentage (IPP%) maxes out at 1. This magnitude issue would skew distance calculations unless properly accounted for.

To solve this problem, I proportionally scaled all data to range from 0 to 1. A 0 was given to the player with the stat’s lowest value league-wide, and a 1 to the player with the highest. A player whose stat was exactly halfway between the two extremes would be given 0.5, and so on. This min-max normalization resulted in the model giving equal consideration to each of its input statistics, which was the desired outcome.
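
A minimal pandas sketch of that 0-to-1 scaling, using made-up inputs rather than the real data:

```python
import pandas as pd

# Hypothetical unscaled inputs with very different magnitudes.
df = pd.DataFrame({
    "games_played": [10, 46, 82],
    "ipp_pct": [0.55, 0.70, 0.95],
})

# Min-max scaling: the league-low value becomes 0, the league-high becomes 1,
# and everything else falls proportionally in between.
scaled = (df - df.min()) / (df.max() - df.min())
print(scaled)
```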

I then wrote and executed code that calculated the distance between a given player and all others around the league who share their position. This distance list was then sorted to identify the other players who were closest, and therefore most comparable, to the original input player. This was done for both offensive and defensive similarity, and then repeated for all NHL players.
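
Here is a sketch of that nearest-comparables step, again with made-up scaled profiles rather than the real data:

```python
import pandas as pd
from scipy.spatial.distance import cdist

# Hypothetical scaled offensive profiles, indexed by player name.
profiles = pd.DataFrame(
    [[0.90, 0.85, 0.70], [0.88, 0.80, 0.75], [0.20, 0.30, 0.10]],
    index=["Player A", "Player B", "Player C"],
    columns=["stat_1", "stat_2", "stat_3"],
)

# Pairwise Euclidean distances between every pair of players.
dist = pd.DataFrame(
    cdist(profiles, profiles, metric="euclidean"),
    index=profiles.index,
    columns=profiles.index,
)

# Closest comparables for one player: drop the player himself, then sort.
print(dist["Player A"].drop("Player A").sort_values().head(5))
```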

This process generated a list of offensive and defensive comparables for every player in the league. I consider these lists to be the true value, and certainly the main attraction, of my visualization tool.

Not satisfied with simply displaying the list of comparable players, I wanted to contextualize the distance calculations by transforming them into a measure that was more intuitively meaningful and easier to communicate. To do this, I created a similarity percent measure with a simple formula.

Similarity(A, B) = 1 - distance(A, B) / distance(A, C)

In the above formula, A is the input player, B is their comparable that we’re examining, and C is the player least similar to A league-wide. For example, if A->B were to have a distance of 1 and A->C a distance of 5, then the A->B similarity would be 1 - (1/5), or 80%. Similarity percentages in the final visualization were calculated using this methodology and provide an estimate of the degree to which two players are comparable.
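
Expressed as code, using the worked example above:

```python
def similarity(dist_a_to_b: float, dist_a_to_c: float) -> float:
    """Similarity of B to A, where C is the player least similar to A."""
    return 1 - dist_a_to_b / dist_a_to_c

# Distances of 1 (A to B) and 5 (A to C) give 80% similarity.
print(f"{similarity(1, 5):.0%}")
```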

Limitations

While I wholeheartedly believe that this tool is useful, it is far from perfect. Due to a lack of statistics that measure individual defensive events, the accuracy of defensive comparisons remains the largest limitation. I hope that the arrival of tracking data facilitates our ability to measure pass interceptions, gap control, lane coverage, forced errors, and other individual defensive micro-events. Until we have this data, however, we must rely on rates that track on-ice suppression of the opposing team’s offense. On-ice statistics tend to be similar for players who play together often, which causes the model to overstate defensive similarity between common linemates. For example, Josh Bailey rates as John Tavares’ closest defensive comparable, which doesn’t really pass the sniff test. For this reason, I believe that the offensive comparisons are more relevant and meaningful than their defensive counterparts.

Use Scenarios

This tool’s primary use is to provide a league-wide talent barometer. Personally, I enjoy using the visualization to assess the relative value of players involved in trades and contract signings around the league. Lists of comparable players give us a common frame of reference through which we can inform our understanding of an individual’s hockey abilities. Plus, they’re fun. Everyone loves comparables.

The results are not meant to advise, but rather to entertain. The visualization represents little more than a point-in-time snapshot of a player’s standing around the league. As soon as the 2018-19 season begins, the tool will lose relevance until I re-run the model with data from the new season. Additionally, I should explicitly mention that the tool does not have any known predictive properties.

If you have any questions or comments about this or any of my other work, please feel free to reach out to me. Twitter (@owenkewell) will be my primary platform for releasing all future analytics and visualization work, and so I encourage you to stay up to date with me through this medium.

Cover photo credited to Jae C. Hong — Associated Press

What's a Corsi Anyway?: An Intro to Hockey Analytics by Guest User

By Owen Kewell, Scott Schiffner, Adam Sigesmund (@Ziggy_14), Anthony Turgelis (@AnthonyTurgelis)

Advanced statistics have picked up steam and shifted into the hockey mainstream over the past decade. Many NHL teams now employ full-time analytics staff dedicated to breaking down the numbers behind the game. So, what makes analytics such a powerful tool? Aside from helping you dominate your next fantasy hockey pool, advanced statistics provide potent insights into what is really causing teams to win or lose.

Hockey is a sport that has long been misunderstood. Its gameplay is fundamentally volatile, spontaneous and difficult to follow. Countless different factors contribute to a team’s chances of scoring a goal or winning a game on any given night. While many in Canada would beg to differ, ice hockey still firmly occupies last place in revenue and fan support amongst the big four North American sports leagues (NFL, MLB, NBA, & NHL). As such, hockey is on the whole overlooked and is often the last to implement changes that come about in professional sports. The idea of a set of advanced statistics that would offer better insights into the game arose as other major sports leagues, starting with Major League Baseball, began looking beyond superficial characteristics and searching for the underlying numbers influencing outcomes. Coaches, players, and fans alike have been subjected over the years to an epidemic failure to truly understand what is happening out on the ice. This is the motivation behind the hockey analytics movement: to use data analysis to enhance and develop our knowledge of ice hockey and inform decision-making for the benefit of all who wish to understand the sport better.

Another barrier to progress in the field of hockey analytics is the hesitance of the sport to embrace modern statistics. Most casual fans are familiar with basic stats such as goals, assists, PIM, and plus/minus. But do these stats really tell the full story? In fact, most of these are actually detrimental to the uninformed fan’s understanding of the game. For starters, there is usually no distinction between first and second assists in traditional stat-keeping. A player could have touched the puck thirty seconds earlier in the play or made an unbelievable pass to set up a goal, and either way it still counts as a single assist on the scoresheet. Looking only at goals and assists can be deceiving; we need more reliable, repeatable metrics to determine which players are most valuable to their teams. Advanced stats are all about looking beyond the surface and identifying what’s actually driving the play.

So, what are these so-called “advanced stats”? Let’s start with the basics.

PDO: PDO (it doesn't stand for anything) is defined as a team’s save percentage (usually at 5v5) plus its shooting percentage, with an average score of 1.000 (it is often shown multiplied by 100, so that 100 is average). If you only learn one concept, it's this one. PDO is usually regarded as a measure of a team or player’s luck, and can be a useful indicator that a player is under- or over-performing and whether a regression to the mean (back towards 1.000) is likely. This will not happen in every situation, of course, but watch for teams with astronomic PDOs to hit a reality check sooner rather than later. Team PDO stats can be found on corsica.hockey’s team stats page.

Without trying to scare anyone, the Toronto Maple Leafs currently boast the 4th-highest PDO at 101.85. To help ease your mind a bit, the Tampa Bay Lightning, who are considered the team to beat in the East, have the highest PDO at 102.35, and there's a decent gap between them and second place. They too could be playing at a higher level than their true talent; time will tell.
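
As a quick worked example of the arithmetic, with made-up percentages:

```python
# PDO is simply on-ice save percentage plus on-ice shooting percentage,
# usually at 5v5. The values here are hypothetical.
save_pct = 0.925      # team stops 92.5% of the shots it faces
shooting_pct = 0.094  # team scores on 9.4% of its own shots

pdo = save_pct + shooting_pct
print(pdo)        # 1.019 on the 1.000 scale
print(pdo * 100)  # 101.9 on the 100 scale used above
```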

Corsi: You may have heard terms like Corsi and/or Fenwick being thrown around before. These are core concepts that are fundamental to understanding what drives the play during a game. Basically, Corsi is an approximation of puck possession: it counts the total shot attempts for and against your team, and it can also be tallied for the time a specific player is on the ice.

A shot attempt is defined as any time the puck is directed at the goal, including shots on net, missed shots, and blocked shots. Anything above 50% is generally seen as positive, as you are generating more shot attempts than you are allowing.

Corsi stats are typically kept in the following ways: Corsi For (CF), Corsi Against (CA), +/-, and CF%. One example of how CF% can be useful is in evaluating offensive defensemen. These players are sometimes overvalued because of their noticeable offensive production, without accounting for the fact that a shaky defensive game offsets the offensive value they provide.

Fenwick: Fenwick is similar to Corsi, but excludes shot attempts that are blocked. Of course, with both of these stats, one should also take into account that a player’s possession score is influenced by both their linemates and the quality of competition (QoC) they face. These stats can also be adjusted to reflect different game scenarios, such as whether the team was up or down by a goal at the time.

Measuring puck possession in hockey makes sense, because the team that has the puck on their stick more often controls the play. Granted, Corsi/Fenwick are far from perfect, and the team with the better possession metrics doesn’t always come out ahead. But at the very least, including all shot attempts offers a much larger sample size of data than traditional stats, and provides a solid foundation for further analysis.
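
To make the arithmetic concrete, here is a minimal sketch of how CF% and FF% fall out of raw on-ice counts; the numbers are hypothetical:

```python
def share(events_for: int, events_against: int) -> float:
    """Share of total events belonging to the team or player, as a percentage."""
    return events_for / (events_for + events_against) * 100

# Hypothetical on-ice shot attempt counts for one player.
cf, ca = 58, 44  # Corsi: all attempts (goals, shots on net, misses, blocks)
ff, fa = 45, 36  # Fenwick: the same, but with blocked attempts excluded

print(f"CF%: {share(cf, ca):.1f}")  # above 50, so more attempts for than against
print(f"FF%: {share(ff, fa):.1f}")
```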

Zone Starts (ZS%): This measures the proportion of the time that a player starts a shift in each area of the ice (offensive zone vs. defensive zone). A ZS% greater than 50% tells us that the player is deployed in offensive situations more frequently than defensive ones. This is important because it gives us insight into a player’s usage, or the scenarios in which he is normally deployed by his team’s coach. It also provides context for interpreting a player’s Corsi/Fenwick. Players who are more skilled offensively will tend to have a higher ZS% because they give the team a better chance to take advantage of the offensive-zone faceoff and generate scoring opportunities. At the very least, ZS% can be used to get a glimpse of how a coach views a player’s skillset.

Intro on 5v5 Isolated Stats and Repeatability

Oftentimes, you will see those who work with hockey analytics cite a player's stats solely while they are at even strength, or 5v5. Why? There are a few reasons.

First, 5v5 obviously takes up most of the hockey game. If a player is valuable to his team at 5v5, he will be valuable for more of the game, and this should be seen as a large positive. A player's power-play contributions are certainly valuable, but often over-valued. Next, the game is played very differently in different states. It would be wildly unfair to penalty killers to have their penalty-kill stats included in their overall line, as more goals against are scored on the penalty kill, even for the best penalty killers. Separating these statistics helps provide a more complete picture of a player's skillset and the value they have contributed to their team. Finally, 5v5 stats are generally regarded as the most repeatable, partially due to the larger sample. While players' PP and PK stats can vary greatly by year, 5v5 stats typically remain relatively stable (read more at PPP here if you like).

In addition, primary points (goals and first assists) have been regarded as relatively repeatable stats, so be on the lookout for players with many secondary assists, as their point totals may regress in the future (read more on this here).

Intro to Comparison Tools

One of the areas that has most benefited from hockey analytics is the domain of player comparison. One of the best and most intuitive tools is the HERO chart, as pioneered by Domenic Galamini Jr (@MiminoHero). The HERO chart is a quick comparison of how players stack up across ice time, goal scoring, primary assists, shot generation and shot suppression. At a single glance, we can get a sense of the strengths and contributions of different players. Here we compare Sidney Crosby to Connor McDavid:

hero.png

We can see that Crosby is better at goal-scoring and shot generation, while McDavid is better at primary assists and shot suppression.

To compare any two players of your choice, or to compare a player to a positional archetype like First-Line Centre or Second-Pair Defender, you can use Galamini’s website: http://ownthepuck.blogspot.ca/. These comparisons can be used to enhance understanding of a player’s skill set, inform debates, and evaluate moves made by NHL teams, among other uses.

All-3-Zone Data Visualizations:

While a HERO chart is an all-encompassing snapshot of a player's contributions on the ice, the All-Three-Zones visuals are concerned with more specific aspects of the game. CJ Turtoro (@CJTDevil) created two sets of visuals using data from Corey Sznajder’s (@ShutdownLine) massive tracking project.

You can find both sets of visuals at the links below:

  1. https://public.tableau.com/profile/christopher.turtoro#!/vizhome/ZoneTransitionsper60/5v5Entries

  2. https://public.tableau.com/profile/christopher.turtoro#!/vizhome/2-yearA3ZPlayerComps/ComparisonDashboard

In the first set of visuals, you will find 4 leaderboards. Players are ranked in the 5v5 stats listed below.

  • 5v5 Entries -- How often players enter the offensive zone by making a clean pass to a teammate (Entry passes/60) or by carrying the puck across the blue line themselves (Carry-ins/60).

Other notes: The best way to enter the zone is to enter with possession of the puck (Entry passes + Carry-ins, as discussed above). These types of entries are called Possession Entries. Although other types of attempts are included in the leaderboard as well, players are automatically sorted by Possession Entries/60 because these alternative attempts are less than ideal. If you decide to change this, use the “Sort By (Entries)” filter to rank the players in other ways.

  • 5v5 Exits -- This is the same as 5v5 entries, except at the blue line separating the defensive zone from the neutral zone. Players are ranked based on how often they transition the puck from the defensive zone into the neutral zone either by carrying it (Carries/60) or by passing it to a teammate (Exit Passes/60).

Other notes: Like 5v5 entries, the best ways to exit the defensive zone are classified as Possession Exits. This is why players are automatically sorted by Possession Exits/60. Again, the “Sort By (Exits)” filter will let you change how the leaderboard is sorted.

  • 5v5 Entries per Target (5v5 Entry Def %) -- This stat measures defence at the blue line. It answers the question: When a defender is in proximity to an attempted zone entry, how often does he stop the attempt?

Other Notes: It is important to note that a “defender” is any player on the team playing defence (i.e. the team without the puck). Forwards are included in this definition of defender, but the best way to use this leaderboard is to judge defensemen only. This is why forwards are automatically filtered out of the leaderboard, but you can always change this using the filter if you wish.

  • 5v5 Shots and Passes -- Players are ranked based on how often they contribute to shots. Players contribute to shots by being the shooter or by making one of the three passes immediately preceding the shot, much like they earn points by scoring a goal or by making one of the two passes immediately before a goal.

If you want a closer look at certain groups of players, the filters allow you to look at players who play certain positions (forwards/defencemen) and players who play on certain teams. In the screenshot below, for example, I filtered the 5v5 Entries leaderboard to see what it looks like for forwards on the Oilers:

entries:60.png

You can use these leaderboards to judge offence (5v5 entries, 5v5 shot contributions), and defence (5v5 exits, 5v5 Entry Def %). Ultimately, these four leaderboards will help you identify the best and worst players in these areas.

In order to focus on one or two players, you should use the second set of visuals: the A3Z Player Comparison Tool. While HERO charts allow for player comparisons in stats collected by the NHL, this visualization was designed to help you judge players based on their performance in several stats from the tracking project. Instead of standard deviations, however, the measurement of choice in this comparison tool is percentiles. So keep in mind that “100” means the result is better than 100% of the other results. You can view a player's results in two 1-year windows and one 2-year window, covering the 2016-17 and 2017-18 seasons. Here’s a two-year snapshot of how Erik Karlsson and Sidney Crosby rank in some of these key stats:

a3z.png

You probably noticed that the stats for forwards and defencemen are slightly different. The only difference is that defencemen have three extra categories, which measure their ability (or lack thereof) to defend their own blue line (i.e. their 5v5 Entries per Target, as discussed in the previous section). You may have also noticed some useful information hidden beneath each player's name, including the number of games and minutes that have been tracked for the player. Although the numbers in the screenshot above are from two seasons, another thing to keep in mind is that you can also compare a player's development over two seasons by looking at their stats in one-year windows. To see what I mean, take a look at Nikita Zaitsev’s numbers in two consecutive seasons:

zaitsev.png

Visualizing the dramatic fall of Nikita Zaitsev in this way is an excellent starting point for further analysis. Likewise, you can also compare two different players in the same season or over two seasons. This is, after all, a Player Comparison Tool. Other common uses for both sets of A3Z visualizations include identifying strengths and weaknesses of certain players, evaluating potential acquisitions, and designing the optimal lineup for your favourite team, among many others.
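
As an aside, the percentile scale used throughout these comparisons is easy to reproduce yourself; here is a generic pandas sketch with made-up numbers, not the tool's actual code:

```python
import pandas as pd

# Hypothetical per-60 zone-entry results for a handful of players.
entries_per_60 = pd.Series(
    [18.2, 15.6, 21.4, 12.9, 17.1],
    index=["Player A", "Player B", "Player C", "Player D", "Player E"],
)

# rank(pct=True) gives each player's percentile within the group (0 to 1);
# multiplying by 100 matches the 0-100 scale shown in the visuals.
percentiles = entries_per_60.rank(pct=True) * 100
print(percentiles.sort_values(ascending=False))
```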

Of course, there are countless other useful terms and concepts to consider in analytics, like relative stats, shot quality, and expected goals (xG), which we’ll be touching upon more in-depth in future articles. If you’re interested in advanced stats and would like to learn more, we’ll be putting out more content on exciting topics in hockey analytics over the coming months, so stay tuned.


Keep up to date with the Queen's Sports Analytics Organization. Like us on Facebook. Follow us on Twitter. For any questions or if you want to get in contact with us, email qsao@clubs.queensu.ca, or send us a message on Facebook.

Advanced Baseball Stats for Casual Baseball Fans by Guest User

By Anthony Turgelis

We’ve all seen Moneyball. If you haven’t seen Moneyball, go see Moneyball; it’s on Netflix. The ‘Moneyball Revolution’ within baseball has shaken up the game, and changed the way that executives in baseball are looking at the game.

This will be an intro to some of the stats, metrics, and concepts that these executives are looking at. The goal here isn’t just to define what these things are, but rather to show how they can be used as tools of evaluation, to confirm the eye-test, or to just enhance the experience of the game. You might even end up sounding smart in front of your friends. When writing this article, I tried to include everything I wish I knew when first diving into the world of baseball analytics.

To avoid boring you with the history of how this Moneyball Revolution came to be, I’ll only drop one name that you should be familiar with: Bill James. Bill can be credited with pioneering statistical analysis within baseball, as in the 1970s he was one of the first to publish this type of work for a wide audience. Many people found his work fascinating and attempted to replicate it, and - to make a long story short - after 30 years of this, the MLB finally took notice and the Moneyball Revolution began.

Concepts/Terms to Know:

The majority of these terms and concepts have been taken from Fangraphs, which is a site to find many advanced baseball stats and analysis. Links on where to find these concepts/stats will be provided.

Fielding Independent Pitching (FIP) - FIP is an ERA-like metric (ERA being earned runs allowed per nine innings) that attempts to quantify what a pitcher’s value would be if you stripped out the defensive component of the game. FIP considers only the outcomes a pitcher controls directly (strikeouts, walks, hit batters, and home runs) and assumes that all balls hit into play get league-average results on whether they fall for a hit or not. This way, a pitcher is not penalized for having a bad defense behind him, which would certainly affect his results and his ERA. FIP is considered predictive, as it correlates with itself across seasons better than ERA does, which makes sense considering it measures things the pitcher can control rather than things like defense, which can fluctuate by game and by season. It is adjusted so that the league-average FIP equals the league-average ERA. This makes it easy to compare to a player’s ERA to see if they are over- or under-performing their FIP, and whether some regression may be coming. There are pitchers who can consistently outperform their FIP, such as Marco Estrada, who in 2015-16 was elite at inducing weak contact (which can be considered a skill), so FIP’s assumption of league-average results on balls in play would likely paint him as less effective than he actually was. On the other hand, his ERA did balloon to 4.98 in 2017 after significantly outperforming his FIP the previous two years, so the regression bug may have actually hit him as well.
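
For reference, the standard Fangraphs FIP formula weights only those pitcher-controlled outcomes. Here is a small sketch; the constant changes slightly each season (roughly 3.0-3.2), and the stat line below is hypothetical:

```python
def fip(hr: int, bb: int, hbp: int, k: int, ip: float,
        fip_constant: float = 3.10) -> float:
    """Fielding Independent Pitching. The constant is set each season so that
    league-average FIP matches league-average ERA."""
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + fip_constant

# Hypothetical starter: 25 HR, 50 BB, 5 HBP, 160 K over 180 innings.
print(round(fip(25, 50, 5, 160, 180.0), 2))  # roughly 4.04
```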

FIP can be found on Fangraphs pitcher pages, such as Marco Estrada’s, next to ERA, where you will find his 2017 FIP to be 4.61.

Batting Average on Balls in Play (BABIP) - BABIP is a player’s batting average on only the balls that were put into play, and the league average is roughly .300 for both hitters and pitchers. The reason this is a very important stat is that it tends to stabilize only after about 800 balls in play. This means that if a player is having a stretch of months (or even a whole year) where they are achieving a much higher or lower BABIP than the league average and their own career average, they are likely due for some regression, as they have probably been getting lucky or unlucky on the results of the balls they have put into play. It’s worth noting that better hitters will generally sustain higher BABIPs, and vice-versa, and some players are able to maintain high BABIPs throughout their careers without regression. Toronto Blue Jays hitters ranked dead last in the MLB in BABIP in 2017, which can be seen as a source of optimism that they may achieve better results on their balls in play in 2018.
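
The calculation itself is straightforward; here is a sketch of the standard formula applied to a hypothetical stat line:

```python
def babip(h: int, hr: int, ab: int, k: int, sf: int) -> float:
    """Batting average on balls in play: non-home-run hits divided by
    at-bats that ended with the ball in play, plus sacrifice flies."""
    return (h - hr) / (ab - k - hr + sf)

# Hypothetical season line: 150 H, 20 HR, 550 AB, 120 K, 5 SF.
print(round(babip(150, 20, 550, 120, 5), 3))  # roughly .313
```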

BABIP can be found on Fangraphs pitcher/batter pages, such as fringe prospect Dwight Smith Jr.’s. He rode a .588 BABIP in 2017 to a .370 batting average, a line that looks far less impressive and largely luck-driven given that ridiculous BABIP, which is partly why he still earned a demotion and will likely not get an early look to crack the 2018 team.

Hit Probability - To temporarily stray from Fangraphs, Hit Probability is a metric introduced by Statcast at the beginning of the 2017 season to estimate the likelihood that a ball in play will be a hit, based on its launch angle and exit velocity compared to similarly hit balls in the past. Similarly to FIP, it attempts to negate the effects of defense and ballpark on players who may have high-probability hits robbed by star outfielders making unlikely plays, or who get credit for weak hits that likely would not be repeated. I did an analysis on how the 2017 Blue Jays were being affected by luck based on their hit probabilities, and throughout the season I saw players regress to the averages their Hit Probability numbers suggested. The most extreme case was Devon Travis, who had a cold start despite high aggregate Hit Probability numbers and who, as the season progressed, positively regressed to the expected level. The quarter-season report can be found here, and the mid-season report can be found here.

Hit Probability statistics can be found on Baseball-Savant here, where you can select any game and see the hit probabilities for all balls in play for that game.

Weighted Runs Created Plus (wRC+) - wRC+ is an attempt to quantify a player’s total offensive output in one stat, based on the value of their contributions, after park adjustments. It uses the concept of Weighted On-Base Average (wOBA), which simply credits each plate-appearance outcome with its average run value. For example, a triple contributes to run scoring roughly twice as much as a single, so it is worth roughly double the value of a single in this calculation. From there, you can find the value of runs created by each player’s offensive output. wRC+ is a rate statistic, so it can be used even in smaller samples to see how a hitter has been performing, and it is one of the best tools for evaluating a hitter’s offensive abilities. The league-average wRC+ is 100, and each point above 100 represents one percentage point better than league average (so a 120 wRC+ means offensive production 20% above average).
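
To show the structure of the underlying wOBA step, here is a hedged sketch. The linear weights below are only approximate (Fangraphs recalculates them every season), and wRC+ then layers park and league adjustments on top of the wRAA figure computed here:

```python
def woba(ubb, hbp, singles, doubles, triples, hr, ab, bb, ibb, sf):
    """Weighted On-Base Average with approximate linear weights: each outcome
    is credited with (roughly) its average run value."""
    runs = (0.69 * ubb + 0.72 * hbp + 0.88 * singles
            + 1.24 * doubles + 1.56 * triples + 1.95 * hr)
    return runs / (ab + bb - ibb + sf + hbp)

def wraa(player_woba, league_woba, woba_scale, pa):
    """Weighted Runs Above Average: runs added versus an average hitter."""
    return (player_woba - league_woba) / woba_scale * pa

# Hypothetical hitter, with roughly league-average context numbers.
w = woba(ubb=60, hbp=5, singles=100, doubles=30, triples=3, hr=25,
         ab=550, bb=65, ibb=5, sf=5)
print(round(w, 3))                                                    # ~.361
print(round(wraa(w, league_woba=0.320, woba_scale=1.19, pa=620), 1))  # ~21 runs
```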

wRC+ can be found on the batter pages on Fangraphs, such as Mike Trout’s, who was the 2017 leader at 181 wRC+, beating Aaron Judge by 8 points despite hitting 19 fewer home runs.

Park Adjustments - No Two Parks are The Same:

To state the obvious, no two MLB ballparks are the same. The most noticeable difference is the dimensions, but many other factors are at play, such as weather and other environmental conditions. As a result, player performance tends to differ from park to park, and adjustments are calculated to remove the effects of these parks as best as possible. Park adjustments are typically calculated separately for left- and right-handed batters; since parks are not always symmetrical, they may favour batters of one handedness over the other.
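
One very simplified, single-season version of a run-based park factor compares total scoring in a team's home games to total scoring in its road games. Real park factors use multi-year samples and further corrections; the numbers below are hypothetical:

```python
def simple_park_factor(runs_home, runs_allowed_home, home_games,
                       runs_road, runs_allowed_road, road_games):
    """Basic one-year run park factor: total scoring per home game divided by
    total scoring per road game. Above 1.00 suggests a hitter-friendly park."""
    home_rate = (runs_home + runs_allowed_home) / home_games
    road_rate = (runs_road + runs_allowed_road) / road_games
    return home_rate / road_rate

# Hypothetical team that sees more total scoring at home than on the road.
print(round(simple_park_factor(420, 400, 81, 380, 370, 81), 2))  # roughly 1.09
```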

Colorado’s Coors Field is regarded as the extreme case of a ‘Hitter’s Ballpark’: hitters tend to perform well there due to the high altitude and large outfield, so batters can expect more balls hit to the outfield to fall for hits. Conversely, AT&T Park in San Francisco is regarded as the most extreme ‘Pitcher’s Ballpark’ due to its high walls and damp air. Rogers Centre in Toronto ranks as the 8th-best ballpark for hitters. Four out of five ballparks in the AL East are considered to favour the hitter over the pitcher, which could be one reason a team based in Toronto struggles to attract premium free-agent pitchers.

The War on WAR:

If you only have time to learn about one advanced stat in baseball, Wins Above Replacement (WAR) is the one to go with. WAR is an attempt to quantify the overall value of a player’s contributions in one easy number. Put simply, it is the number of wins you can expect your team to add by employing the player instead of a readily available replacement from the minor leagues or a team’s bench.

WAR is a counting stat and is based on what happened, rather than what will happen in the future. If an MVP-calibre player only played 20 games, they may have a lower WAR than many inferior players, due simply to the fact that they didn’t play enough games to accumulate a high WAR total.

Fangraphs goes into more detail on what exactly goes into the WAR stat for hitters, but essentially it is the total value of runs that a batter contributes to the team through hitting, baserunning, and fielding, divided by the number of runs it takes to add a win (runs per win fluctuates by year but is roughly 10). It is then adjusted by position (for example, CF is much harder to play than 1B, so centre fielders are credited accordingly - more here), adjusted by ballpark, and adjusted relative to a ‘Replacement Level’ player, measuring how much more or less valuable the player is than this imaginary player.
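
Putting those pieces together, position-player WAR has roughly the shape sketched below. The component run values are hypothetical, and Fangraphs documents the exact inputs (including a small league adjustment):

```python
def position_player_war(batting_runs, baserunning_runs, fielding_runs,
                        positional_adj, league_adj, replacement_runs,
                        runs_per_win=10.0):
    """Rough shape of position-player WAR: sum the run components, then
    convert runs to wins (runs per win fluctuates by year, roughly 10)."""
    total_runs = (batting_runs + baserunning_runs + fielding_runs
                  + positional_adj + league_adj + replacement_runs)
    return total_runs / runs_per_win

# Hypothetical above-average centre fielder.
print(round(position_player_war(batting_runs=25, baserunning_runs=3,
                                fielding_runs=5, positional_adj=2,
                                league_adj=0, replacement_runs=20), 1))  # ~5.5
```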

For Pitchers, it is much more complicated, so it’s best to outline the two different WAR stats that are most commonly referenced. First, there’s Fangraphs WAR, commonly referred to as fWAR. fWAR uses Fielding Independent Pitching (FIP) during their calculations, instead of ERA. Recall that FIP is generally regarded as a more predictive stat than ERA, so fWAR could be better used as a tool to project future pitching performance. Conversely, Baseball Reference uses ERA when calculating their bWAR stat. ERA is based on what has actually happened, and could be influenced by team defense among other external effects. These effects are variable by game and are out of the pitcher's control, so this should be seen as more of a ‘what happened in the past?’ stat, rather than a ‘what should I expect in the future?’ stat.

Conclusion

I hope that this article has given you an introduction to some tools that can enhance your viewership of baseball. These tools were selected as stats that may challenge how the game is traditionally viewed. Players are often over- or under-valued by fans because traditional metrics such as batting average never paint the full picture of their contributions. Hopefully the concepts covered today will allow you to form more complete opinions on players and teams while enjoying the games.

Keep up to date with the Queen's Sports Analytics Organization. Like us on Facebook. Follow us on Twitter. For any questions or if you want to get in contact with us, email qsao@clubs.queensu.ca, or send us a message on Facebook.