Statistics



2 minutes ago, LETSGOBROWNIES said:

I get frustrated by the simple takes tbh.

It was awful when Sashi was blessing us and people couldn’t understand basic analytical concepts despite plenty of actual data being presented to them.  

"Statistics don't encompass everything doe".

Yeah, no ****, dipstick. All models are wrong; some are useful. I don't need to out-predict an all-knowing football demon for the stats to be useful, I just need to beat a hungover intern who's been watching film for 18 of the last 24 hours.


4 minutes ago, ramssuperbowl99 said:

"Statistics don't encompass everything doe".

Yeah, no ****, dipstick. All models are wrong; some are useful. I don't need to out-predict an all-knowing football demon for the stats to be useful, I just need to beat a hungover intern who's been watching film for 18 of the last 24 hours.

Exactly.

I equate it to taking a multiple-choice test where you only know some of the material. Analytics aren't going to give you the answers, but they'll help eliminate the worst choices and identify the better ones. You'll still need your own knowledge to pick the right answer, but they stack the deck in your favor.


6 minutes ago, ramssuperbowl99 said:

Except not really. It entirely depends on what you're measuring, and we're adults, so we can choose to measure things that are both predictive and reproducible.

For example, you could measure an LB's tackles or do PFF-style play grading to figure out how well he plays the run. Or you could stick a GPS in his pads and measure his reaction time starting at the snap, his max speed, and so on. Then you do that for a few years, find that there's a strong correlation (since linebackers with great get-off and top-end speed are going to be good against the run #factsonly), and then use the stuff that's easy to measure reliably to make decisions.

Well, yeah, it depends on what you're measuring, but even your example creates a plethora of alternate scenarios that could skew the results.

What if the LB's read on that play isn't the RB? What is his responsibility on that play? Does he have man coverage or zone coverage? Is he assigned to the RB? The TE? Is his first read the OG? The C? Does he have a backup responsibility within his zone? A primary one?

Those are just a few examples of things that could impact his "reaction" time to the play at hand. If his primary read has nothing to do with the RB, he's obviously going to react differently than if the RB were his primary read.

There are a lot of confounding factors that could easily influence any given play. Almost nothing in football is truly repeatable.


3 minutes ago, LETSGOBROWNIES said:

Exactly.

I equate it to taking a multiple-choice test where you only know some of the material. Analytics aren't going to give you the answers, but they'll help eliminate the worst choices and identify the better ones. You'll still need your own knowledge to pick the right answer, but they stack the deck in your favor.

That's a good way of looking at it. I like that.


Just now, theuntouchable said:

Well, yeah, it depends on what you're measuring, but even your example creates a plethora of alternate scenarios that could skew the results.

What if the LB's read on that play isn't the RB? What is his responsibility on that play? Does he have man coverage or zone coverage? Is he assigned to the RB? The TE? Is his first read the OG? The C? Does he have a backup responsibility within his zone? A primary one?

Those are just a few examples of things that could impact his "reaction" time to the play at hand. If his primary read has nothing to do with the RB, he's obviously going to react differently than if the RB were his primary read.

There are a lot of confounding factors that could easily influence any given play. Almost nothing in football is truly repeatable.

Yeah, so you're going to get different sets of results, and you can look at the assignment and only review relevant data. People do this literally all the time in tons of disciplines that aren't football. It's really not hard to set exclusion criteria.
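Rough sketch of what that workflow looks like, just to show it isn't exotic (every column name and number below is made up):

```python
# Toy version of "set exclusion criteria, then see if the easy-to-measure stuff
# predicts the outcome you care about". All data here is fake.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

plays = pd.DataFrame({
    "player_id": rng.integers(0, 60, n),                         # 60 fake linebackers
    "assignment": rng.choice(["run_fit", "man", "zone", "blitz"], n),
    "reaction_time_s": rng.normal(0.55, 0.08, n),                 # tracking-derived get-off
    "max_speed_mph": rng.normal(18.5, 1.2, n),
    "run_stop": rng.integers(0, 2, n),                            # 1 = made the stop
})

# Exclusion criteria: only judge run defense on snaps where the run fit
# was actually the player's assignment.
run_snaps = plays[plays["assignment"] == "run_fit"]

# Per-player summary: tracking metrics vs. outcome.
per_player = run_snaps.groupby("player_id").agg(
    reaction_time_s=("reaction_time_s", "mean"),
    max_speed_mph=("max_speed_mph", "mean"),
    run_stop_rate=("run_stop", "mean"),
)

# With real data, this is where a relationship between get-off/speed and
# run defense would show up (the fake data above is pure noise).
print(per_player.corr(method="pearson")["run_stop_rate"])
```

The hard part is tagging the assignments correctly, not the math.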


9 minutes ago, LETSGOBROWNIES said:

Exactly.

I equate it to taking a multiple-choice test where you only know some of the material. Analytics aren't going to give you the answers, but they'll help eliminate the worst choices and identify the better ones. You'll still need your own knowledge to pick the right answer, but they stack the deck in your favor.

Ideally they can do a lot more than that, but yeah, even with basic regression-type stuff you can validate decision-making with them.

More than that, they should force you to ask better questions. Another common anti-stats trope is "I don't need stats to tell me JJ Watt is good," or something like that. Yeah, no ****. If you watch football and the most advanced thought that crosses your brain is "JJ Watt is good," you have bigger cognitive issues. Better questions might be: "How much better is JJ Watt than an average NFL player?" "What physical and technical traits make JJ Watt so much better?" "Could we have foreseen JJ Watt becoming this good, and if so, when and with what confidence?"

Stats lead to a different, better way of thinking about and judging decision-making. Which is why crotchety old guys and Football Guys hate them.
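Even the first of those questions only takes a league-wide distribution to answer. A quick sketch (the numbers are invented, and pressure rate is just a stand-in for whatever metric you actually trust):

```python
# "How much better is player X than an average player at his position?"
# Minimal sketch: compare one player against the league distribution for a
# single metric. Every number here is invented.
import numpy as np

rng = np.random.default_rng(1)
league_pressure_rate = rng.normal(0.08, 0.02, 120)   # fake edge-rusher population
watt_pressure_rate = 0.16                            # fake season for one player

z = (watt_pressure_rate - league_pressure_rate.mean()) / league_pressure_rate.std(ddof=1)
pct = (league_pressure_rate < watt_pressure_rate).mean() * 100

print(f"{z:.1f} standard deviations above average, ~{pct:.0f}th percentile")
```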


8 minutes ago, ramssuperbowl99 said:

Yeah, so you're going to get different sets of results, and you can look at the assignment and only review relevant data. People do this literally all the time in tons of disciplines that aren't football. It's really not hard to set exclusion criteria.

Sure, but we aren't privy to exactly what responsibility a player had. We can make assumptions, and some of them will be very good assumptions, but some will also be bad.


17 minutes ago, ramssuperbowl99 said:

All models are wrong, some are useful.

You sound just like the PK/PD guys I work with, and they're modeling humans too.
One question I have is about the source data for the analytics: are there any standards, any validation, any checkpoints for the quality of the data in terms of how it was collected, stored, and/or manipulated? How do we know the GPS is accurate, and how accurate is it? Down to the nearest 1 second, 0.1 second, or 0.001 second? And is it the same across the league?


6 minutes ago, Shanedorf said:

You sound just like the PK/PD guys I work with, and they're modeling humans too.

That's my job.

6 minutes ago, Shanedorf said:

One question I have is about the source data for the analytics: are there any standards, any validation, any checkpoints for the quality of the data in terms of how it was collected, stored, and/or manipulated? How do we know the GPS is accurate, and how accurate is it? Down to the nearest 1 second, 0.1 second, or 0.001 second? And is it the same across the league?

You'd validate the data first: take a high-res camera, calculate everything from the video, then see if the GPS gives you the same results.
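Conceptually something like this sketch, with the camera treated as ground truth (the numbers are simulated, and a real validation would also look at time sync, sampling rate, dropped samples, etc.):

```python
# Sketch of the validation step: treat high-res camera tracking as ground truth
# and measure how far off the wearable tracker is. All data here is simulated.
import numpy as np

rng = np.random.default_rng(2)
camera_speed = rng.uniform(2, 22, 1000)                  # "true" speeds, mph
gps_speed = camera_speed + rng.normal(0.0, 0.3, 1000)    # tracker with some noise

error = gps_speed - camera_speed
print(f"bias: {error.mean():+.2f} mph")                            # systematic offset
print(f"mean absolute error: {np.abs(error).mean():.2f} mph")
print(f"95% of samples within +/-{np.quantile(np.abs(error), 0.95):.2f} mph")
```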


5 minutes ago, Shanedorf said:

You sound just like the PK/PD guys I work with, and they're modeling humans too.
One question I have is about the source data for the analytics: are there any standards, any validation, any checkpoints for the quality of the data in terms of how it was collected, stored, and/or manipulated? How do we know the GPS is accurate, and how accurate is it? Down to the nearest 1 second, 0.1 second, or 0.001 second? And is it the same across the league?

I think you look for historical accuracy and track records (in part anyway).

Something like Math Rushers can take the applicable data from any player in history and see if it checks out. Once you have a baseline, you can look for potential issues and adjust.
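The simplest sanity check along those lines is year-over-year: does the number a model spits out in one season tell you anything about the next? A rough sketch (fake data, made-up column names):

```python
# Sketch of "check it against history": does the metric in year N say anything
# about year N+1? If this correlation is ~0, the metric is probably noise.
# All data and column names here are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
players = np.repeat(np.arange(200), 2)        # 200 fake players, 2 seasons each
seasons = np.tile([2017, 2018], 200)
metric = rng.normal(0, 1, 400)                # whatever the model spits out

df = pd.DataFrame({"player": players, "season": seasons, "metric": metric})
wide = df.pivot(index="player", columns="season", values="metric")

print(wide[2017].corr(wide[2018]))            # year-over-year stability
```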


5 minutes ago, LETSGOBROWNIES said:

I think you look for historical accuracy and track records (in part anyway).

Something like Math Rushers can take the applicable data from any player in history and see if it checks out. Once you have a baseline, you can look for potential issues and adjust.

The ideal answer is to buy this from a company that's already done all the legwork, because you're lazy and validation sucks. MLB did that with TrackMan for spin rates and whatnot.


On 7/27/2019 at 7:21 AM, TheFinisher said:

The issue with analytics in football is you have to treat every snap as an event, and statistics depend on outcomes being repeatable, reliable measurements. The problem is that no two snaps are going to produce comparable outcomes. There's too much variability across all 22 players on the field on any given snap, and the margins for how they affect a given play are so narrow (literally a game of inches, good luck accounting for that) that you wind up with meaningless results. But that won't stop services like PFF and Football Outsiders from duping customers into thinking they're providing actual analysis.

And these are just some of the pure measurement problems with analytics in football. There's an entirely different can of worms when you get into assumptions about what a successful snap looks like for any given player. The guys grading snaps for services like PFF have no idea how players are being coached or what their jobs actually are on any given snap. They can't determine what a win/loss is for a play without knowing the play call and what that player is tasked with doing; only the team's coaches and players know that.

 

I posted something a while ago about the viability of a WAR statistic for the NFL. PFF has started to use one, and I've read a university paper on one. I think this post sums up the consensus from people here, as well as mine: there are way too many variables to have an accurate WAR statistic.
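For anyone unfamiliar, the baseball version is basically value over a freely available replacement-level player, converted into wins. Roughly this shape (the inputs below are placeholders, not any real player's numbers):

```python
# Rough shape of a baseball-style WAR calculation, for reference only.
# Every input is a placeholder, not a real player's numbers.
runs_above_average = 25.0      # batting + fielding + baserunning value vs. average
replacement_adjustment = 20.0  # average player vs. freely available replacement
runs_per_win = 10.0            # approximate league-dependent conversion factor

war = (runs_above_average + replacement_adjustment) / runs_per_win
print(f"~{war:.1f} WAR")
```

The NFL problem is that nobody agrees on how to split those "runs" across 22 interdependent players.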

I think that translates to advanced grading as well, although probably less so. The issue there is that none of these 'experts' knows exactly what a player should be doing on a given snap. You can't grade when you don't know what the desired outcome is. Good post.

I'll trust tangible, quantitative stats, although the amount of weight I put on them is relatively low, and close to 0 without context. I put about 0 weight on qualitative stats, but they're fun to look at and try to understand. I've emailed advanced-stats sites before to try to get better explanations of how their grading works, and they've actually responded. That gives more context.


@ET80 here's a real-world example of how this could function. The Red Sox drafted an otherwise completely unassuming Mookie Betts because of neurological data.

https://www.bostonglobe.com/sports/2015/02/18/neuroscouting-may-give-red-sox-heads-prospects-potential/EFBHR3zNdThk1NboRpNMHL/story.html

Quote

“Through further evaluations and some of the proprietary testing we developed over the years, it was really clear that this kid not only had the speed, not only had the athleticism, not only had the arm, not only had the feel for the game, but also was pretty elite in his hand-eye coordination, his reaction time, and the way his mind worked as well.”

What, exactly, does Epstein mean about the workings of Betts’s mind?

“I can’t talk about that stuff,” he laughed, “because then I’d have to kill you.”

 

That “stuff,” according to several sources familiar with the Sox’ scouting efforts with Betts, was a new effort in 2011 to have prospects take part in neuroscouting tests.

For years, pitch recognition has been a great separator when scouting amateur players. Given that a high schooler might never see a fastball that cracks 90 miles per hour or be challenged by a legitimate major league breaking ball, there is significant guesswork in determining whether apparent bat speed will translate to production against top pitching in the pros.

In an attempt to crack that mystery, the Sox started instructing their area scouts to put potential draftees through a series of computer exercises meant to measure reaction time to pitches. Betts became a heralded part of that pilot program.

“I missed my lunch period because I was doing neuroscouting,” recalled Betts. “[Watkins] just said, ‘Do this, don’t think about the results.’ I did what I could. It was just like, a ball popped up, tap space bar as fast as you could. If the seams were one way, you tapped it. If it was the other way, you weren’t supposed to tap it. I was getting some of them wrong.

“I wasn’t getting frustrated, but I was like, ‘Dang, this is hard.’ ”

Despite that, Betts tested near the top of the charts for the 2011 draft class. The Sox didn’t necessarily know what that meant — there were no data to suggest a correlation between top performance on the simulation and actual in-game abilities — but Betts’s scores raised eyebrows to the point of creating some fascination when the Sox selected him in the fifth round in 2011.

Half of getting stats to inform decision making is finding something you can measure that's reliable and predictive. The Red Sox wanted to know about Mookie's reaction abilities, so they basically made a model of it and saw what the data looked like.

 

You could do the same thing with a QB. Have a WR flash up on the screen either with a DB all over him, or with separation, and have him hit one button for next read, and another button for throw. Measure the reaction time and see what percentage he gets right. Maybe do some where there is also a lineman that pops up near the bottom of the screen.

So often you have NFL teams talk about how hard it is to find a QB that can process information quickly. This simple experiment may or may not work, but it's a different way of thinking about answering the question instead of exclusively relying on college performance.
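The crude version of that test is maybe 20 lines of code. Something like this terminal mock-up (obviously a real scouting rig would use proper display and input hardware, not a keyboard prompt):

```python
# Bare-bones mock-up of the described test: a receiver is either OPEN or
# COVERED, the subject answers t (throw) or n (next read), and we log
# accuracy and reaction time. Purely illustrative.
import random
import time

TRIALS = 10
results = []

for _ in range(TRIALS):
    time.sleep(random.uniform(0.5, 2.0))        # unpredictable "snap" timing
    state = random.choice(["OPEN", "COVERED"])
    start = time.monotonic()
    answer = input(f"WR is {state} -> [t]hrow or [n]ext read? ").strip().lower()
    reaction = time.monotonic() - start
    correct = (state == "OPEN") == (answer == "t")
    results.append((state, correct, reaction))

accuracy = sum(c for _, c, _ in results) / TRIALS
mean_rt = sum(r for _, _, r in results) / TRIALS
print(f"accuracy: {accuracy:.0%}, mean reaction time: {mean_rt:.2f}s")
```

Swap the prompt for actual route/coverage clips and the keystrokes for a proper response pad, and you've got the NFL version of the neuroscouting test.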


7 minutes ago, ramssuperbowl99 said:

@ET80 here's a real-world example of how this could function. The Red Sox drafted an otherwise completely unassuming Mookie Betts because of neurological data.

https://www.bostonglobe.com/sports/2015/02/18/neuroscouting-may-give-red-sox-heads-prospects-potential/EFBHR3zNdThk1NboRpNMHL/story.html

Half of getting stats to inform decision making is finding something you can measure that's reliable and predictive. The Red Sox wanted to know about Mookie's reaction abilities, so they basically made a model of it and saw what the data looked like.

 

You could do the same thing with a QB. Have a WR flash up on the screen either with a DB all over him, or with separation, and have him hit one button for next read, and another button for throw. Measure the reaction time and see what percentage he gets right. Maybe do some where there is also a lineman that pops up near the bottom of the screen.

So often you have NFL teams talk about how hard it is to find a QB that can process information quickly. This simple experiment may or may not work, but it's a different way of thinking about answering the question instead of exclusively relying on college performance.

I knew I loved you, but I didn’t know I loved you this much.

I’ll break the news to my wife.

