Match outcomes for Premier League referees for 2010/11 – 2015/16

Referee outcomes

The graph above shows the match outcomes (home win, draw and away win) as a percentage of the total number of Premier League matches officiated by each ref. The graph covers every ref in the seasons from 2010/11 to 2015/16 – that is 2,660 matches. I got the data from here.

The dashed lines show the home win, draw and away win percentages across the 2,660 match sample so you can compare each ref with the average across the time period.

The number in brackets after each ref’s name is the number of games that they have refereed. I’d say that it’s only worth trying to draw firm conclusions from those refs who have refereed more than 50 matches, as that gives enough data to smooth out any outliers.
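For anyone who wants to reproduce these percentages outside Excel, here is a minimal sketch of the calculation in Python with pandas. The file name and the column names (“Referee”, “FTR” for the full-time result) are assumptions on my part – rename them to match whatever the downloaded data actually uses.

import pandas as pd

# Hypothetical file and column names - adjust to match the actual download.
matches = pd.read_csv("premier_league_2010_2016.csv")

# Outcome counts per referee, as percentages of that ref's own matches.
outcomes = (
    matches.groupby("Referee")["FTR"]          # FTR: "H", "D" or "A"
    .value_counts(normalize=True)
    .unstack(fill_value=0)
    .mul(100)
    .rename(columns={"H": "Home win %", "D": "Draw %", "A": "Away win %"})
)
outcomes["Matches"] = matches.groupby("Referee").size()

# League-wide averages across the whole sample (the dashed lines on the chart).
overall = matches["FTR"].value_counts(normalize=True).mul(100)

# Only refs with a reasonable sample, sorted by games officiated.
print(outcomes[outcomes["Matches"] >= 50].sort_values("Matches", ascending=False))
print(overall)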

Those refs near the top who have officiated at over 100 matches seem quite consistent in their results profile, apart from:

  • A Taylor, who seems to give significantly fewer home wins and significantly more away wins.
  • L Mason who gives more away wins and fewer draws than average.
  • N Swarbrick who appears to give a lot fewer home wins than average.

To really understand how significant these deviations are I’d need to do a standard deviation calculation weighted by the number of matches officiated at. But I didn’t have time for that.
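For what it’s worth, here is one quick-and-dirty alternative (a substitution on my part, not the weighted standard deviation calculation I had in mind): treat each ref’s home-win rate as a binomial proportion and see how many standard errors it sits from the league-wide rate. The numbers in the example are made up.

import math

def z_score(ref_home_wins: int, ref_matches: int, league_home_rate: float) -> float:
    """Approximate z-score of a ref's home-win rate versus the league-wide rate."""
    se = math.sqrt(league_home_rate * (1 - league_home_rate) / ref_matches)
    return (ref_home_wins / ref_matches - league_home_rate) / se

# Made-up example: a ref with 40 home wins in 110 matches, against a
# league-wide home-win rate of about 45%.
print(round(z_score(40, 110, 0.45), 2))  # roughly -1.8: suggestive, not conclusive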

I was also surprised to see that more matches end up as an away win than as a draw. I thought it would be the other way around.

Why have I done this? Mainly because I can, and to hone my Excel skills. Also, if you are prone to the odd wager on the football, this is information that might help you.

An Excel nerd point: the use of the horizontal dot plot required some significant wrangling as Excel does not have this type of chart as an option. Look here for the instructions on how to do it.
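If you would rather skip the Excel wrangling altogether, a horizontal dot plot is fairly painless in matplotlib. The sketch below uses made-up percentages and averages purely to show the construction of the chart.

import matplotlib.pyplot as plt

# Made-up placeholder figures, not the real numbers from my spreadsheet.
refs = ["A Taylor", "L Mason", "N Swarbrick"]
home = [38, 43, 35]   # home-win %
draw = [24, 19, 27]   # draw %
away = [38, 38, 38]   # away-win %

fig, ax = plt.subplots(figsize=(7, 3))
ax.scatter(home, refs, color="tab:blue", label="Home win %")
ax.scatter(draw, refs, color="tab:gray", label="Draw %")
ax.scatter(away, refs, color="tab:red", label="Away win %")

# Dashed reference lines for the (illustrative) sample-wide averages.
for avg, colour in [(45, "tab:blue"), (26, "tab:gray"), (29, "tab:red")]:
    ax.axvline(avg, linestyle="--", color=colour, alpha=0.5)

ax.set_xlabel("% of matches officiated")
ax.legend(loc="lower right")
plt.tight_layout()
plt.show()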


Another terrible election graphic

Evolve Politics donut chart

After I blogged yesterday about the terrible election map graphic doing the rounds on social media, another terrible graphic has appeared. It is on the Evolve Politics Facebook page.

What’s wrong with it?

First, it uses a form of pie chart instead of a bar chart, which would be a much better way to represent the data (but that’s another story). The main problem is that the metric it uses is not at all fit for purpose.

The implication of the graphic is that, compared with Cameron’s and Blair’s first year in the leadership job, Corbyn did better. What we have here is a false comparison. As with my last post I’m not going to comment on whether the results are a vindication of Corbyn’s leadership or not; I will restrict myself solely to the good use of data.

Before I explain the false comparison I want to point out that it says 47% of councillors won – that is nonsense, as most Labour councillors defending their seats won. I am assuming it means "47% of council seats were won".

So why is the comparison false?

At each set of local elections only some councils are up for election, and in many cases they are not "all out" elections – only some of the seats are contested. If you want to see the council electoral cycles, go to the government website here for a list.

So when council seats are up for election across the country they could be in more Labour voting areas, or conversely in more Conservative voting areas. Hence comparing years like this is meaningless as it is not clear that the electoral cycles are the same, and in any event there have been significant changes in electoral cycles (and types of councils) since 1995.

The percentage of councillors elected for any political party is therefore not a good measure of success (or failure), and neither is it useful as a comparison across different years.

The best metric to use to determine overall performance is the change in seats. In this election Labour had a net loss of 18 seats and the Conservatives had a net loss of 47 seats. I will leave it to you to interpret this as good, bad or indifferent for Corbyn. For more info on the results go to the Guardian website here.

Alternatively you can look at National Equivalent Vote Share to get a good idea of how a party has performed. This article by Tony Travers of the LSE explains it very well. Note the article was written before the elections, so it is not biased for or against any party.

Here are some pointers for you all when looking at data and graphics (political or otherwise) to help you separate the useful from the crud:

  1. The graphic should have its provenance on it. I always put a little logo with my Twitter handle on my graphics, so people know who has done it and can track me down if they think I’ve got it wrong or want to ask questions. Not having the provenance on it doesn’t mean the graphic isn’t correct, but it should set alarm bells ringing.
  2. The graphic should say where the data was sourced from, with a link to the data if at all possible. Again, I always try to put the data source on my graphics, so people can track down the data and check my graphic is correct. If you cannot track down the data that created the graphic, then be very wary.
  3. If some significant calculations or data analysis is required then there should be a link to the spreadsheet or other analysis that was done so it can be checked. Remember the study by highly respected academics Reinhart and Rogoff, that purported to prove austerity worked; it turned out to have a spreadsheet error that made their conclusions invalid. It was only because they made their spreadsheet available that this error was spotted.
  4. If the graphic purports to compare things – ask yourself is it a fair or false comparison?
  5. If the graphic proves what you want it to, remember confirmation bias and ask yourself whether you are believing what you want to believe. Ask yourself: if it proved the opposite of what you wanted, what would your criticisms of the graphic be? Then ask why those criticisms aren’t valid just because the graphic proves what you want.
  6. If it looks too good to be true, it might well be.

We do ourselves no favours if we fool ourselves. The title of this blog is a nod to something the late, great physicist Richard Feynman once said, and another wise thing he said is:

The first principle is that you must not fool yourself – and you are the easiest person to fool.

So be careful around those graphics and don’t fool yourself.

Interactive council cuts comparator

A few days ago I posted a graph of council cuts versus deprivation. I had a lot of interest in the graph with people asking me “where is council X on the graph and can I compare it to another council?”

So I’ve done a nifty little amendment to the graph to make it interactive so you can select two councils to compare. See below for a few examples of comparisons between councils.

Birmingham v Oxfordshire (the county containing the PM’s Witney constituency)

Nottingham v Oxfordshire

If you want to choose two councils of your own to compare, you should download my spreadsheet and when you open it you will come to a page with two drop down boxes above the graph. Select your two councils and the graph will change to show the two selected councils. You can download my spreadsheet by clicking here.

Nerd note: In the past I have embedded Excel files in this blog using OneDrive. However, the data validation drop-down boxes I used to select the two councils on the graph do not work in the online version of Excel, hence I’ve had to settle for you downloading the Excel file. If data validation did work in the online version of Excel I would have gone for an embedded version of the spreadsheet.
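For anyone who prefers not to open Excel at all, here is a rough sketch of the same two-council comparison in Python. The file name and column names (“Council”, “Deprivation rank”, “Cut per head (£)”) are assumptions – rename them to match the spreadsheet.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("council_cuts.xlsx")  # hypothetical file name

def compare(council_a: str, council_b: str) -> None:
    """Plot all councils in grey and highlight the two selected councils."""
    fig, ax = plt.subplots()
    ax.scatter(df["Deprivation rank"], df["Cut per head (£)"],
               color="lightgrey", label="All councils")
    for name, colour in [(council_a, "tab:red"), (council_b, "tab:blue")]:
        row = df[df["Council"] == name]
        ax.scatter(row["Deprivation rank"], row["Cut per head (£)"],
                   color=colour, s=80, label=name)
    ax.set_xlabel("Deprivation rank")
    ax.set_ylabel("Cut per head (£)")
    ax.legend()
    plt.show()

compare("Birmingham", "Oxfordshire")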

Why was 2014 warm AND wet?

This well written post explains why 2014 was warm and wet. It concludes that the UK’s long-term weather patterns are going to be more of the same.

Protons for Breakfast Blog

Colour-coded map of the UK showing how each region exceeded the 1981-2010 average temperature. Crown Copyright.

2014 was the warmest year in the UK ‘since records began’ – and most probably the warmest since at least 1659. You can read the Met Office summary here.

The World Meteorological Organisation (WMO) also report that 2014 is likely to have been the warmest year in Europe and indeed over the entire Earth for at least 100 years.

This was briefly ‘news’ but somehow this astonishing statistic seems to have disappeared almost without trace.

In fact there are three astonishing things about the statistic:

  • Firstly – we know it, and it is likely to be correct.
  • Secondly – the warmest year was ‘warmer all over’ but did not include the ‘hottest month’.
  • Thirdly – the warmest year was also overly wet, both in the UK and worldwide.

This article is about why.
