Posting from Perth, Western Australia.
I thought you might like to know that Pro5, the organisation charged with administering match-funding on behalf of the DfE has just sent us an email to confirm that the initiative has officially been wound up as of 31st March 2014.
They also added the following:
The DfE have provided some key information below, based on an analysis of the match-funding data from September 2011 to the end of October 2013 which showed that:
- A total of £23.7 million match-funding (including VAT) was claimed by around 14,300 schools, 80% of eligible schools.
- Approximately 84% of eligible schools with key stage 1 pupils claimed match-funding (around 13,900 schools).
- Around 1,260 schools with key stage 2 pupils became eligible for match-funding from January 2013. Of these around 390 claimed match-funding (31%).
- Most of the funding (95%) was used to purchase phonics products rather than training.
We would like to thank you all for your support and hard work from the start of the process in 2011 to date and for making the initiative a success. We would appreciate any feedback (good, bad or indifferent) that you have about the initiative including the procurement and the operational management of the contracts. Please reply by return by Tuesday 22nd April.
Nick Gibb could have steered the enterprise towards training had he understood more clearly what the problem is and how it needs to be dealt with. He was much too timid and too willing to be guided by his counsellors in the DfE and elsewhere. As a result, he missed a fine opportunity to make the kind of difference Michael Gove has recently been speaking about: the chance to eradicate the blight of illiteracy within a lifetime. A different approach would have prioritised the introduction of RCTs to find out which phonics programmes are likely to yield the most promising results and funded training on the back of the outcomes.
As it is, although those schools that have spent their funding on Dandelion Readers or Sounds-Write readers will find that their understanding of phonics teaching has been enhanced, the crucial first step should have been training.
The 2012/13 NFER evaluation of the Y1 Phonics Screening Check surveyed nearly one thousand schools and found only three that had used match-funding to provide staff training. Ironically, the same report made clear the widespread confusion and misunderstanding about phonics amongst the teachers surveyed. Improvements in teachers’ understanding were clearly and urgently needed, but these needs were not being recognised by the teachers themselves or by their headteachers. I take my hat off to this blog, which has, from the start, warned about the danger of the disconnection between training and products that was built into the structure of the match-funding scheme.
The good news is that the information collected incidentally in the match-funding initiative can significantly advance the UK government’s commitment “to eradicate the blight of illiteracy.” The data haven’t been analysed, but the analysis is very easy. The ESPO database tells us how schools used the money to buy training and/or materials. The RAISE database tells us each school’s Y1 (and Y2) Phonics Screening Check results. All that needs to be done is to put the two databases together.
The analyses that have been done at the LEA level indicate variability at both LEA and school levels. What one would be looking for in the analysis at the school level is REPLICABILITY. That is, we know that some schools are “doing things right” because all or nearly all of their kids passed the Check. Other schools aren’t. But we don’t know, other than anecdotally, which “things” replicably work and which don’t: which training and which materials made any difference. You could likely determine the replicability by eyeball, but no statistics fancier than correlation coefficients should be necessary.
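To make the proposal concrete, here is a minimal sketch of the kind of analysis being suggested: join spending records to Check results school by school, then compute plain correlation coefficients. The column names (URN, training_spend, product_spend, pct_passed) and the figures are entirely hypothetical stand-ins; the real ESPO and RAISE extracts will have their own layouts.

```python
# A sketch of "putting the two databases together": join on a school
# identifier, then look at simple correlations. All columns and numbers
# below are made up for illustration, not real ESPO or RAISE data.
import pandas as pd

# Hypothetical ESPO-style purchase records: what each school bought.
espo = pd.DataFrame({
    "URN": [1001, 1002, 1003, 1004],
    "training_spend": [0, 600, 0, 1200],
    "product_spend": [1500, 900, 1500, 300],
})

# Hypothetical RAISE-style results: Y1 Phonics Screening Check pass rates.
raise_results = pd.DataFrame({
    "URN": [1001, 1002, 1003, 1004],
    "pct_passed": [62.0, 81.0, 58.0, 90.0],
})

# Join the two databases on the school identifier.
merged = espo.merge(raise_results, on="URN")

# Nothing fancier than correlation coefficients is needed.
corr = merged[["training_spend", "product_spend", "pct_passed"]].corr()
print(corr.loc["training_spend", "pct_passed"].round(3))
```

On real data one would of course want the full national extracts rather than four toy rows, but the mechanics are no more complicated than this.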
One would think that the government and “everyone else” would be interested in what they got out of the money that was spent. When all that needs to be done to find out is to “look at the data” wouldn’tcha think that will happen?
Not necessarily. But that’s a whole ’nother story.
Hi Mike and thanks for your comment.
You're right! It is so frustrating to find so many teachers and HTs unaware of what they don't know: a truly Rumsfeldian conundrum!
I've just arrived back in UK and have found a copy of your Phonics and the Resistance to Reading waiting for me. I'm already on Ch 3.
Best,
John
Hi Dick,
Thanks once again for taking the trouble to comment.
Actually, I'm glad you made the point about putting the two data bases together. The government are requesting feedback and, as well as handing out some thoroughly well deserved kicks and buffets for their smug, self congratulatory hauteur, I shall impress on them your suggestion.
Do I think they'll take notice? Do I 'eckers like!
Btw, I saw your comments on Dorothy B's blog. I did ask her some time ago now to look at the data David P and I solicited on spelling through Key Stage 1 but she said that unless there were proper control groups… I take her point but when you've got data on 1,607 children from three different parts of the country, you'd think it might count for something. Pity that it's left to us instead of being initiated by government or the universities.
Anyway, good to hear from you again. Stay well!
John
Do I 'eckers like! too. (Had to look that one up) but who knows?
Why don't you just ask that the two databases be accessible to "qualified researchers"? RAISE data are already available to "School Partners." The UK govt may already have a protocol for accessing public data for research purposes. If not, the US protocol for accessing databases like the Early Childhood Longitudinal Study is readily applicable.
Proponents of "Randomized Control Trials" have over-sold an over-simplified methodology. "Schooling" is such a large-scale operation that one can draw a large number of large random samples to test any finding. If there is any doubt about the replicability, draw another sample, until you get tired.
The Screening Check database readily permits such "randomized control."
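The "draw another sample until you get tired" idea can be sketched in a few lines: draw repeated independent random samples of schools from a large population and check whether a finding shows up in every draw. The population here is simulated (a hypothetical "trained" flag with a built-in pass-rate edge), purely to illustrate the mechanics; it is not real Check data.

```python
# Replicability by repeated random sampling, on simulated data.
# A hypothetical 30% of schools "bought training" and are given a true
# pass-rate edge of about 5 points; we then draw several independent
# samples and see whether the gap replicates in each one.
import random

random.seed(1)

# Simulated population of 10,000 schools.
population = [
    {"trained": t, "pass_rate": random.gauss(75 + 5 * t, 10)}
    for t in (random.random() < 0.3 for _ in range(10000))
]

def gap(sample):
    """Mean pass-rate difference in a sample, trained minus untrained."""
    trained = [s["pass_rate"] for s in sample if s["trained"]]
    untrained = [s["pass_rate"] for s in sample if not s["trained"]]
    return sum(trained) / len(trained) - sum(untrained) / len(untrained)

# Five independent random samples of 1,000 schools each.
gaps = [gap(random.sample(population, 1000)) for _ in range(5)]
print([round(g, 1) for g in gaps])  # a real effect should appear in every draw
```

If the gap turned up in one draw and vanished in the next, that would be exactly the non-replicability the exercise is designed to expose.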
The spelling data that you and David have reported are indeed impressive. However, the Screening Check database has advantages that you didn't have:
–The Check is a superior psychometric instrument for the purpose
–The analysis need not be restricted to one programme
–The number of students, schools, and LEAs is hugely larger
–Information on bio-social characteristics is much more extensive than the "gender" variable that you were able to consider
–There are large samples of cohorts for three school years (so far), whereas you had to combine cohorts across years to get to the 1,607.
–Your analysis by "spelling months and years" has inherent psychometric limitations. You showed that the distribution could be nudged importantly to the right. But the resulting distributions are a long way from "eliminating illiteracy." In part, this is due to using the "spelling age" metric–a glitch that can now be avoided.
With all of the hoopla about "Evidence-based educational decision making" it boggles my mind that this "rich and varied resource" has barely been examined. But my mind boggles easily these days.
Once again, thanks Dick for taking the trouble to reply and be of such help.
I'll take your advice and see what I can do to get access to the two databases, or, more likely, find someone who can gain access.
Watch this space!