Panda 4.2 has been moving veeeerrry sloooowwwly, and that has a lot of webmasters scratching their heads and asking, "What gives?" It has never been quite this slow before. Over on SE Roundtable, there's a fascinating conversation going on between Barry Schwartz and Marie Haynes. Schwartz presents a recrawl-rate theory based on commentary from a WebmasterWorld thread, and Haynes offers an interesting series of counterpoints.
The concept behind the theory is that Panda 4.2 is rolling out so slowly because Google needs to recrawl a page before the new Panda scoring is applied to it.
This would make a lot of sense: Google crawls a page after the 4.2 release, and only then is Panda unleashed (for better or worse) on that specific page. That would go a long way toward explaining how this slow rollout is happening, and perhaps also the technical reason behind it. Maybe it's simply more efficient for Google to roll Panda out in step with its normal crawl behavior?
This doesn't explain why so many sites awaiting recovery have seen zero improvement with this latest refresh. Given that the refresh started in mid-July, after three months of crawling I'd expect all of those sites to have been crawled several times over. Either they're not going to recover, or, the more hopeful answer, the refresh is simply reaching different sites at different times. We are starting to see some recoveries now, but I'd still say that 90% of sites awaiting recovery have yet to see any improvement at all.
This is just a rumor and a theory, nothing more so far, but since we're all twiddling our thumbs and waiting for Panda to arrive - might as well have some fun, eh? What do you think - bollocks or plausible?