The fact that protein machines use energy to undergo conformational rearrangements, and that these “moving parts” perform functional work, places them squarely in the realm of machinery – except on a scale so tiny, their operations are only now coming to light.

1. Valeria Vásquez and Eduardo Perozo, “Structural Biology: A channel with a twist,” Nature 461, 47-49 (3 September 2009) | doi:10.1038/461047a.
2. Liu, Gandhi and Rees, “Structure of a tetrameric MscL in an expanded intermediate state,” Nature 461, 120-124 (3 September 2009) | doi:10.1038/nature08277.
3. Cook, Fukuhara, Jinek and Conti, “Structures of the tRNA export factor in the nuclear and cytosolic states,” Nature 461, 60-65 (3 September 2009) | doi:10.1038/nature08394.
4. Guydosh and Block, “Direct observation of the binding state of the kinesin head to the microtubule,” Nature 461, 125-128 (3 September 2009) | doi:10.1038/nature08259.

Molecular machines – the very concept is only a couple of decades old. This is phenomenal. It is marvelous and wonderful beyond description. You can almost sense the astonishment and excitement of these biophysicists uncovering these tiny wonders in the cell. Who could have imagined this is how life works? Think of the centuries, the millennia, of people going about their business, oblivious to the fact that at scales too tiny to imagine, a whole factory of automated molecular machines was keeping them alive. A few thinkers after Robert Hooke’s discovery of cells envisioned little people (homunculi) doing some of the work, but our instruments were too coarse to elucidate the inner workings until recently – until our generation. Next to the discovery of DNA and the genetic code, this must be considered one of the most important discoveries in the history of science.
If Antony van Leeuwenhoek was astonished at what he saw with his primitive hand lens, how much more should we be flabbergasted at what is coming into focus, now that we can discern the activity of individual molecules? The Darwinists are strangely silent about all this. In our 9 years of reporting, very few papers on molecular machines have even mentioned evolution (e.g., 10/02/2001, 01/09/2002), and those that did usually just assumed it rather than tried to seriously explain how the most primitive life-forms could have become endowed with factories of mechanical filters, scribes, taxicabs and walking robots by chance (e.g., 09/16/2000, 08/24/2009, 08/26/2005). Search on “molecular machines” in the search bar above and check. There are lots of examples. It’s time to cast off that antiquated 19th-century mindset that tried to imagine all this from the bottom up. Let us regard as silly the tales of miracles of “emergence” occurring mindlessly in “a chance Motion of I don’t know what little Particles,” as Christiaan Huygens, our Scientist of the Month, quipped. Paley is back with a vengeance. The contrivances of nature are more wonderful than he or any other philosopher or scientist could have imagined. It’s a Designed world after all. Rejoice, give thanks and sing!

Scientific papers continue to exhibit the exquisite mechanisms in the cell for handling all kinds of situations, through the operation of molecular machines. Here are a few recent examples from this week’s issue of Nature (Sept 3, 2009).

Molecular sieve: What happens when a cell gets bloated?
Too much water entering a cell can increase the pressure against the membrane, “potentially compromising the integrity of the cell,” said Valeria Vásquez and Eduardo Perozo in Nature this week.1 They described findings about a molecular sieve named MscL by Liu et al. in the same issue of Nature.2 MscL in bacteria is made up of multiple protein parts that form a pore in the cell membrane. The research team from Caltech and the Howard Hughes Medical Institute found that the components flatten out and pivot, opening up the pore like an iris when sufficient pressure is applied. This is called “mechanosensation” because it operates automatically via mechanical pressure. “These channels act as ‘emergency relief valves,’ protecting bacteria from lysis [disruption] upon acute osmotic down-shock,” the authors said. “MscL has a complex gating behaviour; it exhibits several intermediates between the closed and open states, including one putative non-conductive expanded state and at least three sub-conducting states.” The team’s contribution was to image one of the intermediate states.

The research paper did not mention evolution. Vásquez and Perozo, however, said, “free-living cells have evolved a variety of mechanisms to deal with sudden variations in the physicochemical properties of their surroundings,” and later said, “Most prokaryotes (bacteria and archaea) have therefore evolved a ‘pressure-release valve’ mechanism in which changes in membrane tension open up channels to form large, aqueous pores in the membrane,” but they did not explain how evolution could have accomplished this. They made it sound like the bacteria purposely employed evolution (whatever they meant by the term) to solve a real problem. They did not explain how bacteria got through osmotic down-shock without the pressure release valves.

Molecular taxicab: Transfer RNAs (tRNA) are made in the nucleus but need to commute to work outside, in the cytoplasm, where the ribosomes are.
They are small enough to barely squeeze through the nuclear pore complex (NPC) – the complicated gates in the nuclear membrane that control traffic in and out – but they don’t avail themselves of that freedom, lest their exposed parts interact with the authentication mechanisms of the NPC. Instead, they hail a taxicab to escort them through. That taxicab, or “tRNA export factor,” is called Xpot. Xpot is a complex molecule that fits around the exposed parts of the tRNA. It literally “wraps around” the tRNA, undergoing conformational changes as it clamps on. Imagine a taxicab wrapping around you, and you get the picture. Xpot is general enough to fit all 20 kinds of tRNAs, but specific enough to protect their delicate active sites. It is also able to recognize and reject tRNAs that are immature. Only tRNAs that have passed a processing exam are allowed in the taxi. The authors of a paper in Nature who studied Xpot said, “Xpot undergoes a large conformational change on binding cargo, wrapping around the tRNA and, in particular, binding to the tRNA 5′ and 3′ ends. The binding mode explains how Xpot can recognize all mature tRNAs in the cell and yet distinguish them from those that have not been properly processed, thus coupling tRNA export to quality control.”3 As an additional control, Xpot does not interact with tRNA except in the presence of another factor in the nucleus called RanGTP. After safe transport through the nuclear pore complex, another factor in the cytoplasm unlocks the RanGTP, allowing the Xpot taxicab to unwrap from the tRNA. The tRNA then heads off to the ribosome to fulfill its work shift as a scribe, translating the genetic code into the protein code.
“Transfer RNAs are among the most ubiquitous molecules in cells,” they said, “central to decoding information from messenger RNAs on translating ribosomes.” The authors of the paper did not discuss how Xpot originated, but six times they said that parts of Xpot are either “conserved,” “evolutionarily conserved” or “highly conserved” (i.e., unevolved) throughout the living world.

Molecular sherpa: Kinesin is among the most fascinating molecular machines in the cell, because it literally “walks” hand-over-hand on microtubule trails, carrying cargo. In doing this, it converts chemical energy from ATP into mechanical work. Writing in this week’s Nature,4 Guydosh and Block of Stanford described direct observation of the binding state of the hands (called heads) of kinesin to the microtubule. They found that it walks tiptoe on the tightrope: “Here we report the development of a single-molecule assay that can directly report head binding in a walking kinesin molecule, and show that only a single head is bound to the microtubule between steps at low ATP concentrations.” The rear head has to unbind before the forward head can bind. This keeps the kinesin from getting stuck with both feet (heads) on the tightrope. If you can stand some jargon, here is what they said about the complexities of how this works:

The inability of one head to bind the microtubule offers a natural explanation for the observation that the microtubule-stimulated release of ADP is inhibited until the microtubule-attached head binds ATP and docks its neck linker (Fig. 4, state 2). Strain produced by an unfavourable neck-linker conformation also explains the observation that ATP does not bind prematurely to the front, nucleotide-free head of a 2-HB kinesin molecule (Fig. 4, state 3). Any tight binding of ATP is disfavoured because it is coupled to neck-linker docking and, therefore, to the generation of a strained configuration in which both neck linkers are docked (Fig. 4, S3).
We anticipate that the single-molecule techniques presented here will be applicable to the study of dynamic properties of other motors and macromolecules that undergo analogous conformational rearrangements.
4 November 2011

Springbok flank Schalk Burger was named South African Rugby Player of the Year for a second time, while Currie Cup champions the MTN Golden Lions picked up a hat-trick of awards, at the 2011 SA Rugby Awards.

The awards ceremony took place at Johannesburg’s Gold Reef City on Thursday evening, and the golden locks of Burger matched the award he received for his performances for the Stormers, who made it into the Super Rugby playoffs, and for the Springboks at the Rugby World Cup.

Two-time winner

Burger previously won the award in 2004 and becomes only the fifth player to win it more than once, following in the footsteps of Naas Botha (who won it four times), Uli Schmidt, Bryan Habana and Fourie du Preez (who all won it twice).

He was voted as Player of the Year by South Africa’s accredited rugby media, ahead of Bismarck du Plessis, Francois Hougaard, Pat Lambie and Victor Matfield.

Burger’s award was the finale of a star-studded night during which 16 awards were won for rugby excellence across all age groups and competitions.

Golden Lions’ success

The Golden Lions scooped three awards after beating the Sharks in the Absa Currie Cup final last weekend to win their first title since 1999.
They were named Absa Team of the Year; their coach, John Mitchell, the Absa Coach of the Year; and their captain, Josh Strauss, Absa Currie Cup Premier Division Player of the Year.

Springbok fullback Pat Lambie (21) was named the Absa Young Player of the Year, building on the fine impression he made in his debut season of 2010 by excelling at the Rugby World Cup in New Zealand.

SA under-20 captain Arno Botha was named the SA Under-20 Player of the Year after some sterling performances at the Junior World Championships in Italy earlier this year.

Easy decision

Cecil Afrika, who was recently named the World Sevens Player of the Year after he topped both the points and try-scoring lists during the 2010/11 HSBC Sevens World Series, walked away with the award for Springbok Sevens Player of the Year in one of the easiest decisions of the night.

The South African Rugby Players Association (Sarpa) Players’ Player of the Year award, voted on by the players themselves, went to Springbok hooker Bismarck du Plessis.

The Supersport Try of the Year was awarded to Springbok Sevens flyer Sibusiso Sithole for his tournament-winning score in the final of the Edinburgh Sevens against Australia.

Tribute

The South African Rugby Union (Saru) also paid tribute to two retiring Springbok legends: John Smit, who has already left South Africa to take up a contract with English club Saracens, and Victor Matfield; both played their last Tests for the Boks in the quarter-finals at the Rugby World Cup.

There were also rewards for Toyota Cheetahs scrumhalf Sarel Pretorius (Vodacom Super Rugby Player of the Year), who will represent Australia’s Waratahs next season, Regent Boland Cavaliers flyhalf Elgar Watts (Absa Currie Cup First Division Player of the Year) and DHL Western Province flyhalf Lionel Cronje (Vodacom Cup Player of the Year).

Craig Joubert, who refereed the Rugby World Cup final and became the first referee to take charge of matches in every round of a single World Cup
competition, unsurprisingly won the Marriott Referee Award.

Significant and satisfying

Oregan Hoskins, President of Saru, said that the 2011 season was one of the more significant and satisfying seasons in recent years.

“On-field success of our national and provincial teams will always be of paramount importance, but I believe we have enjoyed a special year in the history of our rugby, perhaps a watershed year,” said Hoskins.

“That’s because our Springbok team has been embraced by all South Africans in a way I have never experienced before. 2011 was the year that fans from all backgrounds stood behind the Boks from the moment the squad was announced. In the past, that kind of affection was dependent on the delivery of a trophy.”

AWARD WINNERS

Saru Rugby Player of the Year: Schalk Burger
Absa Young Player of the Year: Pat Lambie
Absa Team of the Year: MTN Golden Lions
Absa Coach of the Year: John Mitchell (MTN Golden Lions)
Vodacom Super Rugby Player of the Year: Sarel Pretorius (Toyota Cheetahs)
Absa Currie Cup Premier Division Player of the Year: Josh Strauss (MTN Golden Lions)
Absa Currie Cup First Division Player of the Year: Elgar Watts (Regent Boland Cavaliers)
Vodacom Cup Player of the Year: Lionel Cronje (DHL WP)
Sarpa Players’ Player of the Year: Bismarck du Plessis
Springbok Sevens Player of the Year: Cecil Afrika
Supersport Try of the Year: Sibusiso Sithole (Springbok Sevens vs Australia in the final of the Edinburgh Sevens)
SA Under-20 Player of the Year: Arno Botha
Coca-Cola Craven Week Player of the Tournament: Jan Serfontein (Free State)
Marriott Referee Award: Craig Joubert
Women’s Achiever of the Year: Cebisa Kula
Saru National Club Championship Player of the Tournament: Justin Wheeler (University of Johannesburg)

SAinfo reporter
By Matt Reese

So Nathan Brown decided he would try to plant some soybeans — about 3 acres worth — on March 24 to see how they’d do. While the stand won’t make it as a whole, Brown did learn some lessons from the experiment.

The seeds germinated well, but struggled to consistently emerge from the cold, wet soils this spring.

“The beans planted March 24 were planted at 2 inches deep. I thought that would keep them in the ground longer to avoid frost, which it did. But, being 2 inches deep, there was not enough warmth to actually get them up and out of the ground once they germinated. Next year I’ll hopefully try planting early again in another plot and I’ll shallow up my planting,” he said. “I learned a lot from the experiment.”

Brown shared his experiences with the March 24 soybeans on the Ohio Soil Health and Cover Crops Facebook page. He pulled up some of the soybeans from the spotty stand and was impressed with the nodulation that had already taken place. He posted pictures and a description on the Facebook page for others to learn from his experiment as well.

“These all were pulled up by hand and not dug, and already the roots are full of nodules. Could early planting work? Maybe, maybe not, but we got some out of the ground so I believe it is possible,” Brown said on the page. “The possibilities could be big! We will be ready next year! Don’t be scared to try the outlandish; you will never grow if you stay inside a bubble your whole life!”

The social media effort was headed up by Brown and provides a forum for all things related to soil health, no-till and cover crops in Ohio. Various experts (and others in all stages of the learning process) weigh in with their experiences, successes and failures in the fields. The page was created this spring for farmers to learn and share with others from around the state about a wide array of topics that influence soil health in Ohio.
Brown encourages other farmers interested in soil health to visit Facebook and join the group.
The issues of cloud/SaaS security have been on my mind since the late 90s, when I was working on my first global intranet/extranet project. Personally, I’ve never been terribly concerned with the lower-level technical details of network architecture, transport protocols or with tedious policy writing; you need good security experts to cover these areas properly. I’ve always been drawn to the more forgivably human downsides of the whole SaaS/cloud concept, like this one: how on earth do you prevent password sharing? I’ve been thinking that the solution may be so obvious, so ubiquitous, that it’s just difficult to see past our own fears: what if we could improve the security of our cloud-based applications by handing over our authentication processes to the social media networks?

Steve Henty is an experienced IT Project Manager who has specialized in Web technologies since 1996. He lives in Madrid and is currently working for Toshiba. He can be contacted at [email protected] or on Twitter or at http://www.henty.es.

The Problem

You see them everywhere: those claims that XYZ Web application is 100% secure because it’s as secure as online banking and uses SSL and allows IP restrictions and uses LDAP authentication and so on. All these security features are useful, but at the end of the day we’re still faced with the daunting challenge of convincing users not to give out their passwords, either intentionally (e.g. by lending to “friends”) or unintentionally (e.g. written notes lying around). As soon as one person in your organisation has divulged his or her account details, the entire system is compromised and all the company information is open to whoever gets hold of the password. What’s worse, there’s no real way of knowing if or when this has happened – even our careless user may be unaware that someone else is using the same account. I sometimes see references to this problem on the Web but I haven’t seen any serious solutions.
It tends to get passed off as irrelevant, as if password security has nothing to do with cloud security. But unfortunately its inevitable effect on adoption blows a big hole in the whole cloud computing concept. So currently the industry doesn’t want to talk about this elephant in the room because it might affect uptake, and consequently businesses are not getting the full picture.

With 50% of companies in the UK currently thinking about moving to the cloud this year, we’re going to see an increase in security concerns – that is, as soon as these companies realise they’ve had the wool pulled over their eyes.

When Strong Isn’t Strong

Let’s take a quick look at the ways cloud computing services are currently attempting to deal with the issues of cloud security, and examine how they might fall short.

SSL: A common claim you hear is that XYZ app is as secure as online banking because it uses the same technology: SSL. While I agree that it’s important to encrypt communication, this claim is borderline fraudulent. People are inherently motivated to keep their online banking account details a secret, whereas an employee may actually become motivated to do the complete opposite.

IP Restrictions: Some apps let you restrict access to certain IP addresses. This might work if you’re prepared to forego the benefits of device independence, but to my mind that is one of the great advantages of working in the cloud.

LDAP Integration: Some apps allow integration with directories such as Active Directory. This is great – one less directory to manage. However, in addition to the network security headaches this can bring, it doesn’t guarantee that the person using the password is actually the person you hope it is.

Enterprise Security: Two-factor authentication with security tokens or a sophisticated PKI implementation works nicely if you have the time and resources.
If you have any high-profile users you’ll want this level of security, to avoid breaches like the one Twitter faced a few months ago. However, these solutions can be so expensive and time-consuming that even a large enterprise would baulk at the cost of rolling them out to 100% of employees. So for most companies it’s just not a feasible option.

On The Radar: In the not-so-distant future we may be using mobile phone SIMs, electronic IDs or government-issued browser certificates to authenticate. But how about right now? Is there anything else we can be doing now, in 2010, to improve the security of our cloud-based apps?

The Solution: Social Media Integration?

Solutions often seem counterintuitive at first. What if we could increase security by giving up some control? What if we were to relax our grip a little on the whole identity management and authentication process and let the employee share some of the responsibility?

Most employees already have a personal online identity, a personal brand that they are inherently motivated to protect. They have personal email addresses, blogs, Facebook accounts, LinkedIn accounts – public or semi-public profiles all over the Web. What if we were to allow these social media accounts to connect to our company cloud-based apps and perform the authentication process?

This could mean, for example, that an employee would be able to access a company CRM application simply by logging into a Facebook or LinkedIn account. A breach in our application’s security would then only come at the expense of a breach in the security of a user’s personal account.
This way the responsibility for maintaining security would be shared.

Now that our users have a vested interest in preventing unauthorised access to company data, they might actually start taking to heart all the guidelines about strong passwords we’ve been banging on about for so long.

It could also be argued that by spreading the accounts over a number of different social media sites, thereby decentralising the authentication process, potential hackers might be deterred from casual password guessing and brute-force attacks.

Okay, the idea needs to be developed further and it’s far from perfect. There are certainly issues that need to be addressed regarding adoption, privacy and appropriate checks and controls. However, the technology already exists in the form of APIs, Facebook Connect, OAuth, OpenID and others, and the big social media players now have the critical mass of users you’d need in order to pull off something like this. Even attitudes towards privacy appear to be relaxing, so the timing could be perfect. If my assumptions are right, the missing piece in the cloud security puzzle might be right under our noses, and we’d be able to alleviate some of the fear of cloud computing simply by relinquishing some of our need to control.

I’d be very interested in your views on the subject – especially if you know of anyone who has already had some experience of implementing this in a production environment or has decided against doing it.

Photo credit: Joshua Davis.
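To make the idea concrete, here is a rough sketch of the kind of delegated login the article is describing: a generic OAuth-style authorization-code exchange in Python. The provider URLs, client ID, redirect URI and scope below are placeholder assumptions for illustration, not any real network’s endpoints.

```python
# Sketch: delegating login to a social identity provider via an
# OAuth 2.0 authorization-code flow. All URLs and credentials are
# placeholders, not a real provider's values.
import json
import urllib.parse
import urllib.request

AUTHORIZE_URL = "https://social-provider.example/oauth/authorize"  # placeholder
TOKEN_URL = "https://social-provider.example/oauth/token"          # placeholder

def build_authorize_url(client_id, redirect_uri, state):
    """Step 1: send the employee to the provider's own login page.
    `state` is a random value we verify on the way back to block CSRF."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "scope": "openid email",  # ask only for identity, not the social graph
    }
    return AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
    """Step 2: after the provider redirects back with ?code=...,
    our server swaps the short-lived code for an access token."""
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }).encode()
    with urllib.request.urlopen(TOKEN_URL, data=body) as resp:
        return json.load(resp)  # contains access_token on success
```

The point of the pattern is that the company app never stores or checks a password of its own, so there is no company password for an employee to share; revoking the social account, or the app’s access to it, cuts off access.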
Four personnel of a residential higher secondary institution, including a lecturer and three hostel staff, were arrested in Odisha’s Berhampur on charges of torturing students. According to Santosini Oram, Inspector in-charge of the Berhampur Sadar police station, the misdeeds of the arrested persons came to light on Saturday. As part of an investigation into a case related to the disappearance of a Plus Two student of the same institution, a police team had reached its hostel. According to police sources, the student went missing from the hostel on September 26. The institution is located in the Bhabinipur area of Berhampur and around 200 students reside in its hostel.

Deep wounds

During interrogation of hostel inmates, four students complained that they had been ruthlessly beaten up by a lecturer and three staff of the hostel. The students also had deep wounds. The lecturer and the hostel staff had resorted to physical torture of the students, alleging that they had information about their missing classmate.

Later, the parents of one of the injured students filed an FIR that led to the arrest of the lecturer and the three hostel staff of the institution.
VICTORIA – British Columbia’s public safety minister says he hopes the province’s wildfire situation is not the new normal, but the issue of climate change and its impact on forests must be taken into account.

“Obviously we know that climate is changing, we know that the fire season is starting earlier … we have been doing planning earlier, getting aircraft earlier, but it is a situation that we have to take seriously, the issue of climate change,” Mike Farnworth said Wednesday, shortly after the government declared a provincewide state of emergency in response to hundreds of wildfires.

It’s the second time in as many years that a state of emergency has been declared during the wildfire season and the fourth time in just over two decades. Provincial states of emergency were also declared in 1996 and 2003.

The BC Wildfire Service said 559 fires were burning Wednesday in all corners of the province, with 31 new starts since Tuesday. Just over 1,800 blazes have been recorded since the wildfire season began April 1.

The latest state of emergency will remain in effect for 14 days but can be extended or rescinded as necessary, Farnworth said, adding it ensures federal, provincial and local resources can be delivered in a co-ordinated manner.

In northwestern B.C., a 333-square-kilometre fire has destroyed more than 40 homes and properties in and around Telegraph Creek, said Forests Minister Doug Donaldson.
Nearly a dozen agencies, including firefighters from local First Nations and crews from outside the province, were working to save homes in the community, he said.

“This state of emergency improves our ability to increase that co-ordination as we see risk increasing in other communities,” he said.

Farnworth, who is also the province’s solicitor general, said the emergency was declared based on recommendations from the BC Wildfire Service and emergency management officials.

“As wildfire activity is expected to increase, this is a progressive step in our wildfire response to make sure British Columbia has access to any and all resources necessary,” he added.

Kevin Skrepnek of the Wildfire Service said more than 1,500 properties were on evacuation order at midday Wednesday and at least 10,000 were on an alert, with residents advised to be ready to leave on short notice.

“Certainly, given the number of fires we have going on right now, given the fire activity we are seeing out there, and given the fact that we really see no relief from the weather, there’s definitely the potential this season is going to get worse before it gets better,” he said.

The province is waiting for the arrival of 200 Armed Forces members. Skrepnek said most of them would likely be sent to the Okanagan to help with wildfire mop-up.

The RCMP said Wednesday it would send officers and equipment to assist detachments in central, northern and southern B.C. that have been most affected by fires, which could include vehicles, supplies and additional officers to help at checkpoints or provide relief to local detachments.

By this time last year, hundreds of homes had been lost to wildfires and tens of thousands of people had been displaced.
The human cost has not been as high this year, but the total number of fires is greater, said Skrepnek.

The most severe losses this year have been in the Telegraph Creek area, which Donaldson visited on Tuesday.

“In the town site we saw the random nature of forest fires … there’d be a house standing, and three doors down there’d be a house totally destroyed; all that was left was the foundation and some twisted and melted metal,” he said.

Crews were protecting heritage buildings in the old part of town and setting up sprinklers on the roofs of other homes, he said.

Donaldson described the situation as volatile, adding “a change in wind direction could change everything.”

Environment Canada issued air quality advisories for much of B.C., all of Alberta, and parts of Saskatchewan and Manitoba, as smoke from the fires drifts east. It advised children, the elderly and those with heart and lung conditions to limit their exposure.

The dense smoke also made it more difficult to find fires that were sparked by lightning last weekend, said Skrepnek.

“We can almost guarantee that there are fires out there that haven’t been detected yet,” he said, adding that rain is the only solution to the increasing risk but that isn’t in the forecast.

“Rain is going to be absolutely critical. That is what we need to see, and not just a small, quick event. We need to see a widespread rain across the entire province to alleviate the situation.”

— By Beth Leighton in Vancouver
In football, there are constant power struggles, both on and off the field: players battling players, offenses battling defenses, the passing game battling the running game, coaches battling coaches, and new ways of thinking battling old ways of thinking.

And then there are kickers. Battling no one but themselves and the goalposts, they come on the field in moments most mundane and most decisive. They take all the blame when they fail, and little of the credit when they succeed. Year in and year out, just a little bit at a time, they get better. And better. And better. Until the game is completely different, and no one even noticed that kickers were one of the main reasons why.

If you’ve been reading my NFL column Skeptical Football this season, you may have noticed that I write a lot about kickers. This interest has been building for a few years as I’ve watched field goals drained from long range at an ever-increasing rate, culminating in 2013, when NFL kickers made more than 67 percent of the kicks they took from 50-plus yards, giving them a record 96 such makes. There has been a lot of speculation about how kickers suddenly became so good at the long kick, ranging from performance-enhancing drugs (there have been a few possible cases) to the kickers’ special “k-balls” to more kick-friendly stadiums.

So prior to the 2014 season, I set out to try to see how recently this improvement had taken place, whether it had been gradual or sudden, and whether it was specific to very long kicks or reflected improvement in kicking accuracy as a whole.

What I found fundamentally changed my understanding of the game of football.[1]

[1] And possibly offered insight into how competitive sports can conceal remarkable changes in human capability.

The complete(ish) history of NFL kicking

Pro Football Reference has kicking data broken down by categories (0-19 yards, 20-29, 30-39, 40-49 and 50+ yards) back to 1961.
With this we can see how field goal percentage has changed through the years for each range of distances:

It doesn’t matter the distance; kicking has been on a steady upward climb. If we look back even further, we can see indicators that kicking has been on a similar trajectory for the entire history of the league.

The oldest data that Pro Football Reference has available is from 1932, when the eight teams in the NFL made just six field goals (it’s unknown how many they attempted). That year, kickers missed 37 of 113 extra-point attempts, for a conversion rate of 67.3 percent. The following year, the league moved the goal posts up to the front of the end zone – which led to a whopping 36 made field goals, and a skyrocketing extra-point conversion rate of 79.3 percent. With the uprights at the front of the end zone, kickers missed only 30 of 145 extra points.

For comparison, those 30 missed extra-point attempts (all with the goalposts at the front of the end zone) are more than the league’s 28 missed extra-point attempts (all coming from 10 yards further out) from 2011 to 2014 – on 4,939 attempts.

In 1938-39, the first year we know the number of regular field goals attempted, NFL kickers made 93 of 235 field-goal tries (39.6 percent) to go with 347 of 422 extra points (82.2 percent). In the ’40s, teams made 40.0 percent of their field goal tries (we don’t know what distances they attempted) and 91.3 percent of their XPs. In the ’50s, those numbers rose to 48.2 percent of all field goals and 94.8 percent of XPs. The ’60s must have seemed like a golden era: kickers made 56 percent of all field goals (breaking the 50 percent barrier for the first time) and 96.8 percent of their extra points.

For comparison, since 2010, NFL kickers have made 61.9 percent of their field goal attempts – from more than 50 yards.

In the 1960s, we start to get data on field goal attempts broken down by distance, allowing for the more complete picture above.
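The conversion rates in the last few paragraphs follow directly from the makes-and-attempts counts quoted there; a throwaway check in Python, using only numbers already stated above:

```python
# Recompute the historical conversion rates quoted in the text.
def pct(made, attempts):
    """Conversion rate as a percentage, rounded to one decimal."""
    return round(100 * made / attempts, 1)

# 1932: 37 of 113 extra points missed
assert pct(113 - 37, 113) == 67.3
# 1933, posts moved to the front of the end zone: 30 of 145 missed
assert pct(145 - 30, 145) == 79.3
# 1938-39: 93 of 235 field goals, 347 of 422 extra points
assert pct(93, 235) == 39.6
assert pct(347, 422) == 82.2
```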
In 1972, the NFL narrowed the hash marks to 18.5 feet apart (the width of the goalposts), which improved field goal percentages overall by reducing the number of attempts taken from awkward angles. And then in 1974, the league moved the goal posts to the back of the end zone; but as kick distances are recorded relative to the posts, the main effect of this move was a small (and temporary) decline in the extra-point conversion rate (which you can see in the top line of the chart above). Then we have data on each kick’s exact distance, plus field and stadium type, after 1993. (This info is likely out there for older kicks as well, but it wasn’t in my data.)

So let’s combine everything we know: extra-point attempts and distances prior to 1961, kicks by category from 1961 to 1993, the kicks’ exact distances after 1993, and the changing placement of goal posts and hash marks. Using this data, we can model the likely success of any kick.

With those factors held constant, here’s a look at how good NFL kickers have been relative to their set of kicks in any given year. (This is done using a binomial probit regression with all the variables, using “year taken” as a categorical variable, meaning it’s not treated like a number, so 1961, 1962 and 1963 may as well be “Joe,” “Bob” and “Nancy.” This is similar to how SRS determines how strong each team is relative to its competition.)

When I showed this chart to a friend of mine who’s a philosophy Ph.D. (hi, Nate!),
he said: “It’s like the Hacker Gods got lazy and just set a constant Kicker Improvement parameter throughout the universe.” The great thing about this is that since the improvement in kicking has been almost perfectly linear, we can treat “year” as just another continuous variable, allowing us to generalize the model to any kick in any situation at any point in NFL history.

Applying this year-based model to our kicking distance data, we can see just how predictable the improvement in kicking has actually been:

The model may give teams too much credit in the early ’60s, an era for which we have a lot less data, but over the course of NFL history it does extremely well (it also predicts back to 1932, not shown). What’s amazing is that, while the model incorporates things like hash mark location and (more recently) field type, virtually all the work is handled by distance and year alone. Ultimately, it’s an extremely (virtually impossibly) accurate model considering how few variables it relies on.

(So how accurate is this thing? To be honest, in all my years of building models, I’ve never seen anything like it. The model misses a typical year/distance group prediction by an average of just 2.5 percent. Note that a majority of those predictions involve only a couple hundred observations, at most. For comparison, the standard deviation for 250 observations of a 75 percent event is 2.7 percent. In other words, the model pretty much couldn’t have done any better even if it knew the exact probability of each kick! While there is possibly a smidge of overfitting (there usually is), the risk here is lower than usual, since the vast majority of each prediction is driven solely by year and distance. Here’s the regression output: I wish I could take credit for this, but it really just fell into place. Nerds, perk up: the z-value on “season” is 46.2!)
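The shape of the model can be sketched in a few lines: a probit link applied to a distance term plus a continuous year term. The coefficients below are invented for illustration (the article’s fitted values aren’t reproduced here), and the last line works out the footnote’s binomial yardstick:

```python
import math

def phi(x):
    """Standard normal CDF, the link function in a probit regression."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative coefficients only -- NOT the article's fitted values.
B0, B_DIST, B_YEAR = 1.8, -0.06, 0.025

def p_make(distance, year):
    """Predicted make probability for a kick, with 'year' entering as a
    continuous linear term (the near-perfectly-linear improvement above)."""
    return phi(B0 + B_DIST * distance + B_YEAR * (year - 1961))

# The footnote's sanity check: the binomial standard deviation for 250
# observations of a 75 percent event is about 2.7 percentage points,
# i.e. roughly the model's average prediction error.
sd = math.sqrt(0.75 * 0.25 / 250)
```

Under this shape, probability falls smoothly with distance and rises steadily with year, which is why distance and year alone do virtually all the work.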
If every predictive relationship I looked for were that easy to find, life would be sweet.

This isn’t just trivia; it has real-world implications, from the tactical (how should you manage the clock knowing your opponent needs only moderate yardage to get into field goal range?) to the organizational (maybe a good kicker is worth more than the league minimum). And then there’s the big one.

Fourth down

If you’re reading this site, there’s a good chance you scream at your television a lot when coaches sheepishly kick or punt instead of going for it on fourth down. This is particularly true in the “dead zone” between roughly the 25- and 40-yard lines, where punts accomplish little and field goals are supposedly too long to be good gambles.

I’ve been a card-carrying member of Team Go-For-It since the ’90s. And we were right, back then. With ’90s-quality kickers, settling for field goals in the dead zone was practically criminal. As of 10 years ago, around when these should-we-go-for-it models rose to prominence, we were still right. But a lot has changed in 10 years. Field-goal kicking is now good enough that many previous calculations are outdated. Here’s a comparison between a field-goal kicking curve from 2004 and one from 2014:

There’s no one universally agreed-upon system for when you should go for it on fourth down. But a very popular one is The New York Times’ 4th Down Bot, which is powered by models built by Brian Burke, founder of Advanced Football Analytics and a pioneer in the quantitative analysis of football. It calculates the expected value (in either points or win percentage) of every fourth-down play in the NFL, and tweets live results during games. Its 19,000-plus followers are treated to the bot’s particular emphasis on the many, many times coaches fail to go for it on fourth down when they should.

A very helpful feature of the 4th Down Bot is that its game logs break down each fourth-down decision into its component parts.
This means that we can see exactly what assumptions the bot is making about the success rate of each kick. Comparing those to my model, it looks to me like the bot’s kickers are approximately 2004-quality. I asked Burke about this, and he agrees that the bot is probably at least a few years behind, and says that its kicking assumptions are based on a model fitted to the most recent eight years of kicking data.

(I don’t blame Burke or others for not updating their models based on the last few years. It’s good to be prudent and not assume that temporary shifts one way or the other will hold; normally it is better to go with the weight of history rather than with recent trends. But in this case, the recent trends are backed by the weight of history. Here’s Burke’s full statement: “The bot is about 3-4 years behind the trends in FG accuracy, which have been improving at longer distances. It uses a kicking model fitted to the average of the recent 8-year period of data. AFA’s more advanced model for team clients is on the current ‘frontier’ of kick probabilities, and can be tuned for specific variables like kicker range, conditions, etc. Please keep in mind the bot is intended to be a good first-cut on the analysis and a demonstration of what is possible with real-time analytics. It’s not intended as the final analysis.”)

But more importantly, these breakdowns allow us to essentially recalculate the bot’s recommendations given a different set of assumptions. And the improvement in kicking dramatically changes the calculus of whether to go for it on fourth down in the dead zone. The following table compares “Go or No” charts from the 4th Down Bot as it stands right now versus how it would look with projected 2015 kickers. (The exact values in the chart may differ slightly from the reports on the Times’ website because I had to reverse-engineer the bot’s decision-making process.)
(Basically, I’m assuming the bot’s model gets everything exactly right as far as expected value from various field locations, chances of converting a fourth-down attempt, etc., then recalculating the final expected-value comparison using 2015 kickers.)

Having better kickers makes a big difference, as you can see from the blue sea on the left versus the red sea on the right. (The 4th Down Bot’s complete “Go or No” table is on the Times’ website.)

Getting these fourth-down calls wrong is potentially a big problem for the model. As a test case, I tried applying the 4th Down Bot’s model to a selection of the most relevant kicks from between 25 and 55 yards in 2013, then looked at what coaches actually did in those scenarios. I graded both against my kicking-adjusted results for 2013. While the updated version still concluded that coaches were too conservative (particularly on fourth-and-short), it found that coaches were (very slightly) making more correct decisions than the 4th Down Bot.

The differences were small (coaches beat the bot by only a few points over the entire season), but even being just as successful as the bot would be a drastic result considering how absolutely terrible coaches’ go-for-it strategy has been for decades. In other words, maybe it’s not that NFL coaches were wrong; they were just ahead of their time!

Time-traveling kickers

Having such an accurate model also allows us to see the overall impact kicking improvement has had on football. For example, we can calculate how kickers from different eras would have performed on a common set of attempts. In the following chart, we can see how many more or fewer points per game the typical team would have scored if kickers from a different era had taken its kicks (the red line is the actual points per game from field goals in each year):

The last time kickers were as big a part of the game as they are today, the league had to move the posts back!
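The time-traveling comparison boils down to applying each era’s fitted make probabilities to a common set of attempts and counting expected points. Here is a toy sketch; the per-era coefficients are invented stand-ins, since the article’s fitted values aren’t given:

```python
import math

def phi(x):
    """Standard normal CDF (the probit link)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Invented (intercept, distance-slope) pairs standing in for each era's
# fitted kicking model -- illustrative only, not the article's estimates.
ERAS = {"1970s": (1.6, -0.055), "2010s": (2.9, -0.060)}

# A common set of field-goal attempts (distances in yards) that both
# eras' kickers are imagined to take.
COMMON_KICKS = [22, 28, 33, 38, 44, 49, 53]

def expected_fg_points(era):
    """Expected field-goal points (3 per make) if this era's kickers
    took the common set of attempts."""
    b0, b_dist = ERAS[era]
    return sum(3.0 * phi(b0 + b_dist * d) for d in COMMON_KICKS)

# The gap is the extra scoring that modern accuracy alone would produce
# on an identical slate of kicks.
gap = expected_fg_points("2010s") - expected_fg_points("1970s")
```

Holding the slate of kicks fixed like this isolates the accuracy effect (the width of the “ribbon”) from the effect of simply attempting more field goals.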
Since the rule change, the amount of scoring from field goals has increased by more than 2 points per game. A small part of the overall increase (the overall movement of the red line) is a result of teams taking more field goals, but most of it comes from the improvement in accuracy alone (the width of the “ribbon”).

How does this compare to broader scoring trends? As a baseline for comparison, I took the average points scored in every NFL game since 1961, and then looked at how much league scoring deviated from that at any given point in time (the “scoring anomaly”). Then I looked at how much of that anomaly was a result of kicking accuracy. (The scoring deviation on this chart is calculated relative to the average game over the period. The kicking accuracy is relative to the median kicker of the period.)

Amid wild fluctuations in scoring, kicking has remained a steady, driving force. For all the talk of West Coast offenses, the invention of the pro formation, the wildcat, 5-wide sets, the rise of the pass-catching tight end, Bill Walsh, the Greatest Show on Turf, and the general recognition that passing, passing and more passing is the best way to score in football, half the improvement in scoring in the past 50-plus years of NFL history has come solely from field-goal kickers kicking more accurately. (Side note: I’ve also looked at whether kicking improvement has been a result of kickers who are new to the league being better than older kickers, or of older kickers getting better themselves. The answer is both.)

The past half-century has seen an era of defensive innovation, running roughly from the mid-’60s to the mid-’70s; a chaotic scoring epoch with wild swings until the early ’90s; and then an era of offensive improvement. But the era of kickers is forever.

Reuben Fischer-Baum contributed graphics.

CORRECTION (Jan. 28, 2:22 p.m.): An earlier version of this article incorrectly gave the distances from which extra-point kicks were taken in 1933 and in recent years.
Actual extra-point distances aren’t recorded.