Climate Change Banter


Jonger

Because the satellites have long-term biases. Resolution is not a bias, it is simply a lack of precision which over time averages out to zero.
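(A quick statistical gloss on that point: for N independent measurement errors with standard deviation \(\sigma\), the standard error of the mean is

\[
\mathrm{SE} = \frac{\sigma}{\sqrt{N}},
\]

so random imprecision washes out as the record lengthens, while a systematic bias carries through to the long-term trend untouched.)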

 

I was referring more to the recent rash of questions on here: "What's going on with the satellite temps not mirroring the surface data??"

 

Short term view vs. long term. The differences in the long term trends between satellite data sets and surface are pretty minor.


I was referring more to the recent rash of questions on here: "What's going on with the satellite temps not mirroring the surface data??"

 

Short term view vs. long term. The differences in the long term trends between satellite data sets and surface are pretty minor.

 

The long-term trend difference between RSS/UAH and other satellite analyses (STAR, plus several peer-reviewed criticisms of UAH/RSS), RATPAC, and surface data is more than noise and is scientifically significant.

 

The RSS trend since 1979 is .13C/decade. This would imply a surface trend of around .11C/decade. The measured surface trend is .17C/decade. This is 50% more than implied by RSS. This is very significant scientifically and has serious implications for climate sensitivity.
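To make the arithmetic explicit, a minimal sketch; the 1.2 TLT/surface amplification factor is an assumed, model-typical value, not a number stated in this post:

```python
# Sketch of the implied-trend arithmetic (all values in C/decade).
# The 1.2 amplification factor (TLT warming / surface warming) is an
# assumption (a rough model-typical value), not a figure from this post.
rss_tlt_trend = 0.13
amplification = 1.2
implied_surface = rss_tlt_trend / amplification   # ~0.11
measured_surface = 0.17

print(f"implied surface trend: {implied_surface:.2f} C/decade")
print(f"measured / implied: {measured_surface / implied_surface:.2f}")  # ~1.6x
```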

 

The balance of evidence suggests UAH/RSS is in the minority, and is more prone to error given how susceptible the results are to arbitrary choices in methodology. 


The long-term trend difference between RSS/UAH and other satellite analyses (STAR, plus several peer-reviewed criticisms of UAH/RSS), RATPAC, and surface data is more than noise and is scientifically significant.

STAR is TMT only. There are only three viable datasets that measure in the TLT, and two happen to agree very well with one another. The dataset that happens to diverge (RATPAC) requires substantially more quality control and spatial extrapolation.

The RSS trend since 1979 is .13C/decade. This would imply a surface trend of around .11C/decade. The measured surface trend is .17C/decade. This is 50% more than implied by RSS. This is very significant scientifically and has serious implications for climate sensitivity.

:huh:

Uh, what? This is not necessarily true at all, particularly on a sub-centennial resolution.

A slight change in cloud height or relative latent heat release w/ time can invert the surface-TLT relationship, and vice versa.

The balance of evidence suggests UAH/RSS is in the minority, and is more prone to error given how susceptible the results are to arbitrary choices in methodology.

You mean the "evidence" that you're inventing on the fly? Sure.


The long-term trend difference between RSS/UAH and other satellite analyses (STAR, plus several peer-reviewed criticisms of UAH/RSS), RATPAC, and surface data is more than noise and is scientifically significant.

 

The RSS trend since 1979 is .13C/decade. This would imply a surface trend of around .11C/decade. The measured surface trend is .17C/decade. This is 50% more than implied by RSS. This is very significant scientifically and has serious implications for climate sensitivity.

 

The balance of evidence suggests UAH/RSS is in the minority, and is more prone to error given how susceptible the results are to arbitrary choices in methodology. 

 

I believe UAH is .15C/decade, compared to the .17C/decade for surface trends. That's within the realm of statistical noise, and not actually scientifically significant...it's been explained in more detail many times on here.

 

Also, I don't believe your .11C/decade number is right for "implied" surface trend.

 

Not sure why you grouped UAH/RSS together, yet only cited RSS' trend. Some might think that demonstrates bias...


 


Uh, what? This is not necessarily true at all, particularly on a sub-centennial resolution.

A slight change in cloud height or relative latent heat release w/ time can invert the surface-TLT relationship, and vice versa.

 

 

 

If the TLT trend is .13C/decade, it would imply one of two things
 
1) The surface trend is .11C/decade (following the predicted surface-TLT relationship)
 
or
 
2) The predicted surface-TLT relationship is wrong, the water feedback is non-existent or negative instead of very positive as expected, and climate sensitivity is much less than expected.
 
In both cases, climate sensitivity is much less than expected. 
 
A TLT trend of .13C/decade implies low climate sensitivity no matter how you look at it.
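In symbols, with an assumed model-typical amplification factor \(A\) (the \(A \approx 1.2\) value is an illustrative number, not one from the thread):

\[
\Delta T_{\mathrm{sfc}} \approx \frac{\Delta T_{\mathrm{TLT}}}{A} = \frac{0.13}{1.2} \approx 0.11\ ^{\circ}\mathrm{C/decade} \quad (\text{case 1}),
\]

while taking the measured \(0.17\ ^{\circ}\mathrm{C/decade}\) surface trend at face value forces \(A \approx 0.13/0.17 \approx 0.76 < 1\) (case 2), i.e. no positive lapse-rate/water-vapor amplification.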
 
 

STAR is TMT only. There are only three viable datasets that measure in the TLT, and two happen to agree very well with one another. The dataset that happens to diverge (RATPAC) requires substantially more quality control and spatial extrapolation.

 

 

This is false. There are many sources that use MSU data to measure TLT. UAH and RSS both require huge amounts of quality control and adjustment, so I am not sure how you can say they require quantitatively less than RATPAC. Nor would I say that having more or less quality control is necessarily a bad thing.


I believe UAH is .15C/decade, compared to the .17C/decade for surface trends. That's within the realm of statistical noise, and not actually scientifically significant...it's been explained in more detail many times on here.

 

Also, I don't believe your .11C/decade number is right for "implied" surface trend.

 

Not sure why you grouped UAH/RSS together, yet only cited RSS' trend. Some might think that demonstrates bias...

 

I believe the 'new' UAH trend is the same as the RSS trend.

 

.11C/decade is correct for an implied surface trend. The TLT should be warming faster if the water vapor feedback is correct.

 

You could say the difference between surface and satellite is statistical noise, but that would be true even if the UAH trend was .07C/decade because the error bars for MSU data are so large (+/- .1C/decade). It's still of scientific interest.
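A rough sketch of that error-bar point; the ±0.05 surface uncertainty is an assumed illustrative value, only the ±0.1 MSU figure comes from the thread:

```python
import math

# Is the satellite-surface trend gap inside the combined error bars?
# All values in C/decade. The +/-0.1 MSU uncertainty is from the post;
# the +/-0.05 surface uncertainty is an assumed illustrative value.
sat_trend, sat_err = 0.13, 0.10
sfc_trend, sfc_err = 0.17, 0.05

gap = sfc_trend - sat_trend                    # 0.04
combined = math.sqrt(sat_err**2 + sfc_err**2)  # ~0.11 for independent errors

print(f"gap = {gap:.2f}, combined uncertainty = +/-{combined:.2f}")
# 0.04 < 0.11: inside the error bars ("statistical noise" in that sense),
# yet still of scientific interest, as argued above.
```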


If the TLT trend is .13C/decade, it would imply one of two things

1) The surface trend is .11C/decade (following the predicted surface-TLT relationship)

or

2) The predicted surface-TLT relationship is wrong, the water feedback is non-existent or negative instead of very positive as expected, and climate sensitivity is much less than expected.

In both cases, climate sensitivity is much less than expected.

Not necessarily. There's a lot more involved than just water vapor feedback when it comes to TLT warming relative to the surface, at least on a sub-centennial timescale. Changes to the BDC (Brewer-Dobson circulation), Hadley Cells, surface wind speeds, or cloud height(s) can significantly amplify or dampen trends in the TLT for periods up to and over 20 years.

A TLT trend of .13C/decade implies low climate sensitivity no matter how you look at it.

Sure, but there's no way you can gauge TLT sensitivity using just 35yrs of data. There's lots of inherent variability underlying the AGW signal in the TLT, more so than at the surface. Eventually the surface and TLT will meet up somewhere.

This is false. There are many sources that use MSU data to measure TLT. UAH and RSS both require huge amounts of quality control and adjustment, so I am not sure how you can say they require quantitatively less than RATPAC.

How many of them are operational, mainstream datasets? I can think of four, three of which are applicable in the TLT.


Even if it were true that there was a warm bias due to drift, there were numerous papers pointing out probable cool biases. If they had corrected for those as well, the result would have been unchanged, or perhaps even a stronger warming trend. It appears they only changed the method in ways that made it cooler and which were not peer-reviewed.

 

But then again, UAH had run pretty close to RSS in the past. It's strange how they'd been diverging prior to the update as well.

 

Nothing is black/white here.


I agree. Certainly, RATPAC is consistent with the theory. UAH and RSS appear to be outliers.

 

This is not a fair statement. How many sources do we have actually measuring LT temps? And yet you claim the satellite sources are "outliers". 

 

Being inconsistent with the theory does not make a source an outlier, as Will explained above.


This is not a fair statement. How many sources do we have actually measuring LT temps? And yet you claim the satellite sources are "outliers".

Being inconsistent with the theory does not make a source an outlier, as Will explained above.

You hit the nail on the head with this.

There are three LT-based datasets, and four that measure into the MT. Attempting to use the surface data to diagnose *potential* error in the LT data is not a physically sound approach.


This is not a fair statement. How many sources do we have actually measuring LT temps? And yet you claim the satellite sources are "outliers". 

 

Being inconsistent with the theory does not make a source an outlier, as Will explained above.

 

 

Agree. You can't verify the satellite datasets w/ datasets that measure something completely different. Saying that the satellites are outliers is presupposing that the surface datasets provide a more accurate, representative measure of global temperatures. One could make the argument that the satellites provide a more representative measure. The bottom line is that in the longer term, the datasets are very similar. Some seem to be expecting the satellites to consistently be in lock-step with the surface, which makes no meteorological sense when one considers the different domains being measured. It also appears that some are quick to point out the flaws in the satellite measurements, while never broaching the subject of sfc dataset flaws, due to the underlying biases present. Global temperature measurement is not an exact science, and we need to accept that limitations exist with both techniques.


Agree. You can't verify the satellite datasets w/ datasets that measure something completely different. Saying that the satellites are outliers is presupposing that the surface datasets provide a more accurate, representative measure of global temperatures. One could make the argument that the satellites provide a more representative measure. The bottom line is that in the longer term, the datasets are very similar. Some seem to be expecting the satellites to consistently be in lock-step with the surface, which makes no meteorological sense when one considers the different domains being measured. It also appears that some are quick to point out the flaws in the satellite measurements, while never broaching the subject of sfc dataset flaws, due to the underlying biases present. Global temperature measurement is not an exact science, and we need to accept that limitations exist with both techniques.

 

The answer is in the uncertainty defined by each dataset. And it's a fact that surface datasets have lower uncertainties than their satellite counterparts in their respective domains. That, coupled with the fact that humans live at the surface, inherently makes the surface datasets more relevant and accurate than the MSU products for measuring global warming, IMO.

 

While there is no magic bullet, the fact that RATPAC accurately emulates the sfc datasets should give at least a bit more confidence that it's probably a relatively accurate dataset aloft. I'm not sure why that's in dispute. Remember, RSS and UAH use the same equipment and the same sensors to measure temperature. It's very possible that both are very much in need of orbital drift correction (among others). RATPAC is an independent source of hundreds of sondes unrelated to the GHCN v3 datasets.


The answer is in the uncertainty defined by each dataset. And it's a fact that surface datasets have lower uncertainties than their satellite counterparts in their respective domains. That, coupled with the fact that humans live at the surface, inherently makes the surface datasets more relevant and accurate than the MSU products for measuring global warming, IMO.

While there is no magic bullet, the fact that RATPAC accurately emulates the sfc datasets should give at least a bit more confidence that it's probably a relatively accurate dataset aloft. I'm not sure why that's in dispute. Remember, RSS and UAH use the same equipment and the same sensors to measure temperature. It's very possible that both are very much in need of orbital drift correction (among others). RATPAC is an independent source of hundreds of sondes unrelated to the GHCN v3 datasets.

RATPAC has poor spatial coverage and a weak resolution even after the data is gridded. Huge areas of open ocean and uninhabited landmass are left blank before the gridding and homogenization process. A vast majority of radiosondes are launched over North America, Europe, and E/SE Asia. That's about 15-20% of the globe.

The surface datasets shouldn't even be compared to the TLT data because they're not measuring there, and the two domains can diverge significantly on a decadal scale. They have no deterministic value in the TLT.


RATPAC has poor spatial coverage and a weak resolution even after the data is gridded. Huge areas of open ocean and uninhabited landmass are left blank before the gridding and homogenization process. A vast majority of radiosondes are launched over North America, Europe, and E/SE Asia. That's about 15-20% of the globe.

The surface datasets shouldn't even be compared to the TLT data because they're not measuring there, and the two domains can diverge significantly on a decadal scale. They have no deterministic value in the TLT.

And yet it very closely matches the 60 year trend of all the surface datasets. Why is that? Resolution is not as large of a factor over a long period. This has been explained many times.


And yet it very closely matches the 60 year trend of all the surface datasets. Why is that? Resolution is not as large of a factor over a long period. This has been explained many times.

What the surface data is doing is irrelevant.

RATPAC is diverging from all other TLT datasets because it's not measuring over large portions of the globe, particularly over the equatorial & Southern Hemispheric oceans. Much of Africa, Russia, and Eurasia are also blank before gridding and extrapolation takes place.

The TLT data is a 3D, multi-domain, depth-based representation of the lower tropospheric boundary. The surface data is a 2D representation of the skin temperature alone. Two totally different depictions.


Furthermore, RATPAC only uses 85 stations, most of which are land-based and in the NH.

http://www1.ncdc.noaa.gov/pub/data/images/Ratpac-datasource.docx

I don't think people realize just how much homogenization needs to be done just to correct for diurnal bias, aerosol contamination, and spatial inefficiency. This is not a rigorous dataset.


What the surface data is doing is irrelevant.

RATPAC is diverging from all other TLT datasets because it's not measuring over large portions of the globe, particularly over the equatorial & Southern Hemispheric oceans. Much of Africa, Russia, and Eurasia are also blank before gridding and extrapolation takes place.

The TLT data is a 3D, multi-domain, depth-based representation of the lower tropospheric boundary. The surface data is a 2D representation of the skin temperature alone. Two totally different depictions.

 

Or it's diverging from TWO other datasets. Both of said datasets use the same fallible equipment and contain high uncertainties since they are not direct measurements, particularly in the tropics. Based on the major adjustments to UAH in the past, it's a wonder that you defend it so viciously while attempting to crap on a peer-reviewed dataset run by a fantastic organization.

 

We will agree to disagree.  Debating you is so very fruitless.


Actually, I made a mistake. RATPAC does nothing in the way of gridding or spatial homogenization at all. They merely take the data from the 85 stations and average it out. Wow..that's just an awful way to go about this.

(Keep in mind, this is a bit old, from when UAH and RSS were lacking homogeneity, unlike now.)

http://www.met.reading.ac.uk/~swsshine/sparc4/Lanzante_SPARCTabard.ppt

Here's the station map. Look how much of the Pacific and Southern Oceans are just left blank. Hilarious.

[Station map image]


Actually, I made a mistake. RATPAC does nothing in the way of gridding or spatial homogenization at all. They merely take the data from the 85 stations and average it out. Wow..that's just an awful way to go about this.

(Keep in mind, this is a bit old, from when UAH and RSS were lacking homogeneity, unlike now.)

http://www.met.reading.ac.uk/~swsshine/sparc4/Lanzante_SPARCTabard.ppt

Here's the station map. Look how much of the Pacific and Southern Oceans are just left blank. Hilarious.

[Station map image]

 

First of all, 85 stations over such a long period is more than sufficient and the areal coverage looks reasonable. Accurate global averages have been constructed with far fewer stations.

 

Second, you are incorrect in your assertion that they do not do gridding or spatial homogenization. A very quick read of the RATPAC paper reveals their spatial homogenization procedure:

 

In an effort to obtain spatially unbiased large-scale means, we compensate for uneven longitudinal distribution of stations by creating regional means before averaging data into zonal bands. Each 30° zonal band was divided into three longitudinal regions of 120° each: 30°W to 90°E, 90°E to 150°W and 150°W to 30°W. Hemispheric (0–90°), tropical (30°S–30°N) and extratropical (30–90°) means were calculated from these zonal means, areally weighted using the cosine of the latitude of the midpoint of the zone, and the global mean was the average of the hemispheric means.

 

http://onlinelibrary.wiley.com/doi/10.1029/2005JD006169/full
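A minimal sketch of the quoted procedure, assuming each station record has already been reduced to a single anomaly value; the function name and simplifications are illustrative, not the paper's:

```python
import math
from collections import defaultdict

def ratpac_style_global_mean(stations):
    """stations: iterable of (lat, lon, value), lat in [-90, 90], lon in [-180, 180).

    Sketch of the quoted scheme: station values -> three 120-degree regional
    means per 30-degree zonal band -> cos(latitude)-weighted hemispheric
    means -> global mean (average of the two hemispheres).
    """
    regions = defaultdict(list)
    for lat, lon, val in stations:
        band = min(int((lat + 90) // 30), 5)       # six 30-degree zonal bands
        sector = int(((lon + 30) % 360) // 120)    # 30W-90E, 90E-150W, 150W-30W
        regions[(band, sector)].append(val)

    zonal = defaultdict(list)
    for (band, _sector), vals in regions.items():
        zonal[band].append(sum(vals) / len(vals))  # mean of stations in each region

    hemis = defaultdict(list)
    for band, region_means in zonal.items():
        mid = -75 + 30 * band                      # latitude of band midpoint
        weight = math.cos(math.radians(mid))       # areal weight for the band
        hemis["S" if mid < 0 else "N"].append((weight, sum(region_means) / len(region_means)))

    hemi_means = [sum(w * m for w, m in pairs) / sum(w for w, _ in pairs)
                  for pairs in hemis.values()]
    return sum(hemi_means) / len(hemi_means)

# e.g. ratpac_style_global_mean([(40.0, -105.0, 0.21), (-34.9, 138.6, 0.12)])
```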

 

 

 

All you are succeeding at is an obviously biased attempt to make false accusations and cheap shots at an otherwise reputable peer-reviewed source. The spatial homogenization technique had its own heading in the RATPAC paper! You didn't even bother to skim the paper before making this false accusation! Where is your credibility? Clearly you are on a tirade and don't care what the truth is at all or you would have actually read the paper you are critiquing. 

 

The AR5 assesses the uncertainty in RATPAC as similar to that of UAH and RSS (+/- .1C/decade). There are also a number of other independent radiosonde LT sources that use very different data and/or methods and conclude even greater warming than RATPAC finds.

 

UAH and RSS show the least warming out of at least 6+ different independent tropospheric sources, some of which are MSU based and some of which are radiosonde based. They are also fairly inconsistent with surface data, which is generally considered more reliable, and with theoretical expectations of how the troposphere warms in relation to the surface over 35+ years. RSS and UAH have assessed uncertainties of +/- .1C/decade. This is the definition of an outlier.


First of all, 85 stations over such a long period is more than sufficient and the areal coverage looks reasonable. Accurate global averages have been constructed with far fewer stations.

Second, you are incorrect in your assertion that they do not do gridding or spatial homogenization. A very quick read of the RATPAC paper reveals their spatial homogenization procedure:

Wrong. What they're describing is a basic procedure to account for the fact that the stations are not equally distributed across the globe. The areas that are not being measured have zero weight in the data because there's no homogenization procedure to account for regional differentials like there is in GISS et al.

If you think this is an adequate measurement procedure, you're off your rocker. If you think the MSU data is bad, then this is 10X worse.

All you are succeeding at is an obviously biased attempt to make false accusations and cheap shots at an otherwise reputable peer-reviewed source. The spatial homogenization technique had its own heading in the RATPAC paper!

Apparently you lack the ability to read a paper's abstract without misinterpreting it. Try reading it again. The only "homogenization" being done is a distributive equalization procedure to normalize the sample across the globe. There are homogenizations done for diurnal & aerosol contamination, but these are difficult to correct for.

You didn't even bother to skim the paper before making this false accusation! Where is your credibility? Clearly you are on a tirade and don't care what the truth is at all or you would have actually read the paper you are critiquing.

Actually, I read the entire paper. Apparently you didn't because you're mischaracterizing the nature of the RATPAC adjustments.

The AR5 assesses the uncertainty in RATPAC as similar to that of UAH and RSS (+/- .1C/decade). There are also a number of other independent radiosonde LT sources that use very different data and/or methods and conclude even greater warming than RATPAC finds. UAH and RSS show the least warming out of at least 6+ different independent tropospheric sources, some of which are MSU based and some of which are radiosonde based.

There are three TLT based datasets, and four that extend into the TMT. Any other datasets in reference (IGRA and LKS) are not operational in nature. The sonde data that RATPAC uses is the same sonde data that IGRA and LKS use. They just use different homogenization procedures to account for diurnal contamination, etc.

They are also fairly inconsistent with surface data, which is generally considered more reliable, and with theoretical expectations of how the troposphere warms in relation to the surface over 35+ years. RSS and UAH have assessed uncertainties of +/- .1C/decade. This is the definition of an outlier.

You can be such a blockhead sometimes. The surface datasets do not measure in the TLT, and the theoretical TLT-surface relationship is quite fickle. A number of forcings and/or potential feedbacks can invert the relationship for extended periods of time.

Certainly, this isn't a physically plausible reason to favor one dataset over the other.


Wrong. What they're describing is a basic procedure to account for the fact that the stations are not equally distributed across the globe. The areas that are not being measured have zero weight in the data because there's no homogenization procedure to account for regional differentials like there is in GISS et al.

If you think this is an adequate measurement procedure, you're off your rocker. If you think the MSU data is bad, then this is 10X worse.

Apparently you lack the ability to read a paper's abstract without misinterpreting it. Try reading it again. The only "homogenization" being done is a distributive equalization procedure to normalize the sample across the globe. There are homogenizations done for diurnal & aerosol contamination, but these are difficult to correct for.

Actually, I read the entire paper. Apparently you didn't because you're mischaracterizing the nature of the RATPAC adjustments.

 

 

Wrong. This is a spatial homogenization technique. It's a simple one and it ensures that no region of the globe is weighted too heavily due to having a higher number of stations.

 

You said specifically that they simply average the 85 stations together. That is blatantly false. Everybody here can see that, so who are you trying to impress? They broke the globe into 36 gridboxes, averaged the data within each box, and calculated an area-weighted average of the gridboxes.

 
 
 

There are three TLT based datasets, and four that extend into the TMT. Any other datasets in reference (IGRA and LKS) are not operational in nature. The sonde data that RATPAC uses is the same sonde data that IGRA and LKS use. They just use different homogenization procedures to account for diurnal contamination, etc.

You can be such a blockhead sometimes. The surface datasets do not measure in the TLT, and the theoretical TLT-surface relationship is quite fickle. A number of forcings and/or potential feedbacks can invert the relationship for extended periods of time.

Certainly, this isn't a physically plausible reason to favor one dataset over the other.

 

This is false.

 

There are several studies using MSU data that provide results different (warmer) than UAH or RSS.

 

There is STAR (I said tropospheric, not lower tropospheric).

 

There is RATPAC.

 

There are RICH, RAOBCORE, IUK and several others that also use sonde data. Some of them use entirely different data than RATPAC (wind data instead of temperature data) and are thus entirely independent.

 

 

This is easily 6+, probably 10+, sources using independent methods and/or independent data. RSS and UAH show the least warming out of all of them. This is the definition of an outlier.

 

 

 
 

You can be such a blockhead sometimes. The surface datasets do not measure in the TLT, and the theoretical TLT-surface relationship is quite fickle. A number of forcings and/or potential feedbacks can invert the relationship for extended periods of time.

Certainly, this isn't a physically plausible reason to favor one dataset over the other.

 

 

We're not talking about 15-20 years here. We're talking about 36 years. Over 36 years it is extremely unlikely the lower troposphere would warm slower than the surface. You haven't provided a shred of evidence to the contrary. All you're doing is trying to muddy the waters.


Wrong. This is a spatial homogenization technique. It's a simple one and it ensures that no region of the globe is weighted too heavily due to having a higher number of stations.

That's not a spatial homogenization procedure, it's a spatial normalization procedure accounting for the uneven distribution of stations. It's not homogenization unless they're changing or extrapolating the data itself to account for factors external to the dataset or for errors in the data. They're not extrapolating or homogenizing data between stations like GISS/NCDC do.

You said specifically that they simply average the 85 stations together. That is blatantly false. Everybody here can see that, so who are you trying to impress? They broke the globe into 36 gridboxes, averaged the data within each box, and calculated an area-weighted average of the gridboxes.

Do you understand what homogenization means/implies? I said they're doing no spatial data homogenization. Obviously they're weighting/normalizing the data. That's something different.

There are several studies using MSU data that provide results different (warmer) than UAH or RSS.

There are older papers that reached this conclusion regarding older versions of UAH and RSS. There are no peer reviewed studies critiquing the current versions of these datasets in the TLT layer, because any errors in the TLT data appear to be minor.

There is STAR (I said tropospheric, not lower tropospheric).

As you noted, STAR doesn't measure in the TLT. If you want to start a TMT discussion, that's fine. Measurement becomes more difficult with altitude as atmospheric density declines, so there's reason to be skeptical of the upper air data.

There is RATPAC. There are RICH, RAOBCORE, IUK and several others that also use sonde data. Some of them use entirely different data than RATPAC (wind data instead of temperature data) and are thus entirely independent.

Just like UAH/RSS/STAR utilize the same MSU/AMSU data to interpret temperatures, the radiosonde aggregations rely on the same data too. Radiosondes are launched around the world at 00 UTC and 12 UTC to assist in weather modeling/forecasting, and these radiosondes measure multiple phenomena at once (temps/dewpoint/wind/etc).

This is where the data comes from. There are no radiosondes being launched just for RATPAC, or just for RAOBCORE. The raw data is publicly available on the NOAA site, for goodness sakes.

This is easily 6+, probably 10+, sources using independent methods and/or independent data. RSS and UAH show the least warming out of all of them. This is the definition of an outlier.

Few of these are operationally maintained. Again, these are just different interpretations of the same data with the same problems/shortcomings.

We're not talking about 15-20 years here. We're talking about 36 years. Over 36 years it is extremely unlikely the lower troposphere would warm slower than the surface. You haven't provided a shred of evidence to the contrary. All you're doing is trying to muddy the waters.

Looks like I'm going to have to do your research for you, again.

The (theoretical) TLT-surface relationship is based on modeling depicting an increase in both latent heat release and molecular line broadening in the mid/upper troposphere as surface evaporation/H2O content increases with AGW. Relatively speaking, latent heat release in the upper troposphere is modeled to increase at a faster rate than it will near the surface.

However, the problem is, there are many factors that can counteract and/or invert this relationship, assuming we even understand the macroscale feedbacks that govern it to begin with.

For example, this fickle and largely theoretical relationship can be contracted or reversed by a reduction in near-surface winds. This would reduce surface evaporation and subsequent latent heat release in the troposphere, and would lead to accelerated warming of the oceans and planetary surface. This can be accomplished through either a weakening of the equator-pole thermal gradient or a broadening/weakening of the Hadley Cells.
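For reference, the textbook saturated-adiabatic lapse rate underlying that amplification argument; as the saturation mixing ratio \(r_s\) climbs with warming, \(\Gamma_m\) drops and the upper troposphere warms relative to the surface:

\[
\Gamma_m = \Gamma_d \,\frac{1 + \dfrac{L_v r_s}{R_d T}}{1 + \dfrac{L_v^2 r_s}{c_p R_v T^2}},
\qquad \Gamma_d = \frac{g}{c_p} \approx 9.8\ \mathrm{K/km}.
\]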


That's not a spatial homogenization procedure, it's a spatial normalization procedure accounting for the uneven distribution of stations. It's not homogenization unless they're changing or extrapolating the data itself to account for factors external to the dataset or for errors in the data. They're not extrapolating or homogenizing data between stations like GISS/NCDC do.

Do you understand what homogenization means/implies? I said they're doing no spatial data homogenization. Obviously they're weighting/normalizing the data. That's something different.

 

 

 

Maybe you should look up the definition of homogenization. Homogenization simply means, very broadly, the removal of non-climatic signals. It does not mean anything specific, such as infilling the data. Any attempt to accurately spatially weight the data would be a spatial homogenization. What RATPAC does is spatial homogenization.

 

Moreover, you said specifically that they simply average the 85 stations. That is false.


Maybe you should look up the definition of homogenization. Homogenization simply means, very broadly, the removal of non-climatic signals. It does not mean anything specific, such as infilling the data. Any attempt to accurately spatially weight the data would be a spatial homogenization. What RATPAC does is spatial homogenization.

Moreover, you said specifically that they simply average the 85 stations. That is false.

Exactly. The difference is RATPAC isn't changing or removing anything from the data. They're normalizing the areal plane into large grid boxes to account for uneven station distribution. The data itself is not extrapolated through or between grids on a gradient, as GISS et al do. The data is still not necessarily spatially representative and may not depict regionally biased climate change well.

The radiosondes themselves can also suffer from inhomogeneity and diurnal contamination. If anything, they're more insufferable than the satellite data on aggregation. They require so much more quality control than the satellite data... each individual radiosonde needs to be tuned/adjusted for a slew of potential contaminations. When I was a tech intern at NOAA/CPC, I had to apply corrections to several sondes for diurnal contamination. The entire procedure is strenuous and riddled with uncertainty, even when the contamination is obvious.


Exactly. The difference is RATPAC isn't changing or removing anything from the data. They're normalizing the areal plane into large grid boxes to account for uneven station distribution. The data itself is not extrapolated through or between grids on a gradient, as GISS et al do. The data is still not necessarily spatially representative and may not depict regionally biased climate change well.

The radiosondes themselves can also suffer from inhomogeneity and diurnal bias. If anything, they're more insufferable than the satellite data on aggregation. They require so much more quality control than the satellite data... each individual radiosonde needs to be tuned/adjusted for a slew of potential contaminations.

 

In other words, your original statement that they average the 85 stations was false. They spatially homogenize the data by creating regional gridboxes and averaging the gridboxes. The method of spatial homogenization is different than GISS, but is simply another way of removing some of the spatial inhomogeneities.


In other words, your original statement that they average the 85 stations was false. They spatially homogenize the data by creating regional gridboxes and averaging the gridboxes.

That's not homogenizing, that's normalizing. Datasets like GISS and NCDC actually homogenize/extrapolate the gridded data through and between grids for continuity and spatial representation.

Call it whatever you want at this point. I don't really care, and I'm tired of debating this.


That's not homogenizing, that's normalizing. Datasets like GISS and NCDC actually homogenize/extrapolate the gridded data through and between grids for continuity and spatial representation.

Call it whatever you want at this point. I don't really care, and I'm tired of debating this.

 

You said that they simply average all 85 stations and there is no area weighting. That is false. Why won't you acknowledge this?

 

Second of all, it is homogenizing. Homogenizing is, broadly, the removal of non-climatic signals. Any attempt to area-weight the data is the removal of a non-climatic signal. I don't really care about the terminology either. But when you use big words you should know what they mean. Most of your posts are an abuse of the English language.

 

It's actually not "normalizing" either. Look up the definition of normalizing. You can call it spatial homogenization, or an area weighted average. But the term 'normalizing' is not descriptive at all in this case and appears to be an unnecessary use of a big word.


Weighting grid points to account for the areal coverage of the grid point is absolutely not called "normalization". I do not know for sure that it is called "homogenization", but I believe that it is.

 

Normalization is, at its most basic, scaling variables by their standard deviations to allow for statistical intercomparisons.

 

Weighting is a simple form of extrapolation. It is mathematically identical to extrapolating a single value to a larger continuous region, doing that for each data point, and then taking an average (via integration) over the whole globe.
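Spelled out, the two terms being argued over (standard definitions, nothing specific to these datasets): normalization rescales a variable by its spread,

\[
z_i = \frac{x_i - \bar{x}}{\sigma_x},
\]

while area weighting is exactly the piecewise-constant integral described above,

\[
\bar{T} = \frac{1}{A_\oplus}\int_{\mathrm{globe}} T\,dA \;\approx\; \sum_i \frac{A_i}{A_\oplus}\,T_i = \sum_i w_i T_i,
\]

each value \(T_i\) being held constant over its assigned area \(A_i\).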

