Posts posted by StudentOfClimatology

  1. Right here. You said they "merely take the data from the 85 stations and average it out." You also said they "do nothing in the way of gridding." Both statements are blatantly false. They use the data to define trends for 36 grids and then do an area-weighted average of those 36 grids.

    No, you misinterpreted me because you're misinterpreting the definition of homogenization. I didn't say they did no gridding; rather, I said there is no homogenization procedure (spatial or to the grids) carried out. Merely placing data into grid cells to account for the uneven spatial distribution of measurement stations is not considered "homogenization" because there is no faulty, unrepresentative data in the aggregate itself. The areal plane of measurement is being extrapolated... the data itself is not being changed in any way.

    I said they do no homogenization of the grid network, or in other words, a smoothing/extrapolation of data between/within the grid cells to reflect variability over distance.

    See below:

    What I said was that they don't do anything in the way of spatial/grid homogenization, like GISS/NCDC et al do to make the data representative of reality. The RATPAC data is just calculated in a field of large grid boxes

    Actually, I made a mistake. RATPAC does nothing in the way of gridding or spatial homogenization at all. They merely take the data from the 85 stations and average it out. Wow... that's just an awful way to go about this.

    "Average it out", as in, over distance in equally-sized, full-panning grid boxes. I don't know why I even have to explain this.
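    The area-weighted gridbox averaging described above can be sketched in a few lines. This is a minimal illustration with made-up anomaly values and six latitude bands rather than RATPAC's 36 boxes; it is not RATPAC's actual code, just the weighting logic being argued about.

```python
import math

# Hypothetical grid-box anomalies (deg C), one per latitude band.
# Six bands for brevity; the thread describes 36 boxes, but the
# area-weighting logic is identical.
band_centers = [-75, -45, -15, 15, 45, 75]   # band-center latitudes
anomalies    = [0.05, 0.10, 0.20, 0.25, 0.15, 0.60]

# A band's surface area scales with the cosine of its central latitude,
# so high-latitude boxes contribute less to the global mean.
weights = [math.cos(math.radians(lat)) for lat in band_centers]

global_mean = sum(w * a for w, a in zip(weights, anomalies)) / sum(weights)
print(round(global_mean, 3))  # -> 0.202
```

    Note that nothing here alters the input values; the weights only change how they are combined in the mean, which is the distinction being drawn above between weighting and homogenization.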
  2. Who would you care to reply to? There are not that many active posters on here.

    I respect and enjoy learning from everyone here, minus two, hailman being one of them. Nothing too personal.

    I love you, skier, mallow, ORH, and TGW, regardless of the disagreements we have. I'm sure it's not mutual, but that doesn't matter to me. The fact that I can discuss climate science with others who are just as into it as me is as rewarding as it gets.

  3. I've never heard the term "geographic normalization" before. It's possible that it is a term used in geography, though I suspect the blog you linked uses the term "normalization" merely as a descriptive term (rather than a mathematical/scientific one). Either way, in the field of meteorology/climatology, as far as I'm aware, normalization refers to the statistical definition.

    Thanks for the heads up. Yeah I'm pretty sure geographic normalization is a real term (it was taught in one of my undergrad paleo/geo classes years ago). Definitely not the same thing as a statistical normalization.

  4. You said that they simply average all 85 stations and there is no area weighting. That is false. Why won't you acknowledge this?

    Where did I say they did no areal weighting? What I said was that they don't do anything in the way of spatial/grid homogenization, like GISS/NCDC et al do to make the data representative of reality. The RATPAC data is just calculated in a field of large grid boxes.

    Second of all, it is homogenizing. Homogenizing is, broadly, the removal of non-climatic signals. Any attempt to area weight the data is the removal of a non-climatic signal.

    How is that homogenizing? You're not removing anything from the data or measurements. Homogenizing is the process of removing faulty/contaminated data from individual radiosondes or stations due to factors internal or external to the instrument itself. Gridding the data isn't "homogenizing" it.

  5. Weighting grid points to account for the areal coverage of the grid point is absolutely not called "normalization". I do not know for sure that it is called "homogenization", but I believe that it is.

    Normalization is, at its most basic, scaling variables by their standard deviations to allow for statistical intercomparisons.

    Weighting is a simple form of extrapolation. It is mathematically identical to extrapolating a single value to a larger continuous region, doing that for each data point, and then taking an average (via integration) over the whole globe.

    I'm pretty sure it's not homogenization.

    I thought geographic normalization was a form of weighting, just sort of reversed? I believe I've used both terms interchangeably, but if I'm wrong that's fine (I'm not a geography major). The point I was making is unchanged.

    http://dauofu.blogspot.com/2013/02/normalizing-geographic-data.html?m=1
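    The distinction being argued here can be shown concretely. Below, hypothetical station anomalies are (a) statistically normalized, which rescales the values themselves, and (b) area-weighted, which leaves the values untouched and only changes their contribution to the mean. All numbers are invented for illustration.

```python
import math
import statistics

readings = [0.1, 0.3, 0.2, 0.6, 0.4]   # hypothetical station anomalies (deg C)
lats     = [10, 35, 50, 65, 80]        # hypothetical station latitudes

# Statistical normalization: scale by the standard deviation so series
# with different variances can be compared. The values themselves change.
mu, sigma = statistics.mean(readings), statistics.stdev(readings)
z_scores = [(r - mu) / sigma for r in readings]

# Area weighting: each value is left untouched; only its contribution to
# the mean is scaled by the area it represents (~cos of latitude here).
w = [math.cos(math.radians(lat)) for lat in lats]
weighted_mean = sum(wi * r for wi, r in zip(w, readings)) / sum(w)
print(round(weighted_mean, 4))  # -> 0.2615
```

    The two operations answer different questions: the z-scores put the series on a common statistical footing, while the weighted mean corrects for uneven spatial sampling without touching any reading.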

  6. In other words, your original statement that they average the 85 stations was false. They spatially homogenize the data by creating regional gridboxes and averaging the gridboxes.

    That's not homogenizing, that's normalizing. Datasets like GISS and NCDC actually homogenize/extrapolate the gridded data through and between grids for continuity and spatial representation.

    Call it whatever you want at this point. I don't really care, and I'm tired of debating this.

  7. Maybe you should look up the definition of homogenization. Homogenization simply means, very broadly, the removal of non-climatic signals. It does not refer to anything as specific as infilling the data. Any attempt to accurately spatially weight the data would be a spatial homogenization. What RATPAC does is spatial homogenization.

    Moreover, you said specifically that they simply average the 85 stations. That is false.

    Exactly. The difference is RATPAC isn't changing or removing anything from the data. They're normalizing the areal plane into large grid boxes to account for uneven station distribution. The data itself is not extrapolated through or between grids on a gradient, as GISS et al do. The data is still not necessarily spatially representative and may not depict regionally biased climate change well.

    The radiosondes themselves can also suffer from inhomogeneity and diurnal contamination. If anything, they're more insufferable than the satellite data in aggregate. They require so much more quality control than the satellite data... each individual radiosonde needs to be tuned/adjusted for a slew of potential contaminations. When I was a tech intern at NOAA/CPC, I had to apply corrections to several sondes for diurnal contamination. The entire procedure is strenuous and riddled with uncertainty, even when the contamination is obvious.

  8. Wrong. This is a spatial homogenization technique. It's a simple one and it ensures that no region of the globe is weighted too heavily due to having a higher number of stations.

    That's not a spatial homogenization procedure, it's a spatial normalization procedure accounting for the uneven distribution of stations. It's not a homogenization unless they're changing or extrapolating the data itself to account for factors either external to the dataset or errors in the data itself. They're not extrapolating or homogenizing data between stations like GISS/NCDC do.

    You said specifically that they simply average the 85 stations together. This is a blatant lie. Everybody here can see that, so who are you trying to impress? They broke the globe into 36 gridboxes and calculated an area-weighted average of those 36 boxes.

    Do you understand what homogenization means/implies? I said they're doing no spatial data homogenization. Obviously they're weighting/normalizing the data. That's something different.

    There are several studies using MSU data that provide results different (warmer) than UAH or RSS.

    There are older papers that reached this conclusion regarding older versions of UAH and RSS. There are no peer reviewed studies critiquing the current versions of these datasets in the TLT layer, because any errors in the TLT data appear to be minor.

    There is STAR (I said tropospheric not lower tropospheric).

    As you noted, STAR doesn't measure in the TLT. If you want to start a TMT discussion, that's fine. Measurement becomes more difficult with altitude as atmospheric density declines, so there's reason to be skeptical of the upper air data.

    There is RATPAC. There is RICH, RAOBCORE, IUK and several others that also use sonde data. Some of them use entirely different data than RATPAC (wind data instead of temperature data) and are thus entirely independent.

    Just like UAH/RSS/STAR utilize the same MSU/AMSU data to interpret temperatures, the radiosonde aggregations rely on the same data too. Radiosondes are launched around the world at 00UTC and 12UTC to assist in weather modeling/forecasting, and these radiosondes measure multiple phenomena at once (temps/dewpoint/wind/etc).

    This is where the data comes from. There are no radiosondes being launched just for RATPAC, or just for RAOBCORE. The raw data is publicly available on the NOAA site, for goodness' sake.

    This is easily 6+ probably 10+ sources using independent methods and/or independent data. RSS and UAH show the least warming out of all of them. This is the definition of outlier.

    Few of these are operationally maintained. Again, these are just different interpretations of the same data with the same problems/shortcomings.

    We're not talking about 15-20 years here. We're talking about 36 years. Over 36 years it is extremely unlikely the lower troposphere would warm slower than the surface. You haven't provided a shred of evidence to the contrary. All you're doing is trying to muddy the waters.

    Looks like I'm going to have to do your research for you, again.

    The (theoretical) TLT-surface relationship is based on modeling depicting an increase in both latent heat release and molecular line broadening in the mid/upper troposphere as surface evaporation/H2O content increases with AGW. Relatively speaking, latent heat release in the upper troposphere is modeled to increase at a faster rate than it does near the surface.

    However, the problem is that there are many factors that can counteract and/or invert this relationship, assuming we even understand the macroscale feedbacks that govern it to begin with.

    For example, this fickle and largely theoretical relationship can be contracted or reversed by a reduction in near-surface winds. This would reduce surface evaporation and subsequent latent heat release in the troposphere, and would lead to accelerated warming of the oceans and planetary surface. This can occur through either a weakening of the equator-pole thermal gradient or a broadening/weakening of the Hadley cells.

  9. First of all, 85 stations over such a long period is more than sufficient and the areal coverage looks reasonable. Accurate global averages have been constructed with far fewer stations.

    Second, you are incorrect in your assertion that they do not do gridding or spatial homogenization. A very quick read of the RATPAC paper reveals their spatial homogenization procedure:

    Wrong. What they're describing is a basic procedure to account for the fact that the stations are not equally distributed across the globe. The areas that are not being measured have zero weight in the data because there's no homogenization procedure to account for regional differentials like there is in GISS et al.

    If you think this is an adequate measurement procedure, you're off your rocker. If you think the MSU data is bad, then this is 10X worse.

    All you are succeeding at is an obviously biased attempt to make false accusations and cheap shots at an otherwise reputable peer-reviewed source. The spatial homogenization technique had its own heading in the RATPAC paper!

    Apparently you lack the ability to read a paper's abstract without misinterpreting it. Try reading it again. The only "homogenization" being done is a distributive equalization procedure to normalize the sample across the globe. There are homogenizations done for diurnal & aerosol contamination, but these are difficult to correct for.

    You didn't even bother to skim the paper before making this false accusation! Where is your credibility? Clearly you are on a tirade and don't care what the truth is at all or you would have actually read the paper you are critiquing.

    Actually, I read the entire paper. Apparently you didn't because you're mischaracterizing the nature of the RATPAC adjustments.

    The AR5 assesses the uncertainty in RATPAC as similar to that of UAH and RSS (+/- .1C/decade). There are also a number of other independent radiosonde LT sources that use very different data and/or methods and conclude even greater warming than RATPAC finds. UAH and RSS show the least warming out of at least 6+ different independent tropospheric sources, some of which are MSU based and some of which are radiosonde based.

    There are three TLT based datasets, and four that extend into the TMT. Any other datasets in reference (IGRA and LKS) are not operational in nature. The sonde data that RATPAC uses is the same sonde data that IGRA and LKS use. They just use different homogenization procedures to account for diurnal contamination, etc.

    They are also fairly inconsistent with surface data, which is generally considered more reliable, and with theoretical expectations of how the troposphere warms relative to the surface over 35+ years. RSS and UAH have assessed uncertainties of +/- .1C/decade. This is the definition of outlier.

    You can be such a blockhead sometimes. The surface datasets do not measure in the TLT, and the theoretical TLT-surface relationship is quite fickle. A number of forcings and/or potential feedbacks can invert the relationship for extended periods of time.

    Certainly, this isn't a physically plausible reason to favor one dataset over the other.

    Actually, I made a mistake. RATPAC does nothing in the way of gridding or spatial homogenization at all. They merely take the data from the 85 stations and average it out. Wow... that's just an awful way to go about this.

    (Keep in mind, this a bit old/when UAH and RSS were lacking homogeneity, unlike now).

    http://www.met.reading.ac.uk/~swsshine/sparc4/Lanzante_SPARCTabard.ppt

    Here's the station map. Look how much of the Pacific and Southern Oceans are just left blank. Hilarious.

    [Image: RATPAC radiosonde station map]

  11. And yet it very closely matches the 60 year trend of all the surface datasets. Why is that? Resolution is not as large of a factor over a long period. This has been explained many times.

    What the surface data is doing is irrelevant.

    RATPAC is diverging from all other TLT datasets because it's not measuring over large portions of the globe, particularly over the equatorial & Southern Hemispheric oceans. Much of Africa, Russia, and Eurasia are also blank before gridding and extrapolation takes place.

    The TLT data is a 3D, multi-domain, depth-based representation of the lower tropospheric boundary. The surface data is a 2D representation of the skin temperature alone. They are two totally different depictions.

  12. The answer is in the uncertainty defined by each dataset. And it's a fact that surface datasets have lower uncertainties than their satellite counterparts in their respective domains. That, coupled with the fact that humans live at the surface, inherently makes the surface datasets more relevant and accurate than the MSU products for measuring global warming, IMO.

    While there is no magic bullet, the fact that RATPAC accurately emulates the sfc datasets should give at least a bit more confidence that it's probably a relatively accurate dataset aloft. I'm not sure why that's in dispute. Remember, RSS and UAH use the same equipment and the same sensors to measure temperature. It's very possible that both are very much in need of orbital drift correction (among others). RATPAC is an independent source of hundreds of sondes unrelated to the GHCN v3 datasets.

    RATPAC has poor spatial coverage and a weak resolution even after the data is gridded. Huge areas of open ocean and uninhabited landmass are left blank before the gridding and homogenization process. The vast majority of radiosondes are launched over North America, Europe, and E/SE Asia. That's about 15-20% of the globe.

    The surface datasets shouldn't even be compared to the TLT data because they're not measuring there, and the two domains can diverge significantly on a decadal scale. They have no deterministic value in the TLT.

  13. This is not a fair statement. How many sources do we have actually measuring LT temps? And yet you claim the satellite sources are "outliers".

    Being inconsistent with the theory does not make a source an outlier, as Will explained above.

    You hit the nail on the head with this.

    There are three LT-based datasets, and four that measure into the MT. Attempting to use the surface data to diagnose *potential* error in the LT data is not a physically sound approach.

  14. If the TLT trend is .13C/decade, it would imply one of two things:

    1) The surface trend is .11C/decade (following the predicted surface-TLT relationship)

    or

    2) The predicted surface-TLT relationship is wrong, the water feedback is non-existent or negative instead of very positive as expected, and climate sensitivity is much less than expected.

    In both cases, climate sensitivity is much less than expected.

    Not necessarily. There's a lot more involved than just water vapor feedback when it comes to TLT warming relative to the surface, at least on a sub-centennial timescale. Changes to the BDC, Hadley Cells, surface wind speeds, or cloud height(s) can significantly amplify or dampen trends in the TLT for periods up to and over 20 years.

    A TLT trend of .13C/decade implies low climate sensitivity no matter how you look at it.

    Sure, but there's no way you can gauge TLT sensitivity using just 35yrs of data. There's lots of inherent variability underlying the AGW signal in the TLT, more so than at the surface. Eventually the surface and TLT will meet up somewhere.

    This is false. There are many sources that use MSU data to measure TLT. UAH and RSS both require huge amounts of quality control and adjustment so I am not sure how you can say they require quantitatively less than RATPAC.

    How many of them are operational, mainstream datasets? I can think of four, three of which are applicable in the TLT.

  15. The long-term trend between RSS/UAH vs other satellite (STAR and several other peer-reviewed criticisms of UAH/RSS), RATPAC, and surface data is more than noise and is scientifically significant.

    STAR is TMT only. There are only three viable datasets that measure in the TLT, and two happen to agree very well with one another. The dataset that happens to diverge (RATPAC) requires substantially more quality control and spatial extrapolation.

    The RSS trend since 1979 is .13C/decade. This would imply a surface trend of around .11C/decade. The measured surface trend is .17C/decade. This is 50% more than implied by RSS. This is very significant scientifically and has serious implications for climate sensitivity.

    :huh:

    Uh, what? This is not necessarily true at all, particularly on a sub-centennial resolution.

    A slight change in cloud height or relative latent heat release with time can invert the surface-TLT relationship, and vice versa.

    The balance of evidence suggests UAH/RSS is in the minority, and is more prone to error given how susceptible the results are to arbitrary choices in methodology.

    You mean the "evidence" that you're inventing on the fly? Sure.
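    The arithmetic behind the ".13 implies .11" exchange above can be made explicit. The ~1.2 TLT/surface amplification ratio used below is an assumption chosen to reproduce the figures quoted in the thread, not a value taken from any of the datasets discussed.

```python
# Assumed TLT/surface amplification ratio; ~1.2 reproduces the thread's
# figures and is an assumption, not a measured value.
AMPLIFICATION = 1.2

tlt_trend    = 0.13                       # C/decade, RSS TLT trend quoted above
implied_sfc  = tlt_trend / AMPLIFICATION  # ~0.11 C/decade
measured_sfc = 0.17                       # C/decade, surface trend quoted above

excess = measured_sfc / implied_sfc - 1   # fractional gap between the two
print(round(implied_sfc, 2), round(excess, 2))  # -> 0.11 0.57
```

    A ~57% gap is consistent with the "about 50% more" characterization above; whether that gap reflects measurement error or a real departure from the modeled amplification is exactly what the two posters are disputing.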

  16. Funny you had to explain that one to him. So much bias he will say just about anything to 'win.'

    Sondes aren't sensors, sondes contain sensors.

    He said RATPAC uses "several sensors" and the AMSU/MSU networks use "one sensor". That's laughably false. We're debating stuff that needn't be debated.

  17. Sondes are sensors....

    Okay, so let's assume for a hot second you are not a skeptic. What dataset do you trust the most for empirical TCR and ECS calculations?

    The satellite datasets are not yet long enough to be used for ECS/TCR calculations. I'd use an aggregation of all surface station datasets to determine ECS/TCR at the surface. Sensitivity in the TLT/TMT cannot be determined using the surface networks.

  18. Just because a data set requires more work to complete the product doesn't mean that it isn't reliable. The measure of reliability is whether it fits other measured data sets. RATPAC does.

    It's RATPAC that's diverging from UAH/RSS.

    Again, the surface datasets and satellites do not measure within the same domain(s), so you can't compare them on the scale you are. Different physical processes govern temperature trend(s) between these two boundaries on both interdecadal and decadal timescales. So when it comes to the lower troposphere, you have three viable datasets, and it so happens that the two most spatially adequate ones agree with one another.
