StudentOfClimatology

Members
  • Posts

    4,124
  • Joined

  • Last visited

Everything posted by StudentOfClimatology

  1. No, you misinterpreted me because you're misinterpreting the definition of homogenization. I didn't say they did no gridding; rather, I said there is no homogenization procedure (spatial or to the grids) carried out. Merely placing data into grid cells to account for the uneven spatial distribution of measurement stations is not considered "homogenization" because there is no faulty, unrepresentative data in the aggregate itself. The areal plane of measurement is being extrapolated; the data itself is not being changed in any way. I said they do no homogenization of the grid network, or in other words, no smoothing/extrapolation of data between/within the grid cells to reflect variability over distance. See below: "Average it out", as in, over distance in equally-sized, full-panning grid boxes. I don't know why I even have to explain this.
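The distinction being drawn here, gridding without homogenization, can be sketched in a few lines. This is a minimal illustration with made-up station values, not RATPAC's actual procedure: anomalies are binned into grid boxes, averaged within each box, and the occupied boxes are combined with cosine-of-latitude area weights. Empty boxes are simply ignored rather than infilled or extrapolated into.

```python
import math

def grid_box_average(stations, box_size=10.0):
    """Average station anomalies into lat/lon grid boxes, then combine
    the occupied boxes with cos(latitude) area weights. Empty boxes are
    ignored -- no interpolation or infilling is done, which is the
    point: gridding alone is not homogenization.

    stations: list of (lat, lon, anomaly) tuples (hypothetical data).
    """
    boxes = {}
    for lat, lon, anom in stations:
        key = (math.floor(lat / box_size), math.floor(lon / box_size))
        boxes.setdefault(key, []).append(anom)

    weighted_sum = 0.0
    weight_total = 0.0
    for (ilat, _), anoms in boxes.items():
        box_mean = sum(anoms) / len(anoms)      # within-box average
        center_lat = (ilat + 0.5) * box_size    # box-center latitude
        w = math.cos(math.radians(center_lat))  # area weight
        weighted_sum += w * box_mean
        weight_total += w
    return weighted_sum / weight_total

# Two stations crowd one mid-latitude box; they count once via the box mean.
demo = [(42.0, -71.0, 1.0), (43.0, -72.0, 3.0), (5.0, 10.0, 0.0)]
print(round(grid_box_average(demo), 3))
```

Note that nothing in the data is altered; the clustering of stations is the only thing being corrected for.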
  2. I appreciate the heads up. I wasn't aware that I was coming off negatively, but I'll try to turn down the dial in the future.
  3. I respect and enjoy learning from everyone here, minus two, hailman being one of them. Nothing too personal. I love you, skier, mallow, ORH, and TGW, regardless of the disagreements we have. I'm sure it's not mutual, but that doesn't matter to me. The fact that I can discuss climate science with others who are just as into it as me is as rewarding as it gets.
  4. Please do. I'd rather not have to reply to you, no offense.
  5. Thanks for the heads up. Yeah I'm pretty sure geographic normalization is a real term (it was taught in one of my undergrad paleo/geo classes years ago). Definitely not the same thing as a statistical normalization.
  6. Where did I say they did no areal weighting? What I said was that they don't do anything in the way of spatial/grid homogenization, like GISS/NCDC et al do to make the data representative of reality. The RATPAC data is just calculated in a field of large grid boxes. How is that homogenizing? You're not removing anything from the data or measurements. Homogenizing is the process of removing faulty/contaminated data from individual radiosondes or stations due to factors internal or external to the instrument itself. Gridding the data isn't "homogenizing" it.
  7. I'm pretty sure it's not homogenization. I thought geographic normalization was a form of weighting, just sort of reversed? I believe I've used both terms interchangeably, but if I'm wrong that's fine (I'm not a geography major). The point I was making is unchanged. http://dauofu.blogspot.com/2013/02/normalizing-geographic-data.html?m=1
  8. That's not homogenizing, that's normalizing. Datasets like GISS and NCDC actually homogenize/extrapolate the gridded data through and between grids for continuity and spatial representation. Call it whatever you want at this point. I don't really care, and I'm tired of debating this.
  9. Exactly. The difference is RATPAC isn't changing or removing anything from the data. They're normalizing the areal plane into large grid boxes to account for uneven station distribution. The data itself is not extrapolated through or between grids on a gradient, as GISS et al do. The data is still not necessarily spatially representative and may not depict regionally biased climate change well. The radiosondes themselves can also suffer from inhomogeneities and diurnal contamination. If anything, they're more insufferable to aggregate than the satellite data. They require so much more quality control than the satellite data; each individual radiosonde needs to be tuned/adjusted for a slew of potential contaminations. When I was a tech intern at NOAA/CPC, I had to apply corrections to several sondes for diurnal contamination. The entire procedure is strenuous and riddled with uncertainty, even when the contamination is obvious.
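The diurnal correction mentioned above can be illustrated schematically. This is not NOAA's actual adjustment procedure, just a toy sketch of the idea with hypothetical paired launches: estimate the mean 12UTC-minus-00UTC offset and remove it so both launch times share a common baseline. Real corrections also account for instrument changes, solar elevation, and breakpoints.

```python
def remove_diurnal_offset(t00, t12):
    """Schematic diurnal adjustment (hypothetical data, not NOAA's
    method): estimate the mean 12UTC-minus-00UTC offset across paired
    launches and subtract it from the 12UTC series, putting both
    launch times on a common baseline."""
    offset = sum(b - a for a, b in zip(t00, t12)) / len(t00)
    adjusted_t12 = [b - offset for b in t12]
    return adjusted_t12, offset

# Hypothetical paired soundings (degC) from one station:
adjusted, offset = remove_diurnal_offset([10.0, 11.0, 12.0],
                                         [11.5, 12.5, 13.5])
```

Even in this toy form, the estimated offset absorbs any real day/night signal along with the contamination, which hints at why the real procedure is so uncertainty-ridden.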
  10. That's not a spatial homogenization procedure, it's a spatial normalization procedure accounting for the uneven distribution of stations. It's not a homogenization unless they're changing or extrapolating the data itself to account for factors either external to the dataset or errors in the data itself. They're not extrapolating or homogenizing data between stations like GISS/NCDC do. Do you understand what homogenization means/implies? I said they're doing no spatial data homogenization. Obviously they're weighting/normalizing the data. That's something different. There are older papers that reached this conclusion regarding older versions of UAH and RSS. There are no peer reviewed studies critiquing the current versions of these datasets in the TLT layer, because any errors in the TLT data appear to be minor. As you noted, STAR doesn't measure in the TLT. If you want to start a TMT discussion, that's fine. Measurement becomes more difficult with altitude as atmospheric density declines, so there's reason to be skeptical of the upper air data. Just like UAH/RSS/STAR utilize the same MSU/AMSU data to interpret temperatures, the radiosonde aggregations rely on the same data too. Radiosondes are launched around the world at 00UTC and 12UTC to assist in weather modeling/forecasting, and these radiosondes measure multiple phenomena at once (temps/dewpoint/wind/etc). This is where the data comes from. There are no radiosondes being launched just for RATPAC, or just for RAOBCORE. The raw data is publicly available on the NOAA site, for goodness' sake. Few of these are operationally maintained. Again, these are just different interpretations of the same data with the same problems/shortcomings. Looks like I'm going to have to do your research for you, again.
The (theoretical) TLT-surface relationship is based on modeling depicting an increase in both latent heat release and molecular line broadening in the mid/upper troposphere as surface evaporation/H2O content increases with AGW. Relatively speaking, latent heat release in the upper troposphere is modeled to increase at a faster rate than it will near the surface. However, the problem is, there are many factors that can counteract and/or invert this relationship, assuming we even understand the macroscale feedbacks that govern it to begin with. For example, this fickle and largely theoretical relationship can be counteracted or reversed by a reduction in near-surface winds. This would reduce surface evaporation and subsequent latent heat release in the troposphere, and would lead to accelerated warming of the oceans and planetary surface. This can be accomplished through either a weakening of the equator-pole thermal gradient or a broadening/weakening of the Hadley Cells.
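The amplification being debated here is usually summarized as a ratio of linear trends. A minimal sketch, with made-up anomaly series standing in for the surface and TLT records (an amplification factor above 1 means the TLT is warming faster, as the modeling described above predicts; the factor dropping below 1 would be the inversion discussed):

```python
def ols_trend(series):
    """Least-squares slope per step of an evenly spaced series."""
    n = len(series)
    xm = (n - 1) / 2.0
    ym = sum(series) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(series))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

# Hypothetical annual anomalies (degC); these numbers are invented
# purely to show the calculation, not taken from any dataset.
surface = [0.010 * i for i in range(30)]   # 0.10 degC/decade
tlt     = [0.012 * i for i in range(30)]   # 0.12 degC/decade
amplification = ols_trend(tlt) / ols_trend(surface)
```

On real data the two trends carry their own uncertainties and decadal variability, so the ratio over any sub-centennial window is noisy, which is consistent with the "fickle relationship" point above.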
  11. Wrong. What they're describing is a basic procedure to account for the fact that the stations are not equally distributed across the globe. The areas that are not being measured have zero weight in the data because there's no homogenization procedure to account for regional differentials like there is in GISS et al. If you think this is an adequate measurement procedure, you're off your rocker. If you think the MSU data is bad, then this is 10X worse. Apparently you lack the ability to read a paper's abstract without misinterpreting it. Try reading it again. The only "homogenization" being done is a distributive equalization procedure to normalize the sample across the globe. There are homogenizations done for diurnal & aerosol contamination, but these are difficult to correct for. Actually, I read the entire paper. Apparently you didn't, because you're mischaracterizing the nature of the RATPAC adjustments. There are three TLT-based datasets, and four that extend into the TMT. Any other datasets in reference (IGRA and LKS) are not operational in nature. The sonde data that RATPAC uses is the same sonde data that IGRA and LKS use. They just use different homogenization procedures to account for diurnal contamination, etc. You can be such a blockhead sometimes. The surface datasets do not measure in the TLT, and the theoretical TLT-surface relationship is quite fickle. A number of forcings and/or potential feedbacks can invert the relationship for extended periods of time. Certainly, this isn't a physically plausible reason to favor one dataset over the other.
  12. Actually, I made a mistake. RATPAC does nothing in the way of gridding or spatial homogenization at all. They merely take the data from the 85 stations and average it out. Wow, that's just an awful way to go about this. (Keep in mind, this is a bit old, from when UAH and RSS were lacking homogeneity, unlike now). http://www.met.reading.ac.uk/~swsshine/sparc4/Lanzante_SPARCTabard.ppt Here's the station map. Look how much of the Pacific and Southern Oceans are just left blank. Hilarious.
  13. Furthermore, RATPAC only uses 85 stations, most of which are land-based and in the NH. http://www1.ncdc.noaa.gov/pub/data/images/Ratpac-datasource.docx I don't think people realize just how much homogenization needs to be done just to correct for diurnal bias, aerosol contamination, and sparse spatial coverage. This is not a rigorous dataset.
  14. What the surface data is doing is irrelevant. RATPAC is diverging from all other TLT datasets because it's not measuring over large portions of the globe, particularly over the equatorial & Southern Hemispheric oceans. Much of Africa and Eurasia (Russia included) is also blank before gridding and extrapolation take place. The TLT data is a 3D, depth-based representation of the lower tropospheric boundary. The surface data is a 2D representation of the skin temperature alone. Two totally different depictions.
  15. RATPAC has poor spatial coverage and weak resolution even after the data is gridded. Huge areas of open ocean and uninhabited landmass are left blank before the gridding and homogenization process. A vast majority of radiosondes are launched over North America, Europe, and E/SE Asia. That's about 15-20% of the globe. The surface datasets shouldn't even be compared to the TLT data because they're not measuring there, and the two domains can diverge significantly on a decadal scale. They have no deterministic value in the TLT.
  16. You hit the nail on the head with this. There are three LT-based datasets, and four that measure into the MT. Attempting to use the surface data to diagnose *potential* error in the LT data is not a physically sound approach.
  17. Not necessarily. There's a lot more involved than just water vapor feedback when it comes to TLT warming relative to the surface, at least on a sub-centennial timescale. Changes to the BDC, Hadley Cells, surface wind speeds, or cloud height(s) can significantly amplify or dampen trends in the TLT for periods up to and over 20 years. Sure, but there's no way you can gauge TLT sensitivity using just 35 years of data. There's lots of inherent variability underlying the AGW signal in the TLT, more so than at the surface. Eventually the surface and TLT will meet up somewhere. How many of them are operational, mainstream datasets? I can think of four, three of which are applicable in the TLT.
  18. STAR is TMT only. There are only three viable datasets that measure in the TLT, and two happen to agree very well with one another. The dataset that happens to diverge (RATPAC) requires substantially more quality control and spatial extrapolation. Uh, what? This is not necessarily true at all, particularly on a sub-centennial resolution. A slight change in cloud height or relative latent heat release with time can invert the surface-TLT relationship, and vice versa. You mean the "evidence" that you're inventing on the fly? Sure.
  19. Sondes aren't sensors; sondes contain sensors. He said RATPAC uses "several sensors" and the AMSU/MSU networks use "one sensor". That's laughably false. We're debating stuff that needn't be debated.
  20. Yeah, seven radiometers, individually calibrated based on unique physical specifications. How does that square with what you said earlier?
  21. Of course. Each radiometer is calibrated separately based on its orbital parameters and observed bias(es), if any.
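Per-instrument calibration matters because the records from successive radiometers have to be spliced into one continuous series. A zeroth-order sketch of that merge step, with hypothetical records (real intersatellite calibration also handles drift, diurnal sampling, and time-varying offsets, none of which this captures):

```python
def splice_records(rec_a, rec_b, overlap_len):
    """Schematic inter-satellite merge (hypothetical data): estimate
    the mean offset between two radiometer records over their shared
    overlap period, shift the newer record onto the older record's
    baseline, then splice the non-overlapping part.

    rec_a: older record; its last overlap_len points coincide in time
    with the first overlap_len points of rec_b.
    """
    a_ov = rec_a[-overlap_len:]
    b_ov = rec_b[:overlap_len]
    offset = sum(b - a for a, b in zip(a_ov, b_ov)) / overlap_len
    adjusted_b = [x - offset for x in rec_b]
    return rec_a + adjusted_b[overlap_len:]

# Hypothetical anomalies (degC) with a 0.6 degC instrument offset:
merged = splice_records([0.0, 0.1, 0.2], [0.7, 0.8, 0.9], overlap_len=2)
```

The overlap period is what makes the separate calibrations commensurable; without it, an inter-instrument bias would masquerade as a trend.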
  22. Google scholar. Need me do this for you, too?
  23. The satellite datasets are not long enough to be used for ECS/TCR calculations at this time. I'd use an aggregation of all surface station datasets to determine ECS/TCR at the surface. Sensitivity in the TLT/TMT cannot be determined using the surface networks.
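Why record length matters for trend-based sensitivity estimates can be made concrete. Under the simplifying assumption of independent residuals (real climate series are autocorrelated, which widens the uncertainty further), the standard error of an OLS trend shrinks like n^(-3/2), so a ~35-year record carries a far wider trend uncertainty than a century-scale one:

```python
import math

def trend_se(resid_sd, n):
    """Standard error of an OLS slope for n evenly spaced points with
    independent residuals of standard deviation resid_sd. Ignores
    autocorrelation, which makes real uncertainties larger still."""
    sxx = n * (n * n - 1) / 12.0  # sum of squared deviations of 0..n-1
    return resid_sd / math.sqrt(sxx)

# Hypothetical interannual noise of 0.1 degC about the trend line:
ratio = trend_se(0.1, 35) / trend_se(0.1, 100)  # ~4.8x wider at 35 yr
```

The 0.1 degC residual scale is invented for illustration; the point is only the scaling with record length, which is why the ~35-year satellite era was considered short for ECS/TCR work.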
  24. It's RATPAC that's diverging from UAH/RSS. Again, the surface datasets and satellites do not measure within the same domain(s), so you can't compare them on the scale you are. Different physical processes govern temperature trend(s) between these two boundaries on both interdecadal and decadal timescales. So when it comes to the lower troposphere, you have three viable datasets, and it so happens that the two most spatially adequate ones agree with one another.