Occasional Thoughts on Climate Change


donsutherland1

Recommended Posts

16 minutes ago, TheClimateChanger said:

Sorry, I don't have time to write my own posts. And ChatGPT headlines get far more engagement than any I could create.

 

27 minutes ago, Cobalt said:

You are very committed to discussing all facets of climate change, and I admire your effort. However, would you possibly be up for moving away from AI usage in your posts? I can't help but see all the hallmarks of it throughout your Twitter page. A lot of it undermines your overall message, and I'm not talking from the energy consumption standpoint, but instead just the overall strength of the rhetoric you and consequently ChatGPT use. 

Don't get me wrong, though - I get where you are coming from. But I'm just doing this as a hobby and competing against a steady stream of dis- and misinformation. There are accounts that are actually PAID big bucks just to spread climate disinformation. For an unpaid hobbyist to compete against career liars, AI is an absolute must.


10 hours ago, TheClimateChanger said:

Sorry, I don't have time to write my own posts. And ChatGPT headlines get far more engagement than any I could create.

This is simply not true. ChatGPT is a far worse science communicator than many out there. Its writing habits are also quite predictable, to a point where a still-substantial portion of the general public can sense that the vibes are “off” and stop interfacing with the posts entirely.

 

My suggestion would be to narrow the scope of your content and try writing headlines more gripping and information-rich than what ChatGPT can do. It is very much possible, and the reward is that, with enough effort, your unique posts can reach more than four times as far as derivative AI posts ever could. I guarantee it.


On 5/8/2026 at 5:22 PM, Cobalt said:

This is simply not true. ChatGPT is a far worse science communicator than many out there. Its writing habits are also quite predictable, to a point where a still-substantial portion of the general public can sense that the vibes are “off” and stop interfacing with the posts entirely.

 

My suggestion would be to narrow the scope of your content and try writing headlines more gripping and information-rich than what ChatGPT can do. It is very much possible, and the reward is that, with enough effort, your unique posts can reach more than four times as far as derivative AI posts ever could. I guarantee it.

There is actually one paper that suggests that AI-generated headlines may garner greater attention.

https://www.mdpi.com/2673-5172/5/4/110

This concerns me. My worry is that sometimes the actual nuance is lost in the AI-generated content. As a result, even as the headlines might be catchy, those headlines could be somewhat inconsistent with the content. That creates a credibility gap (which already occurs with "clickbait" headlines). I am not aware of research on that aspect of AI usage.

I remain an advocate of AI, but suggest that one should beware of its knowledge limits (pre-training) and working context. The former can lead to an inability to analyze novel situations, extreme outcomes, etc., e.g., it's no accident that physics-based models remain superior to AI when it comes to forecasting extreme events. The latter can impede the quality of the AI's output even as subsequent models have grown better at following instructions and understanding the user's goals/intent. 

Overall, AI can create stunning visualizations/infographics. I use it at work for just that purpose, e.g., creating graphics for presentation slides. I still need to verify the output, as every now and then things need to be tweaked. AI can also assess the strengths and weaknesses of arguments and help identify "blind spots." I don't believe AI should be used to generate all arguments, much less as a substitute for human judgment. That remains the case even as more advanced features emerge: greater capacity in AI coding (Claude Code, Codex, etc.) and sustained work carried out through agentic AI.

Finally, these are my opinions. Others may well disagree with me. 


On 5/8/2026 at 5:10 PM, WolfStock1 said:

Wait - so we're basically having discussion with AI, with TCC as a proxy?

No thanks.

Can we perhaps start a separate "No AI" thread?

What are you talking about? Nobody is "having a discussion with AI"... I use it for assistance in creating an engaging headline, that's all. As Don pointed out, there is, in fact, evidence that it is better at that than a human. And I can say from my personal analytics that this is certainly the case.
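For what it's worth, a claim like "my personal analytics show this" is easy to sanity-check with a few lines of code. The sketch below is purely illustrative: the numbers are made up, and the grouping into AI-assisted vs. hand-written headlines is an assumption about how one might export and label one's own post data.

```python
# Hypothetical sketch: comparing average engagement between AI-assisted
# and hand-written headlines from exported post analytics.
# All numbers below are invented for illustration.

def engagement_rate(impressions, interactions):
    """Interactions (likes, replies, reposts) per impression."""
    return interactions / impressions if impressions else 0.0

# (impressions, interactions) per post -- illustrative data only
ai_assisted = [(12000, 480), (9500, 310), (15000, 600)]
hand_written = [(8000, 180), (11000, 260), (7000, 140)]

def mean_rate(posts):
    """Average engagement rate across a list of posts."""
    rates = [engagement_rate(imp, inter) for imp, inter in posts]
    return sum(rates) / len(rates)

print(f"AI-assisted:  {mean_rate(ai_assisted):.3f}")
print(f"Hand-written: {mean_rate(hand_written):.3f}")
```

With only a handful of posts per group, a difference like this would not be statistically meaningful, which is part of the sample-size objection raised later in the thread.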


On 5/11/2026 at 12:45 PM, TheClimateChanger said:

What are you talking about? Nobody is "having a discussion with AI"... I use it for assistance in creating an engaging headline, that's all. As Don pointed out, there is, in fact, evidence that it is better at that than a human. And I can say from my personal analytics that this is certainly the case.

You said "I don't have time to write my own posts." That sounds to me like the whole post, not just the headline.


On 5/9/2026 at 8:12 PM, donsutherland1 said:

There is actually one paper that suggests that AI-generated headlines may garner greater attention.

https://www.mdpi.com/2673-5172/5/4/110

This concerns me. My worry is that sometimes the actual nuance is lost in the AI-generated content. As a result, even as the headlines might be catchy, those headlines could be somewhat inconsistent with the content. That creates a credibility gap (which already occurs with "clickbait" headlines). I am not aware of research on that aspect of AI usage.

I remain an advocate of AI, but suggest that one should beware of its knowledge limits (pre-training) and working context. The former can lead to an inability to analyze novel situations, extreme outcomes, etc., e.g., it's no accident that physics-based models remain superior to AI when it comes to forecasting extreme events. The latter can impede the quality of the AI's output even as subsequent models have grown better at following instructions and understanding the user's goals/intent. 

Overall, AI can create stunning visualizations/infographics. I use it at work for just that purpose, e.g., creating graphics for presentation slides. I still need to verify the output, as every now and then things need to be tweaked. AI can also assess the strengths and weaknesses of arguments and help identify "blind spots." I don't believe AI should be used to generate all arguments, much less as a substitute for human judgment. That remains the case even as more advanced features emerge: greater capacity in AI coding (Claude Code, Codex, etc.) and sustained work carried out through agentic AI.

Finally, these are my opinions. Others may well disagree with me. 

Huge sample


5 hours ago, FPizz said:

Huge sample

It's just one paper. A lot more research will be needed to strengthen understanding. The point is that one can't dismiss the idea that AI-generated headlines might be able to capture greater attention. That doesn't mean the most skilled headline writers can't beat AI, at least for now. Moreover, as AI continues to improve, it will likely generate even more potent headlines, especially if it more effectively ties token prediction to psychological impact.

In any case, we're very early in the AI age. A lot will change in coming years. At this time, I don't think the doom-and-gloom scenarios are the most likely outcome, though I expect that there will be dislocations in some industries and occupations. I also suspect that there will remain large latitude for human agency and human judgment will remain crucial for the foreseeable future. 


Mm... fwiw, my own experience with AI tools, and my impressions from it, are more favorable than that.

It comes down to one's own responsibility for asking the question in the right way.

One aspect I will fault AI on is that it sometimes hooks its teeth into an adjective/verb one has chosen and takes it as gospel, when perhaps the user had the 2nd or 3rd dictionary definition in mind. It appears as though the AI probabilistically leans on the most common usage? Speculation. Either way, it doesn't offer suggestions out of suspicion about what the user really meant - probably because it's not a human being in that sense. AI isn't yet at the level of "what were they really thinking." This can tint the context when it's subtle, and at other times it outright diverts conversations down paths users weren't really intending.

Thing is, it was always because of the user's word choice.  

Cobalt says "...the vibes are off," and Don mentions nuance... etc. You know, those strike me as really being the state of the art of the technology not quite getting the "spirit" of the moment in an exchange, owing to its tendency to run with the most literal meaning of a turn of phrase or word choice. I've gone back through a conversation history, found the inflection points, and said, "I didn't mean to imply x, I meant more y," etc... and after a brief pause, the AI makes a course correction, so to speak. It can also help to use the customization option in the settings to color the type of experience you want. Mine says: no flattery, don't be obsequious. This actually helps, because if you use a word that's... a little off, the AI will be less likely to just ignore/accept it - it might even ask me how what I just said relates.

I could almost see a future where a new type of job req emerges in industries that have adopted/bought into AI: "AICE" employees - pronounced "acer." These are "AI Configuration Engineers."

What do you do? "I'm an Acer" for x-y-z. The job entails a full, intimate understanding of the tech and the circumstances, such that the engagement with the AI teases out the best solution without those distractions. Which, believe it or not, are hugely costly - even small deviations, and the expenditure of recovering from them, add to the growing data-center push-back over resource piggery; it is expensive for a lot of reasons.

This could all just be generational, too. It's important to bear in mind that this tech is like the Wright Brothers' first 90 feet of successful flight... well, proportionally, maybe a little farther along. But we're nowhere close to flying high-altitude international routes in that metaphor just yet. There are advances, lots of them, and that ambit of research is definitely not going to stop, for better or worse! Plus, imagine when quantum computing comes online - a computing core that weighs all possible readings at the same instant and chooses the top probability - now plug Gemini into that. Hm?

So far, by keeping tabs on my own concision when dealing with AI, I've found it the most advantageous accompaniment to both problem solving and the creative process since either the invention of the scientific calculator or biology's ability to dream. Emphasis on accompanying helper - we're a ways yet from it landing soup-to-nuts solutions in isolation.

