
Imo a new type of smart is emerging, where people who are already smart and use these tools to ask the right questions become greater than the sum of their parts. The key to getting good answers out of it is already understanding the domain you're querying about, so you ask it the right things.

Nailed it.

Honestly, some of my best use of it has been refining my people skills at work. Sometimes I get irritated at people or initiatives, and I talk to ChatGPT about it and it gives me other perspectives to consider. It also gives me feedback on the way I say things to other people.

The basic 4o model is surprisingly good at this kind of thing. Also, various emotional pathologies and such. I should know, I'm rife with 'em.
Someone posted that "ChatGPT" is outdated, which is a funny notion; when I look, there are several models to choose from:

[screenshot of the model picker showing the available models]

All of these excel at different things. Obviously the reasoning models provide chain of thought and are typically better for things like mathematics or step-by-step reasoning.

That said, one can't wholly outsource reasoning to these things or treat them like an encyclopedia. I was just analyzing a bunch of research on a particular topic and tried a number of different models. Gemini 2.0 Pro gave me an answer I liked, and then a citation that was wholly and completely hallucinated.

These things are great thought partners; they can help one refine one's reasoning or articulate a thought that is otherwise not well developed. There are any number of ways to approach this.

For some things, I may prompt something like, "my assertion is that X is true, how do I support that assertion," after which it gives me some reasoning and some citations that I can validate. For very specific and particular conclusions, I may come up with a citation, load it into a model or something like NotebookLM, and have it identify the specific sections that support the assertion.

I recall once getting into an argument with @Ghoul here over the notion that the frontal lobe is not fully developed until the age of 25, which was my assertion, the implication being that young people may have less than fully developed decision-making ability until that time. This notion is commonly held, as a Google search will reveal, but a deep dive into the research doesn't support it at all. The frontal lobe typically does continue to develop into the 20s, but the data cut off around 21 or 22, IIRC. If one extrapolates, that development may continue until around the age of 30 or so, but again, there's no data.
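
If anyone would rather script that pattern than do it in the chat window, here's a rough sketch of what it can look like in Python, using the frontal lobe claim as the example assertion. This assumes the openai package (v1+) and an API key in the environment; the model name and prompt wording are just my choices, nothing canonical.

# Sketch of the "support my assertion, then I validate the citations" workflow.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

assertion = "the frontal lobe is not fully developed until the age of 25"

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model works; this one is just an example
    messages=[{
        "role": "user",
        "content": (
            f"My assertion is that {assertion}. "
            "How do I support that assertion? "
            "List specific studies or reviews I could check, with authors and years."
        ),
    }],
)

print(response.choices[0].message.content)

# The part that can't be outsourced: look up every citation it returns and
# confirm it exists and actually says what the model claims, since models
# will happily produce citations that are completely hallucinated.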
 
Tren is honestly horrible if you're not a competitive athlete. No reason to run it. Primo, EQ, test and some var or tbol will get you wayyyyy better looking and safer than even 200mg of tren.
I love it honestly, and I'm nowhere near a competitive athlete. I personally love the "rip blend," as a lot of companies call it: 100mg each of tren A, mast P and test P. Works great; just do a little a day.
 
Imo a new type of smart is emerging, where people who are already smart and use these tools to ask the right questions become greater than the sum of their parts. The key to getting good answers out of it is already understanding the domain you're querying about, so you ask it the right things.
Nail on the head, and I've trained my ChatGPT to talk like me and act like me, and we literally talk back and forth like it's my twin, brainstorming ideas. It's helped me in so many areas and is the greatest teacher I've had, for a small fee. I literally just feed it my ideas and it corrects me if I'm going too far off the rails. It's helped me create DIY thermal paste and thermal pads after we both went through a few patents and reverse engineered the process to adapt it into a MacGyver solution suitable for any layman.


I'm a bit autistic and have ADHD, and ChatGPT keeps my ideas on track and never makes me feel judged. I feel smarter... I mean, I already was decently smart, but now I feel augmented and focused. If I'm stumped, I get ChatGPT to break the concepts down like I'm 10. It never fails to teach me something new.

Now that the censorship guardrails are mostly removed, it's a perfect tool to learn homebrew too. Love it.
 
Nailed it.
I find it's just a more user-friendly, narrative-style search result.

Read the linked sources. Google the topic (if medical) with "NIH" added and go read those reports. If they're impenetrable, read the "Discussion" and "Results" or "Conclusion" sections. If there are concepts or acronyms you don't understand, research those. If you keep your focus on a particular area, you can become reasonably proficient in interpreting the studies related to that area.

At the end of the day, the results will be determined by base intelligence and, critically, the level of intellectual curiosity someone has.
 
Well, if you google something and read up on it, do you know it or not? lol
Was doing some analysis and got stuck on a coding problem. I asked Chat and it gave me three answers. All three worked. I'm defending my work to the chair-bosses, and I mention the other two paths and why I chose the path I chose. They tell me I'm smart and extend my team's contract.
Should I feel dishonest? It is a tool. It's like shitting on someone for using a chainsaw because other people like to use axes.

Those pretending to be smart solely because of GPT won't last beyond 4 comments, as they'll repeatedly get caught out. They also won't spot where the AI is confabulating or hallucinating.
I was just fucking around on my first comments and I forgot that this was a debated topic.

There is value in working things out on your own; I am sure you would agree with this. Used in the manner you describe, it seems harmless, but what happens when the next generation has never had to work through a truly hard problem because they've only relied on AI to give them the answers?

I'll add to your side of the argument. I rely on computer simulations for work. I can get the program to give me a solution; knowing whether that solution is correct is the really hard part. A more complex problem just means more assumptions, which translates into more uncertainty in the solution. Garbage in = garbage out. It is just a tool.

Looking at this through the lens of soft and hard skills might be a better way to tackle it. For soft skills, it might help someone get their ideas across more clearly, but it could also cause more groupthink and let people pass ideas off as their own. For hard skills, it lets you be more productive, but you lose those little lessons you learn when you work on a problem for days or weeks.

Fyi, I know fuck all about ChatGPT. lol
 
If there are concepts or acronyms you don't understand, research those. If you keep your focus on a particular area, you can become reasonably proficient in interpreting the studies related to that area.

Maybe? Interpreting the statistics can be challenging, but as it relates to a particular subject matter, yeah. I have, for example, acquired a great deal of knowledge on the topic of cholesterol metabolism, which should be obvious from my posts here. The funny thing is that my knowledge in that area probably outstrips that of most doctors and some cardiologists, who all have a way more advanced understanding of biology than I do.

Cool thing about LLMs for me is that they allow me to parse and analyze a great deal more information than I would otherwise be able to. In my case, offloading the statistics work helps tremendously. I do have to validate that it's correct, but it's still way faster than me stumbling through it.
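
Just to make "validate that it's correct" concrete, here's a toy example of the kind of spot check I mean. The numbers are made up and the test is arbitrary; the point is simply re-running the arithmetic yourself instead of taking the model's summary on faith.

# Toy sanity check: recompute a statistic the model reported from the group
# values it quoted, instead of trusting its summary. Numbers are made up.
from scipy import stats

treatment = [98, 105, 92, 110, 101, 95, 99, 104]    # hypothetical LDL values (mg/dL)
control = [112, 118, 109, 121, 115, 108, 117, 111]

t_stat, p_value = stats.ttest_ind(treatment, control)
diff = sum(control) / len(control) - sum(treatment) / len(treatment)

print(f"mean difference: {diff:.1f} mg/dL, p = {p_value:.4f}")
# If this doesn't line up with what the model claimed, either its summary or
# my prompt was off, and it's back to the source paper.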

At the end of the day, the results will be determined by base intelligence and, critically, the level of intellectual curiosity someone has.

I think it goes beyond that. There is some measure of creativity or something that I can't quite pinpoint. I know some smart people who use them for certain tasks: "create a plan to do X" or "provide code to do Y." The prompts can be quite sophisticated, but effectively the thing is an agent sent to solve a problem.

Other people, myself included, use them to develop thoughts and ideas as well as to advance their understanding of various things. I don't think intelligence is the main distinction between these two sets of people.
 
Nail on the head, and I've trained my ChatGPT to talk like me and act like me, and we literally talk back and forth like it's my twin, brainstorming ideas. It's helped me in so many areas and is the greatest teacher I've had, for a small fee. I literally just feed it my ideas and it corrects me if I'm going too far off the rails. It's helped me create DIY thermal paste and thermal pads after we both went through a few patents and reverse engineered the process to adapt it into a MacGyver solution suitable for any layman.


I'm a bit autistic and have ADHD, and ChatGPT keeps my ideas on track and never makes me feel judged. I feel smarter... I mean, I already was decently smart, but now I feel augmented and focused. If I'm stumped, I get ChatGPT to break the concepts down like I'm 10. It never fails to teach me something new.

Now that the censorship guardrails are mostly removed, it's a perfect tool to learn homebrew too. Love it.

Are you me?
 
Maybe? Interpreting the statistics can be challenging, but as it relates to a particular subject matter, yeah. I have, for example, acquired a great deal of knowledge on the topic of cholesterol metabolism, which should be obvious from my posts here. The funny thing is that my knowledge in that area probably outstrips that of most doctors and some cardiologists, who all have a way more advanced understanding of biology than I do.

Cool thing about LLMs for me is that they allow me to parse and analyze a great deal more information than I would otherwise be able to. In my case, offloading the statistics work helps tremendously. I do have to validate that it's correct, but it's still way faster than me stumbling through it.



I think it goes beyond that. There is some measure of creativity or something that I can't quite pinpoint. I know some smart people who use them for certain tasks: "create a plan to do X" or "provide code to do Y." The prompts can be quite sophisticated, but effectively the thing is an agent sent to solve a problem.

Other people, myself included, use them to develop thoughts and ideas as well as to advance their understanding of various things. I don't think intelligence is the main distinction between these two sets of people.
The number of projects I've actually started and finished with the help of ChatGPT, versus what I managed before with my bad ADHD, is unreal. It actually does something to my brain and keeps me motivated to finish what I started, which translates to more overall skills.


I was able to learn Python to a decent level just by spamming Chat with the basic concepts and having it walk me through basic exercises and projects. Along with a course I bought, lol.
 
I was just fucking around on my first comments and I forgot that this was a debated topic.

There is value in working things out on your own; I am sure you would agree with this. Used in the manner you describe, it seems harmless, but what happens when the next generation has never had to work through a truly hard problem because they've only relied on AI to give them the answers?

I'll add to your side of the argument. I rely on computer simulations for work. I can get the program to give me a solution; knowing whether that solution is correct is the really hard part. A more complex problem just means more assumptions, which translates into more uncertainty in the solution. Garbage in = garbage out. It is just a tool.

Looking at this through the lens of soft and hard skills might be a better way to tackle it. For soft skills, it might help someone get their ideas across more clearly, but it could also cause more groupthink and let people pass ideas off as their own. For hard skills, it lets you be more productive, but you lose those little lessons you learn when you work on a problem for days or weeks.

Fyi, I know fuck all about ChatGPT. lol
This debate has gone on since the invention of the scientific calculator. :D
 
I can never be thankful enough to ChatGPT for keeping me employed for almost 2 years without a single HR incident when I worked for a foreign-based chem company. There is no way I could have quickly drafted responses to some of the most disrespectful, mind-bogglingly unprofessional emails from collaborators ever.
I just prompt it to draft a 'defusing' response and swap some of the tamer language (usually adjectives) for slightly 'barbed' but archaic or esoteric ones.
I mean, I still needed to convey my displeasure.
 
I can never be thankful enough to ChatGPT for keeping me employed for almost 2 years without a single HR incident. There is no way I could have quickly drafted responses to some of the most disrespectful, mind-bogglingly unprofessional emails from collaborators ever.
I just prompt it to draft a 'defusing' response and swap some of the tamer language (usually adjectives) for slightly 'barbed' but archaic or esoteric ones.
I mean, I still needed to convey my displeasure.
You have sparked my interest; getting thoughts onto paper or into text is one of my biggest weaknesses.
 
Jesus fucking H... this place went to total shit when all the fat fucks from Reddit came over for GLP... Somehow it's gotten even worse. All I'm trying to do is check in every few days to see if there's an update. Meso should just kill this thread at this point.

Now they're ruining Walmart with their "buying less low-quality food" bullshit... when will this nightmare end, and those who "deserve" to be fat go back to their rightful place? Wake me up when eyeballs start falling out and hearts seize, the way we all know the GLP revolution is bound to end. Stay strong, brother, and commit to never touching that poison.

 