Qingdao Sigma Chemical Co., Ltd (International, US, EU, Canada and Australia domestic)

This is outdated information. There are models that do math just fine: most recently GPT-4o1 and the various o3 models, along with Deepseek R1 and some of the recently released Gemini 2.0 models.

When I write, "just fine" I mean they exceed human capabilities in most cases. A friend of mine is dealing with a renovation project and his engineer produced a beam design with a steel flitch plate that was overkill for the application. Beam and tributary load calculations are something I'm quite familiar with, but they get complicated with flitch plates and I am not an engineer by trade, nor do I have access to the modeling software commonly used for such things.

In any case, I used both GPT-4o1 and Deepseek R1 to produce a beam design. Both worked, though the former was better. I spot checked the calculations for accuracy. Given that GPT-4o1 yielded a better design (both were correct, one was easier to build), I iterated on it and then had the model produce formulas that I could use to validate the correctness of the calculation independently. I used Deepseek R1 to do some of that validation and hand-calculation for others.
I would not trust beam calculations from these models without double-checking myself. Transformer-based LLMs can't count. Think of them as very smart autocorrect with next-word prediction. They can 'guess' the answer, but they can't count. It's easy to spot a pure LLM-based chatbot by asking it the answer to something like 1+1+1+1+1-1+1+1. The best these models can do (if we are talking OpenAI models) is call a plugin to do the math elsewhere - they usually create a formula and send it to a math plugin or Python interpreter.
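
For illustration, here is a minimal sketch of that dispatch pattern in Python. The safe_eval helper is hypothetical; it stands in for whatever math plugin or interpreter the model hands the formula to, not OpenAI's actual tooling:

    import ast
    import operator

    # The "model" emits a formula string; a separate evaluator does the math.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expr: str) -> float:
        """Evaluate a +-*/ arithmetic expression without exec(),
        roughly the way a math plugin would."""
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    print(safe_eval("1+1+1+1+1-1+1+1"))  # 6, computed rather than guessed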

Just ask the model to calculate daily Test P amounts when transitioning from Test C/E twice a week, taking cumulative blood Test levels into account to maintain them at a constant average level. If it can't do something as simple as a half-life equation for two compounds, you can imagine what it does with more complex engineering problems.
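
To make that ask concrete, here is a toy sketch of the calculation, assuming simple first-order decay. The half-lives and doses are made-up placeholders, it ignores ester weights and any taper during the transition, and it is not dosing advice:

    import math

    TEST_C_HALF_LIFE = 8.0   # days, assumed placeholder
    TEST_P_HALF_LIFE = 0.8   # days, assumed placeholder

    def level(doses, half_life, t):
        """Blood level at day t from past (day, mg) injections,
        modeled as superposed exponential decay."""
        k = math.log(2) / half_life
        return sum(mg * math.exp(-k * (t - t0)) for t0, mg in doses if t0 <= t)

    # 125 mg Test C twice a week for 8 weeks, then read the level at the switch.
    test_c = [(3.5 * i, 125.0) for i in range(16)]
    switch_level = level(test_c, TEST_C_HALF_LIFE, 56.0)

    # Daily Test P dose whose steady-state average matches that level:
    # average = dose / (k * interval)  =>  dose = level * k * interval
    k_p = math.log(2) / TEST_P_HALF_LIFE
    daily_p = switch_level * k_p * 1.0
    print(f"level at switch ~{switch_level:.0f} mg-eq; daily Test P ~{daily_p:.0f} mg")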
 

I appreciate your effort to explain, but I can assure you that I am intimately familiar with the various models and their behavior. I would encourage you to revisit your conclusions, specifically with regard to "reasoning" models like 4o1 or Deepseek R1. Test it yourself and see what you think.

As for trusting beam calcs, it was an effort to quickly evaluate the engineer's work. If I were producing something to be built, I would certainly validate it either by hand or with the modeling software commonly used. The benefit of using the LLM was that it allowed me to iterate quickly and test various solutions, which is where I think the engineer went awry.

Local building code required a particular safety factor. In this case, the beam supported roof rafters which might see a snow load, so there was a static load as well as a substantial live load. The engineer then added a 15% safety factor by under-rating the materials, both the LVLs and the flitch plate. Then he produced a design based on a guess, I presume, and went with it. The final result exceeded the deflection requirement by almost 2x. My recollection is that the code-required deflection was L/240 (length of the beam divided by 240) and the engineer's design, with all of the accumulated safety factors, came in at something like L/500 or L/400. L/360 would be nice on a floor that someone was going to walk on, but this was for a roof. I'm guessing there was a sort of sunk cost fallacy at play there. He pulled a design out of his ass which exceeded the requirement, so he called it good.
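
For reference, the standard check here is the midspan deflection of a simply supported beam under uniform load, delta = 5wL^4 / (384EI), compared against the L/240 limit. A quick sketch with made-up numbers (the span, load, E, and I below are illustrative, not the actual beam from this story):

    # Simply supported beam, uniform load, US units converted to inches.
    def check_deflection(w_plf, span_ft, e_psi, i_in4, ratio=240):
        w = w_plf / 12.0           # lb/ft -> lb/in
        length = span_ft * 12.0    # ft -> in
        delta = 5 * w * length**4 / (384 * e_psi * i_in4)
        return delta, length / ratio

    delta, limit = check_deflection(w_plf=400, span_ft=14, e_psi=2.0e6, i_in4=310)
    print(f"deflection {delta:.2f} in vs allowed {limit:.2f} in (L/240)")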

Anyway, while I wouldn't trust it, per se, I was pretty impressed with its ability to produce a solution rapidly.
 
These models are advancing so fast. I saw one company doing automated building inspections from drone and robot imagery, with ChatGPT analysis of the images. It bypasses the whole traditional computer vision pipeline.

It's neat, but we've only just begun to see interesting applications. Generally speaking, enterprises are buying chatbots, pointing them at their internal documentation, proclaiming loudly that they are doing AI, and then being surprised that magic doesn't happen.

The very neat thing about Deepseek R1 isn't so much the efficiency gains, which were all the product of well-understood advancements in the field and were to be expected. It represented proof of what can be done right now in terms of training cost.

What I find especially interesting is the facility it includes for reinforcement learning. That is, there is an API for defining reward functions for fine-tuning after training. Prior to Deepseek, this was all done in the training process. Being open source, anyone can take the model, develop an RL framework, and fine-tune it for specific applications, like the automation of building inspections or whatever else you like.
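
To give a feel for what that looks like, here is a hypothetical sketch of a reward function for such a fine-tune. The inspection_reward and RLTrainer names are invented for illustration; they are not Deepseek's actual API:

    # Score a model-written building-inspection report: reward completions
    # that include the required sections, penalize ones that omit them.
    def inspection_reward(prompt: str, completion: str) -> float:
        required = ("location", "defect", "severity", "recommendation")
        hits = sum(1.0 for field in required if field in completion.lower())
        return hits / len(required)

    # An RL framework would then sample completions, score them with this
    # function, and update the policy with a PPO/GRPO-style step, e.g.:
    # trainer = RLTrainer(model, reward_fn=inspection_reward)  # hypothetical
    # trainer.train(dataset)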
 
I appreciate your effort to explain, but I can assure you that I am intimately familiar with the various models and their behavior. I would encourage you to revisit your conclusions, specifically with regard to "reasoning" models like 4o1 or Deepseek R1. Test it yourself and see what you think.
I tried. And here is the result. It matched my expectations.
 

Attachments

  • Screenshot_20250215-034805.webp (106.9 KB)
A lot of dudes here swear by MT-2. I always have to be contrarian and say I will never touch it again.

It actually worked waaaay too well for me...minimal nausea. I had people commenting on my tan within 3 days and they were convinced I had gone on vacation.

After 2 weeks, every single mole I had was significantly darker and noticeable. It made me look older AND my wife was convinced I had skin cancer (yeah, she knows I pin various things but she rarely has a clue what I'm using even if she stares at it in the fridge).

I stopped the MT-2...things took about 2 months but moles did mostly revert back to what they looked like before. I don't think it does anything to hide age spots - really just seems to highlight them even more in my experience. (Keep in mind I got almost no sun while using this, so it's not like I was actively trying to tan.)

If I was gonna try anything again, it would be MT-1... Much milder effects, and I'm interested in its anti-inflammatory effects.

I suspect even PT-141 may have some mild tanning benefits but MUCH milder.... I would need to review the receptor targets again.
MT-1 also darkened my moles significantly. You really have to combine these with UV radiation (whether it be the sun or short-duration, low-intensity tanning bed sessions). Otherwise the darkening of moles will be more noticeable than the darkening of the regular skin.
 
Did you notice any effect on scars?
 
All of the deca I have ever received from QSC has been clear. The last batch I got from them in the sale was yellowish. Not as yellow as tren, but definitely yellow tinted.

Is this normal?
 

I kind of wish I had never used it myself. I (ab)used it for well over ten years at 500mcg a day. It worked extremely well for me, too. I stopped both MT2 and tanning last year, and a lot has faded, but it left me with uneven skin tones all over. I am also polka-dotted with freckles and smudges. However, the main reason I kept using it was that it helped tamp down my appetite at night... but now we have GLP-1s. I didn't know at the time, but it also really improved libido and EQ... I do miss those effects.
 

Thou shalt not misquote a member. Never ends well.

Just don't do it.

Wrong
 
I got different oils from QSC and had different batches of everything. Sometimes the color was slightly different between batches. All was legit.
 
Welcome back. Was just going off your explanation. Sorry if I misunderstood.

 
You banned bruh?

Hi Mongo, how are you?

Thanks for writing, and sorry I am only seeing this now.

You were given the wrong info, I am afraid; the ban was not for "misquoting", lol.

After being repeatedly called a "tranny" by a deranged member, I finally asked whether he wanted a dick pic and told him I was going to send him one.
He proceeded to tag the owner in a series of messages about homosexual goings-on on Meso being unacceptable, calling for me to be banned.
And that's what happened (he got banned too).
You can see his messages and the one I was banned for here, but all subsequent posts have been deleted (fair enough).

I didn't think it was that bad; maybe it was more about my offer being an empty promise, lol.

Post in thread 'STOCK UP: Don't say you weren't warned! (US)' (#9701)

Post in thread 'STOCK UP: Don't say you weren't warned! (US)' (#9715)


As for the alleged "misquote" (again, not a reason for banning; when they ban you, they say why), all I did was remove a bit of swearing from the beginning of a post that was only a sentence and a bit long, which did not change its meaning at all.
As that was pointed out by the OP, though, I immediately rectified it by reposting the sentence in its entirety.
To be fair, I should have left it OG.
It was much better and more incisive, lol.

Anyway, here are the links to those posts of mine (#9701, #9715).
However, if you are so inclined, start reading from message #9639 to understand what the whole thing was about and what others had written beforehand.

Thanks for getting in touch.
It's always good seeing you.
Have a good weekend
:)

Post in thread 'Cat Café EU & US domestic'
 
Welcome back. Was just going off your explanation. Sorry if I misunderstood.

No worries.
Thank you.
Hope you are well and wish you a good weekend.
 