
Do you get splotchy when you’re coming off, like a spray tan? And I know some people report an increase in new mole activity….have you noticed any of that?
No splotchy spots at all, just a nice even tan.
No new moles either, but I’ve heard some people can get them.

Definitely use sunblock or moisturizer with minimum SPF 15 on your face if you’ll spend any time outside or your face will get very dark.

Also, it does make my dick and balls get SUPER dark to the point where previous girlfriends and my wife have made comments and jokes about me having a BBC (99.9% European according to DNA tests).
 
No experience with MT2, but pinning MT1 at 1mg/week for maintenance really helps with sides of PT141 - I guess they are close enough for the body to acclimatize to both. MT1 and PT141 are FDA approved and considered safe, unlike MT2.
 
Yeah, you're right, I overlooked his UK comment. He probably won't find much cheaper than what he's already found unless someone from the UK can chime in with their sources. I used to order from med supply sites, but Discord can be a lot cheaper. I'm ashamed to say I used to use a *cough* certain supplier's bac water starting out (Tracy I love you), but the vials of water I have seen... yikes.
Medexsupply if you're in the US: no license required even though it says one is. A 25-pack of Hospira for less than $100.
 
> Do you get splotchy when you're coming off, like a spray tan? And I know some people report an increase in new mole activity... have you noticed any of that?
A lot of dudes here swear by MT-2. I always have to be contrarian and say I will never touch it again.

It actually worked waaaay too well for me...minimal nausea. I had people commenting on my tan within 3 days and they were convinced I had gone on vacation.

After 2 weeks, every single mole I had was significantly darker and noticeable. It made me look older AND my wife was convinced I had skin cancer (yeah, she knows I pin various things but she rarely has a clue what I'm using even if she stares at it in the fridge).

I stopped the MT-2... things took about 2 months, but the moles did mostly revert back to what they looked like before. I don't think it does anything to hide age spots; it really just seems to highlight them even more in my experience. (Keep in mind I got almost no sun while using this, so it's not like I was actively trying to tan.)

If I was gonna try anything again, it would be MT-1... much milder effects, and I'm interested in its anti-inflammatory effects.

I suspect even PT-141 may have some mild tanning benefits but MUCH milder.... I would need to review the receptor targets again.
 
One thing people should know about ChatGPT and other LLMs is that they cannot count. Unless it's calling Python or similar for math, you are better off asking for a formula and doing all the math yourself.
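To illustrate the "ask for the formula, do the math yourself" approach: below is a hedged sketch using generic peptide-reconstitution math (the function name and numbers are mine for illustration, not from any model output).

```python
# Minimal sketch: instead of trusting an LLM's arithmetic, get the formula
# and evaluate it yourself. Generic reconstitution math on a U-100 syringe.
def mg_per_unit(peptide_mg: float, bac_water_ml: float, units_drawn: float) -> float:
    """Dose drawn = concentration (mg/ml) * volume drawn (ml)."""
    concentration = peptide_mg / bac_water_ml   # mg per ml after reconstitution
    volume_ml = units_drawn / 100               # 100 units on a U-100 syringe = 1 ml
    return concentration * volume_ml

# 10 mg vial in 2 ml bac water, drawing 10 units:
print(mg_per_unit(peptide_mg=10, bac_water_ml=2, units_drawn=10))  # → 0.5 mg
```

Two lines of arithmetic you can verify by eye, versus a chatbot answer you can't.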

This is outdated information. There are models that do math just fine: most recently GPT-4o1 and the various o3 models, Deepseek R1, and some of the recently released Gemini 2.0 models.

When I write, "just fine" I mean they exceed human capabilities in most cases. A friend of mine is dealing with a renovation project and his engineer produced a beam design with a steel flitch plate that was overkill for the application. Beam and tributary load calculations are something I'm quite familiar with, but they get complicated with flitch plates and I am not an engineer by trade, nor do I have access to the modeling software commonly used for such things.

In any case, I used both GPT-4o1 and Deepseek R1 to produce a beam design. Both worked, though the former was better. I spot checked the calculations for accuracy. Given that GPT-4o1 yielded a better design (both were correct, one was easier to build), I iterated on it and then had the model produce formulas that I could use to validate the correctness of the calculation independently. I used Deepseek R1 to do some of that validation and hand-calculation for others.
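For context, the kind of spot-check described above can be done by hand in a few lines. This is a hedged sketch with illustrative loads and span (the numbers are mine, not from the actual project), using the standard tributary-area and simple-span moment formulas.

```python
# Illustrative tributary-load spot-check for a uniformly loaded simple beam.
# Loads, tributary width, and span below are made-up example values.
def beam_line_load(dead_psf: float, live_psf: float, trib_width_ft: float) -> float:
    """Uniform line load on the beam (plf) from its tributary area."""
    return (dead_psf + live_psf) * trib_width_ft

def max_moment_ftlb(w_plf: float, span_ft: float) -> float:
    """Max bending moment for a uniform load on a simple span: M = w*L^2/8."""
    return w_plf * span_ft ** 2 / 8

w = beam_line_load(dead_psf=15, live_psf=40, trib_width_ft=8)  # 440 plf
print(max_moment_ftlb(w, span_ft=16))                           # → 14080.0 ft-lb
```

Checking a model's output against numbers like these is exactly the sort of independent validation the post describes.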
 
> This is outdated information. There are models that do math just fine. [...] I used both GPT-4o1 and Deepseek R1 to produce a beam design. Both worked, though the former was better.
These models are advancing so fast. I saw one company doing automated building inspections from drone + robot imagery and ChatGPT analysis of the imagery. It bypasses the whole traditional computer vision stack.
 
> This is outdated information. There are models that do math just fine. [...] I used both GPT-4o1 and Deepseek R1 to produce a beam design. Both worked, though the former was better.
I would not trust beam calculations from these models without double-checking myself. Transformer-based LLMs can't count. Think of them as very smart autocorrect with next-word prediction. They can 'guess' the answer, but they can't count. It's easy to spot a pure LLM-based chatbot by asking for the answer to something like 1+1+1+1+1-1+1+1. The best these models can do (if we are talking OpenAI models) is call a plugin to do the math elsewhere: they usually create a formula and send it to a math plugin or a Python interpreter.
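The "send it to a Python interpreter" pattern can be sketched in a few lines: instead of letting the model guess arithmetic, the expression gets evaluated by a tiny AST walker, roughly the way a math plugin would. This is a minimal illustration, not any vendor's actual plugin code.

```python
# Minimal sketch of tool-based math: safely evaluate an arithmetic expression
# (only +, -, *, / on numbers) with Python's ast module instead of an LLM guess.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("1+1+1+1+1-1+1+1"))  # → 6
```

The interpreter never guesses; the model's only job is to produce the expression.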

Just ask the model to calculate daily Test P amounts when transitioning from Test C/E twice a week, taking cumulative blood Test levels into account to keep them at a constant average level. If it can't do something as simple as the half-life equation for two compounds, you can imagine what it does with more complex engineering problems.
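The half-life problem above is straightforward to do yourself with first-order elimination. Below is a hedged sketch; the half-life value, doses, and schedule are illustrative assumptions for the math only, not dosing advice.

```python
# Sketch: daily blood-level simulation under first-order (exponential) decay.
# Half-life and doses are illustrative assumptions, not medical guidance.
import math

def simulate(doses_mg: dict, half_life_days: float, days: int) -> list:
    """Daily levels: each day the level decays, then that day's dose is added."""
    k = math.log(2) / half_life_days      # elimination rate constant
    level, out = 0.0, []
    for day in range(days):
        level *= math.exp(-k)             # one day of exponential decay
        level += doses_mg.get(day, 0.0)   # dose administered that day, if any
        out.append(level)
    return out

# e.g. 100 mg on days 0 and 3 of each week, assumed ~4.5-day half-life
twice_weekly = {d: 100.0 for d in range(28) if d % 7 in (0, 3)}
levels = simulate(twice_weekly, half_life_days=4.5, days=28)
```

Comparing week-1 and week-4 troughs shows the accumulation toward a steady-state average, which is exactly what you'd want the model's answer to account for.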
 
> I would not trust beam calculations from these models without double checking myself. Transformer-based LLMs can't count. [...]

I appreciate your effort to explain, but I can assure you that I am intimately familiar with the various models and their behavior. I would encourage you to revisit your conclusions, specifically with regard to "reasoning" models like 4o1 or Deepseek r1. Test it yourself and see what you think.

As for trusting beam calcs, it was an effort to quickly evaluate the engineer's work. If I were producing something to be built, I would certainly validate it either by hand or with the modeling software commonly used. The benefit of using the LLM was that it allowed me to iterate quickly and test various solutions, which is where I think the engineer went awry.

Local building code required a particular safety factor. In this case, the beam supported roof rafters which might see a snow load, so there was a static load as well as a substantial live load. The engineer then added a 15% safety factor by under-rating the materials, both the LVLs and the flitch plate. Then he produced a design based on a guess, I presume, and went with it. The final result exceeded the deflection requirement by almost 2x. My recollection is that the code-required deflection was L/240 (length of the beam divided by 240), and the engineer's design with all of the accumulated safety factors was something like L/500 or L/400. L/360 would be nice on a floor that someone was going to walk on, but this was for a roof. I'm guessing there was a sort of sunk cost fallacy at play there. He pulled a design out of his ass which exceeded the requirement, so he called it good.
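The L/240 check described above is a one-formula hand calculation for a uniformly loaded simple span. This is a hedged sketch; E, I, the line load, and the span are illustrative assumptions, not the actual project's numbers.

```python
# Sketch of a code-deflection check: midspan deflection of a uniformly loaded
# simple beam vs. the L/240 limit. All inputs below are made-up example values.
def midspan_deflection_in(w_pli: float, L_in: float, E_psi: float, I_in4: float) -> float:
    """delta = 5*w*L^4 / (384*E*I) for a uniform load on a simple span."""
    return 5 * w_pli * L_in ** 4 / (384 * E_psi * I_in4)

span_in = 16 * 12                       # assumed 16 ft span, in inches
delta = midspan_deflection_in(w_pli=440 / 12, L_in=span_in,
                              E_psi=1.9e6, I_in4=600)
limit = span_in / 240                   # the L/240 limit mentioned in the post
print(delta <= limit, round(span_in / delta))  # passes, and shows the L/x achieved
```

Computing the achieved L/x ratio this way makes it obvious when a design is merely adequate versus wastefully overbuilt.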

Anyway, while I wouldn't trust it, per se, I was pretty impressed with its ability to produce a solution rapidly.
 
> These models are advancing so fast. I saw one company doing automated building inspections from drone + robot imagery and ChatGPT analysis of the imagery.

It's neat, but we've only just begun to see interesting applications. Generally speaking, enterprises are buying chatbots, pointing them at their internal documentation, proclaiming loudly that they are doing AI, and are surprised that magic doesn't happen.

The very neat thing about Deepseek R1 isn't so much the efficiency gains, which were all the product of well-understood advancements in the field; those represented proof of what could be done right now in terms of training cost, which is to be expected.

What I find especially interesting is the facility it includes for reinforcement learning. That is, there is an API for defining a reward function for fine-tuning after training. Prior to Deepseek, this was all done in the training process. Being open source, anyone can take the model, develop an RL framework, and fine-tune it for specific applications, like the automation of building inspections or whatever else you like.
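To make the reward-function idea concrete, here is a purely illustrative sketch: an RL fine-tuning setup scores model completions so the trainer can reinforce the ones you want. The function, the scoring rules, and the building-inspection framing are hypothetical, not any real framework's API.

```python
# Hypothetical reward function for RL fine-tuning on an inspection task:
# higher reward for completions that name an inspected element and include
# a machine-readable severity. Purely illustrative, not a real framework API.
import re

def inspection_reward(completion: str) -> float:
    """Score a model completion between 0.0 and 1.0."""
    score = 0.0
    if re.search(r"\b(roof|wall|beam|foundation)\b", completion, re.I):
        score += 0.5   # names a structural element being inspected
    if re.search(r"\bseverity:\s*(low|medium|high)\b", completion, re.I):
        score += 0.5   # includes a parseable severity field
    return score

print(inspection_reward("Crack in foundation. Severity: high"))  # → 1.0
```

A trainer would sample completions, score them with a function like this, and update the model toward higher-reward outputs; the reward code itself stays this simple.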
 