

Ohh, that would be all the developers at jobs with daily AI use quotas running scheduled scripts to make it look like they are using AI so they don’t lose their jobs.


He’s whining about having to pay a tiny fraction of a fraction of his wealth to support “the poors”.


That just moves the “problem” closer to them.
Why is the difference so pronounced only in the northern hemisphere? If I understand the math behind the projection correctly, the equator should be true to scale, and the distortion should grow the further north AND south you go.
This image shows the extreme southern latitudes as almost equal to their true area. Is the image wrong, or am I misunderstanding something about the projection?
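For what it’s worth, a quick sketch of the standard Mercator math (the function name is mine): the linear scale at latitude φ is sec(φ), so the area inflation is sec²(φ), which is perfectly symmetric about the equator.

```python
import math

def mercator_area_inflation(lat_deg: float) -> float:
    """Area inflation factor under the standard Mercator projection.

    Linear scale at latitude phi is sec(phi), so area scale is sec^2(phi).
    """
    phi = math.radians(lat_deg)
    return 1.0 / math.cos(phi) ** 2

# Symmetric about the equator: +60 and -60 inflate by the same factor.
for lat in (0, 30, -30, 60, -60, 80, -80):
    print(f"{lat:+4d} deg: x{mercator_area_inflation(lat):7.2f}")
```

So by the math, the southern distortion should mirror the northern one exactly.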


Someone’s gotta record the fascism.


Why are you so focused on just the training?
Because I work with LLMs daily. I understand how they work. No matter how much I type at an LLM, its behavior will never fundamentally change without regenerating the model. It never learns anything from the content of the context.
The model is like the word processor application; the context is just the document.
A Jr developer will actually learn and grow into a Sr developer, and will retain that knowledge as they move from job to job. That is fundamentally different from how an LLM works.
I’m not anti-AI. I’m not “crying” about their issues. I’m just discussing this from a practical standpoint.
LLMs do not learn.
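To make that concrete, here’s a minimal sketch using Hugging Face transformers, with gpt2 purely as a stand-in model: no matter what you put in the context, generation never writes to the weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is just an illustrative stand-in; any causal LM behaves the same.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

before = {k: v.clone() for k, v in model.state_dict().items()}

# "Teach" the model something via the context, then generate from it.
prompt = "From now on, always answer in French. Hello!"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    model.generate(ids, max_new_tokens=20)

# The weights are bit-for-bit identical: inference never updates them.
after = model.state_dict()
assert all(torch.equal(before[k], after[k]) for k in before)
print("weights unchanged after generation")
```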


You do understand that the model weights and the context are not the same thing, right? They operate completely differently and serve different purposes.
Trying to change the model’s behavior using instructions in the context is going to fail. That’s like trying to change how a word processor works by typing into the document. Sure, you can kind of get the formatting you want if you manhandle the data, but you haven’t changed how the application works.
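A toy illustration of that separation (not a real transformer, just the shape of it): the weights are a fixed store, and the context only flows through them.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "application": parameters fixed at training time.
W = rng.normal(size=(8, 8))

def forward(context: np.ndarray) -> np.ndarray:
    # The context flows *through* the weights; it never writes to them.
    return context @ W

doc_a = rng.normal(size=(3, 8))  # one "document" (context window)
doc_b = rng.normal(size=(5, 8))  # a different one

forward(doc_a)
forward(doc_b)
# W is identical no matter what was typed into the "document".
```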


What part about how LLMs actually work do you not understand?
“Customizing” is just dumping more data into its context. You can’t actually change the root behavior of an LLM without rebuilding its model.


Unless you are retraining the model locally at your 23-acre data center in your garage after every interaction, it’s still not learning anything. You are just dumping more data into its temporary context.
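The difference as a toy PyTorch sketch: feeding in more context is just another forward pass, while learning is an optimizer step that actually rewrites the parameters.

```python
import torch

model = torch.nn.Linear(4, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# More context: just more input. The weights stay untouched.
x = torch.randn(2, 4)
with torch.no_grad():
    model(x)

# Learning: a gradient step that actually rewrites the weights.
opt.zero_grad()
loss = model(x).pow(2).mean()
loss.backward()
opt.step()  # only now do the parameters change
```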


… That keeps making the same mistakes over and over again because it never actually learns from what you try to teach it.
Do you know what we call someone who doesn’t write a single line of code?
Anything other than a “developer”.