…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.
The task was relatively simple, and it involved doing some 3D math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or bring back old ones.
I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.
The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because it didn’t work.
I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?
For reference, I used Opus 4.6 Extended.
I rarely use LLMs for generating code. Usually, by the time I’ve provided all the necessary context, I might as well have just written the code myself. I do use LLMs for research. As long as it’s understood that the response is only as accurate as the source material, they often do a decent job of distilling things down to what I’m actually looking for.
Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.
Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.
You also must aggressively manage the information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error and give it better instructions to steer it away from the pitfall it fell into. In the same vein, you need to reset ASAP after pushing past the 100k-token mark, because the models start melting into putty soon after (yes, even the “extended” 1M-window ones).
I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.
For your use case (3D math), what I recommend is decomposing your end goal into a series of pure functions that you’ll string together. Once you have that list, that’s where Claude comes in. Have it stub those functions for you, then have it implement them one at a time, reviewing the output of every one before proceeding.
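To make that concrete, here’s a minimal sketch of what such a decomposition could look like in Python. The function names and the choice of Rodrigues’ rotation are illustrative, not from the original post; the point is that each piece is a small pure function you can review and test independently:

```python
import math

def normalize(v):
    """Return v scaled to unit length; refuses zero vectors."""
    x, y, z = v
    n = math.sqrt(x * x + y * y + z * z)
    if n == 0.0:
        raise ValueError("cannot normalize zero vector")
    return (x / n, y / n, z / n)

def cross(a, b):
    """Cross product of two 3-vectors."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of v about an axis by angle (radians)."""
    k = normalize(axis)
    c, s = math.cos(angle), math.sin(angle)
    k_cross_v = cross(k, v)
    k_dot_v = sum(ki * vi for ki, vi in zip(k, v))
    # v' = v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))
    return tuple(
        vi * c + kv * s + ki * k_dot_v * (1 - c)
        for vi, kv, ki in zip(v, k_cross_v, k)
    )
```

Each function is trivially checkable on its own (e.g. rotating (1, 0, 0) ninety degrees about the z-axis should land on (0, 1, 0)), which is exactly what makes reviewing Claude’s output per-function tractable.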
My preferred way of using LLM coders is:
- plan only
- read the spec file I just wrote
- optionally ask me questions in ‘qa.md’; I’ll reply inline

Repeat until it stops asking me questions, then switch to a different model and ask again. I usually use both gpt5.3-codex AND Claude Sonnet.
Then I have it update the spec. I start a new session to have it implement. Finally, I review the code. If I don’t like it, I undo and revisit the spec. Usually it’s because I’m trying to do too much at once and need to break it down into multiple specs.
Adversarial reviews are also a great way to prune bad ideas and assumptions from plans. They have helped me out greatly, and often make the better LLMs go “the plan said do X, but doing that is an unknown, huge risk that may take longer than the rest of the plan”.
The superpowers plugin does the brainstorm, QA, design plan, implementation plan, implement, review flow quite well. It should aid the process of actually doing feature-type work. I also add adversarial reviews into the process; it saves a lot of time debugging what went wrong after implementation.
This is the most pragmatic take I’ve read, and it resonates strongly with my own experience. Claude can be a very useful tool, but like any other, there is a learning curve and often many sharp edges. I’ve had Claude build some reasonably complex codebases, but it takes work. It’s pretty decent at “coding” but pretty terrible at the rest of software engineering.
I think it’s pretty heavily dependent on what you’re trying to do. I’ve gotten a lot of push from higher-ups at my company to use Copilot wherever possible. So, I’ve spent a lot of time lately having Copilot + Opus write code for me. Most of what I’m doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it’s a pretty good experience.
However, if I ask it to do something totally new, it does only okay, more like what you’ve experienced. It takes a lot of hand-holding, but it usually gets the job done as long as you’re very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.
“Almost but not quite” is exactly my experience with Claude.
The only time I’ve had real success is telling it to do a simple API change that touches a dozen files. It took a while and I’m not sure it was faster than doing it manually, but at least it was less boring.
Possibly important context: I only started really using it a few weeks ago.
I’ve had an opposite experience. Here are some guidelines I follow:
- Set up a foundation of rules and knowledge for Claude to fall back on. I define expectations, common definitions, behaviors, and anything else that’s not project specific up front.
- in CLAUDE.md I reference different domains of behavior, definitions, and rules (Claude has conventions for storing this type of stuff, so ask it to handle organizing the information too)
- create a top-level project definition: this defines what “knowledge” is. It allows you to build up what Claude knows later on as you work on your project. “Update knowledge”, “add this to your knowledge”, etc
- create a top-level rule: all information in knowledge must have one source of truth. Whenever needed reference the original knowledge source instead of duplicating it. Now you can ask it to “review your knowledge”, “audit and flag knowledge”
- explicitly explain everything and leave nothing ambiguous; explain it like you’re explaining the problem to a new developer who’s not familiar with the plan or codebase at all. Don’t ask it to write code right away. Ask it to write a plan/spec. Review the plan, make changes, and discuss it until the plan is 100%. This plan can include implementation details if you’re OK with that, but it’s not necessary (sometimes I write a separate referenced file called implementation.md beside the plan and have the plan reference it).
- Your role as a developer is shifting from writing code, to writing specs, and reviewing code
- Once there is nothing left to describe, and no ambiguity in your plan, have it use your plan to write the code. This works amazingly well for me.
A benefit to this method is that there is less wasted effort on my part. If Claude writes the code wrong, I can trace the reason for the mistake to a gap in the plan. I can then update the plan, throw away the code (if I have to), and have Claude reimplement the code again.
Rinse and Repeat.
Keep knowledge, plans, and implementation details clearly separated (you can copy your latest successful knowledge files to new projects to get started on future projects even faster).
Keep the goals of each plan as small and granular as possible (it makes plans easier to define). Knowledge, plans, and implementation details all get tracked in your repository just like your code does.
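As an illustration, one of my plan files might be shaped roughly like this (the structure and names here are made up for the example, not a required format):

```markdown
# Plan: add per-user rate limiting

## Goal
One sentence describing the user-visible outcome.

## Knowledge references
- knowledge/api-conventions.md (single source of truth; do not duplicate here)

## Requirements
1. Limit requests per user per minute; configurable default.
2. Return a clear error when the limit is hit.

## Out of scope
- Per-endpoint limits (separate plan).

## Implementation details
- See implementation.md beside this file.
```

The key property is that every decision lives in exactly one place, so when the output is wrong, you can trace it to a specific gap in a specific file.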
I’m a career developer, and have been writing code for over 20 years. I’m adding this bit because I understand how AI-driven development can look like a threat to developers. Over this last year, though, I’ve had a shift in that thinking. I can take what I’ve learned through my career and use it to write successful specifications that Claude can turn into effective code. Claude may not solve all of our coding problems, but used effectively, it solves nearly everything you throw at it.
hey shaggy. I want to touch on your last point as a newer developer:
My department is finally seeing 10x development due to the shift from writing code to writing specs. The main issue is that our pipeline is now stuck at review, so all that extra output is effectively wasted. Do you have any tips on what worked for you, if you had a similar situation?
This is our new bottleneck too. Developers’ roles are shifting more and more toward spec writing and code review. I don’t think I’d call this wasted effort, though (unless the code produced is worse than what developers would have produced otherwise). I’d think of it as a good problem to have.
We’re doing several things to alleviate this, and I’m genuinely curious how other teams are handling this too.
- We have Claude running code reviews on our PRs too 😄. In our department, a PR isn’t expected to be reviewed by a dev until the author has addressed or reviewed and dismissed all of the issues Claude has brought up.
- There is pressure for developers on our team to become better reviewers. I think this is good, because reviewing code is a more valuable skill to prospective employers than writing it is anyway.
If you’re stuck at review you aren’t seeing 10x development, you’re seeing 10x code generation.
This is especially important because without the review/test/deploy part of the pipeline you aren’t actually seeing any progress towards business goals.
Once you do get these parts sorted, you can then look at what multiplier you’re seeing.
That’s not to say there isn’t an improvement in your workflow, just that you can’t say with any certainty what kind of improvement without measuring the end to end.
It might turn out that the rest of the pipeline is way easier, in which case your multiplier will be higher; it might also be much harder, in which case the multiplier will be lower.
I’m not taking shots, i mean it seriously, especially if you need to report any of this to the rest of the business.
edit: In addition, if it turns out that review is going to be a bottleneck, you can get extra resources pointed in that direction, which will benefit the workflow overall.
another edit: I would consider correctly managing the expectations of those you report to a vital skill.
Exactly this. My experience with our company’s wrapper on Claude lines up with OP’s, not this comment thread’s.
Everyone seems to forget that everything you write is a liability. You can’t have bugs in code that is never written or generated, comments that don’t exist never become inaccurate, and not duplicating “knowledge” into a repo doesn’t carry the risk of it drifting out of alignment with business goals as they change.
From what I’ve seen, people claiming a “10x increase” did not have a strong foundation to begin with and/or did not use tools like IDEs effectively. No offense to thread OP, whose comment itself reads like a generated response, but in the time he spent doing all of that, a strong engineer would be long done. Everything listed should be done before ever getting into code, along with business and product partners.
The reason you kept going around in circles and reintroducing bugs you had already gotten rid of is that LLMs don’t remember things. Every time you send a message, the entire conversation is fed back to the model so it has all the parts. Eventually it runs out of room and starts cutting off the beginning of the conversation, and now the LLM can’t “remember” what you were even talking about.
For that, you can ask it to update a documentation/status file on every change. You can manually add the goal and/or future tasks.
With that, I improved my success rate a lot, even when starting new sessions (add a note in the instructions file to use this file for reference, so you don’t have to remind it every time).
Kind of, but it really depends on the workflow. Simple 3D math doesn’t extend to a codebase big enough to be impacted by the context window.
Our work started giving us Claude access. Plugging Sonnet 4.6 into opencode, I had it do some Terragrunt code. It was mostly correct; highly documented languages seem to be where it does best. The modules I had it write cost four bucks of tokens total.
Using it just gave me the insane ick, though. I might resign myself to using it anyway because of our backlog and burnout.
It’s called “vibe” coding, not “correct” coding, for a reason.
That’s why people are wrong so often: they feel like something is right, but don’t check. That’s how you get anti-vaxxers, manosphere people, MAGA, QAnon, Brexit, etc.
you need to fully be able to program to work with these things, in my experience.
you have to explain what you want very specifically, in precise programming terms. i tried a preview of chatgpt codex and it’s working better than my free version of claude, but codex creates a whole virtual programming environment: you have to connect it to a github repository, then it spins up an instance with the tools you include and actually tests the code and fixes bugs before sending it back to you.
but you still need to be able to find the bugs and fix them yourself. oh, and i think they work best with python, but i’ve also used ruby and dart and it’s decent.
it’s kinda like a power tool: it’ll definitely help you a lot to fix a car, but if you can’t do it with wrenches it won’t help very much.

I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.
I’m an IT Infrastructure Manager by trade, and I got there through 20 years of supporting everything from desktop to datacenter including weird use cases like controlling systems in a research lab. On top of that, I’ve gotten under the hood of software in the form of running game servers in my spare time.
What you need to get good programs out of AI boils down to 3 things:
- The ability to teach an entity whose mistakes resemble those of a gifted child where it went wrong, a step or ten back from where it’s currently looking.
- The ability to provide useful beta test / debug output regarding programs which aren’t behaving as expected. This does include looking at an error log and having some idea what that error means.
- Comfort using (either executing or compiling depending on the language) source code associated with the language you’re doing things in. This might be as simple as “How do I run a Powershell script or verify that I meet the version and module requirements for the script in question?”, or it might be as complicated as building an executable in Visual Studio. Either way whatever the pipeline is from source to execution, it must be a pipeline you’re comfortable working with. If you’re doing things anywhere outside the IT administration space, it’s reasonable to be looking at Python as the best first path rather than Powershell. Personally, I must go where supported first party modules exist for the types of work I’m developing around. In IT Administration, that’s Powershell.
I’ve made tools which automate and improve my entire department’s approach to user data, device data, application inventory, patch management, vulnerability management, and these are changes I started making with a free product three months ago, and two months back I switched to the paid version.
Programming is sort of like conversation in an alien language. For that reason, if you can give precise instructions sometimes you really can pull something new into existence using LLM coding. It’s the same reason that you could say words which have never been said in that specific order before, and have an LLM translate them to Portuguese.
I always used to talk about how everything in a computer was math, and that what interested me more than quantum computing would be a machine which starts performing the same sorts of operations on words or concepts that computers of that day ('90s and '00s when “quantum” was being slapped on everything to mean “fast” or “powerful”) were doing on math. I said that the best indicator when linguistic computing arrives would be that without ever learning to program, I’d start being able to program. I was looking at “Dragon Naturally Speaking” when I had this idea. It was one of the earliest effective speech to text programs. I stopped learning to program immediately and focused exclusively on learning operations from that point forward.
I’ve been testing the code generation abilities of LLMs for about three years. Within the last six months I feel like I’m starting to see evidence that the associations being made internally by LLMs are complex enough to begin considering them the fulfillment of my childhood dream of a “word computer”.
All the shitty stuff about environment and theft of art is all there too, which sucks, but more because our economic model sucks than because LLMs either do or do not suck. If we had a framework for meeting everybody’s basic needs, this software in its current state has the potential to turn everyone with a passion for grammatical and technical precision into a concept based developer practically overnight.
I have no qualifications to judge the quality of the generated results, yet the generated results are always of great quality.
Do you seriously not realize how out of touch this sounds?
Of course it sounds out of touch. I didn’t say it, or anything like it. Just like the other commenter, you seem to have stopped after the first sentence.
20 years of IT experience from a support perspective does qualify me to put anybody in the programming space on notice. The tools might not be as good as a talented and well trained dev, but they’re already better than a lazy dev. The output I get from Claude Code takes effort to get running. It just takes less of it than the output from my outsourced offshore MSP.
I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.
Chatbots being deemed useful in tasks by people unqualified to make those judgments is a running problem.
I use it and it works. It doesn’t give you the right result in one shot, but neither does manual coding. You iterate and prompt again and again. In the end, it saves a ton of time. Engineers are definitely going to lose their jobs because fewer people are needed. I know it’s tough to accept this, and people will go through denial. Part of that is saying the AI code is junk. But you’ll find it can produce junk and then quickly fix it into the right solution faster than an engineer can. It sucks, but this is the new reality. The one cool thing, once you embrace it, is that you realize you can customize your favorite apps or even build anything you want from scratch.
You still need programmers because you need people proficient in programming to be able to tell how to fix the junk that it generates into working code.
customize your favorite apps
can you elaborate?
GitHub is full of open source apps. Sometimes the maintainer won’t add a feature you want. You can just clone the repo, ask Claude to do it, and then run your own version of it.
It sucks, but this is the new reality.
Sorry mate, but you drank the AI Kool-Aid from Sam Altman and the other tech oligarchs. The reality is that all of the major AI companies are deep in the red; OpenAI isn’t even making a profit on the $200 subscription.
The only reason people are able to burn thousands of tokens to vibecode their apps is that they don’t have to pay the price for that, the companies are. This money will run out soon and then we will see the real cost for the bigger models.
If a subscription for Claude Code costs $500 or even $1,000, will companies still pay for it, or let actual humans do the work? We will see. I seriously doubt it, and I don’t want to depend on a subscription-based service to do my work while my skills atrophy. Thank god my employer doesn’t force me to use AI.
Engineers are definitely going to lose their jobs
This kind of fear-mongering is what I despise most about the whole bubble.
I haven’t drunk any Kool-Aid. I’m speaking from my experience using it in my professional software engineering job, where I lead software projects. I’ve built things that used to take 20 weeks in 1 week with Claude. My employer does not really care about the cost of the tokens. And when they can have one engineer do 20 weeks of work in 1 week, that to them is actually a cost savings. I already ask myself the question… should I give this task to another engineer or just vibecode it myself?
OpenAI may not survive because they do have financial issues from overspending, but that barely matters. The company with the strongest coding LLM is Anthropic, and it doesn’t sound like they’re having financial difficulty. Either way, now that it is clear what is possible, some company will succeed. They have incentives to do it.
Like I said, it will suck for some people, but it’s hard to deny the reality at this point.
I’ve built things that used to take 20 weeks in 1 week with Claude.
That’s ridiculous. You’ve either been a bad coder even before the AI hype, or you’re simply lying. I have used these tools, and they’re not that good, nor do they make you that fast - except when you’re merging all of the proposed code blindly and hoping for the best. I fear for the future colleagues who will have to work with the raging dumpster fire you have created for them.
The company with the strongest coding LLM is Anthropic and it doesn’t sound like they’re having financial difficulty
Oh yes, they have the same problems OpenAI has. Just look at the vibecoding subreddits; you can see many people complaining about excessive rate limits and their models getting dumber. A healthy company wouldn’t cap token usage and introduce peak-hour throttling; that’s a big warning sign that they’re overspending as well.
its hard to deny the reality at this point
I only see one person here denying reality. You will be effed in a major way when your employer one day decides that the subscriptions are too expensive or tells you to limit your token usage.
I know it is a big change and will take some time to come to terms with it. But, it is here. I’m not going to argue anymore. It’s pointless.

Did you just pull a random infographic out of your ass without even mentioning the source? I reverse-searched it and it comes from Anthropic, of all places - the guys that run Claude Code.
Forbes took a look at that study, I love this money quote from it:
These flaws turn Anthropic’s dataset into an overstated labor-market conclusion. The study’s findings do not have the level of reliability required to sustain the breadth of the headline framing, because each conclusion rests on an exposure measure whose scope (1), construction (2, 3, 4, 5, 7), and interpretation (6, 8, 9, 10) remain contested.
So yeah, an AI company telling us that AI will theoretically replace our jobs, based on their own study with flawed data - damn, that’s trustworthy! /s
I’m not going to argue anymore. It’s pointless.
At least on this point we agree.
I think the last part you said is the best way to use LLMs. I am not confident in it building complex architectures, but if you want a dedicated single-use script or a very customised basic application for personal use, it will do it well.
Don’t just use it as a drop in replacement for a programmer; use it to automate menial tasks while employing trust but verify with every output it produces.
A well-written CLAUDE.md, plus a prompt that restricts it from auto-committing, auto-pushing, and auto-editing without explicit verification before doing anything, will keep everything in your control while also helping with menial maintenance tasks like repetitive sections or user tests.
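For example, the guardrail section of such a CLAUDE.md might look something like this (the wording is illustrative; as noted elsewhere in this thread, these are instructions the model follows, not a hard technical block):

```markdown
# CLAUDE.md (excerpt)

## Hard rules
- Never run `git commit` or `git push`; stop and ask me instead.
- Before editing any file, list the files you intend to touch and why,
  and wait for my explicit approval.
- Propose changes as a diff first; do not write files until I confirm.
```

Pairing this with actual tool-permission settings outside the prompt is still the safer approach.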
verify with every output it produces.
I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they’ve output then you spend more time than if you’d just written it, rob yourself of experience, and melt glaciers for no reason in the process.
prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification
Anything in the prompt is a suggestion, not a restriction. You are correct that you should restrict those actions, but it must be done outside of the chatbot layer. This is part of the problem with this stuff: people using it don’t understand what it is or how it works at all, and are being ridiculously irresponsible.
repetitive sections
Repetitive sections that are logic can be factored down, and should be for maintainability. Those that can’t be can still be generated in plenty of other ways: a list of words can be expanded into whatever repetitive boilerplate you need with sed, awk, a Python script, etc., and you’ll know nothing was hallucinated because the process was deterministic in the first place.
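As a sketch of that kind of deterministic expansion (the field names and template here are made up for illustration):

```python
# Expand a word list into repetitive boilerplate deterministically.
# No LLM involved, so nothing can be hallucinated: the output is a
# pure function of the inputs.
FIELDS = ["name", "email", "created_at"]

TEMPLATE = "def get_{field}(self):\n    return self._{field}\n"

def expand(fields, template):
    """Render the template once per field and join the results."""
    return "\n".join(template.format(field=f) for f in fields)

print(expand(FIELDS, TEMPLATE))
```

The same idea works with sed or awk; the point is that the generator, not its output, is what you review.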
user tests.
Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.
Also working on some 3d maths.
I’ve used the free versions a bit, but not really to the extent that I’d call it vibe coding. The chatbots often know where to find libraries or pre-existing functions that I don’t know about. They’re also okay at algorithms for well-defined problems, but often warn me not to do something I absolutely need to do, or vice versa. It’s very hit and miss on debugging: it’ll point out obvious stuff (typos) reliably, and it can usually handle iteration issues, but it generally doesn’t pick up on other things. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in new chats, but most complex issues defeat it. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you’re doing, and test that individual steps are doing what they need to do. The bots can’t really do any sort of planning or break a problem down into sub-problems, and they really suck at thinking about 3D stuff.
I have yet to be able to vibe code anything relatively involved. The closest I’ve come is an ffmpeg wrapper script to edit out scenes from a video with a fade-in/fade-out title card. But even then, I ended up having to debug it and add my own arg support at some point because it kept screwing things up. The first draft did do something, though.
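The shape I mean is something like this minimal sketch (flag names are made up, and it only builds the ffmpeg command rather than running it, so you can inspect the result before executing anything):

```python
import argparse

def build_cut_command(src, dst, start, end):
    """Build an ffmpeg command that keeps [start, end] of src via stream copy."""
    return [
        "ffmpeg", "-i", src,
        "-ss", str(start), "-to", str(end),
        "-c", "copy", dst,
    ]

def parse_args(argv):
    """Parse wrapper arguments; argv is a list so this is easy to test."""
    p = argparse.ArgumentParser(description="cut a clip from a video")
    p.add_argument("src")
    p.add_argument("dst")
    p.add_argument("--start", type=float, default=0.0)
    p.add_argument("--end", type=float, required=True)
    return p.parse_args(argv)

# Demo: print the command instead of invoking ffmpeg directly.
cmd = build_cut_command("in.mp4", "out.mp4", 0, 5)
print(" ".join(cmd))
```

Keeping the command construction separate from execution is exactly what made the debugging tractable once the chatbot’s version fell over.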
I find at this point that it’s still only useful if I have a very clear goal in mind with a lot of context on the area I need to make changes to. That lets me get a more specific prompt, and then I’ll still need to review the output. I have only ever gotten a successful one shot like this with tests.
Have you been coding professionally long?
I find that the only time I can use these chatbots for a task is when I already know what I’m doing, so that I can read the output and fix the issues. It’s like having junior devs on your team and being a code reviewer more than a full-time coder. They get a lot of things wrong, but there’s so much usable output that you can save a ton of time over doing everything yourself from scratch.
Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There’s no emotions though, so you can just be blunt and concise with feedback.
Nice comparison, but the bugs created by junior software developers are usually much easier to find than the bugs created by LLMs.
They get a lot of things wrong but there’s so much usable that you can save a ton of time over doing everything yourself from scratch.
Your experience with Junior devs has been quite different from mine.
I work with junior devs because someday they will be senior devs who owe me a favor, even though they’ve only ever cost me time.
Edit: I also work with junior devs because sometimes a tiny corner of my job is both mind-numbingly boring, and also weirdly difficult to automate away.
I assign that work to junior devs because I don’t want to do it.
In doing so, I am wasting the boss’s money, since I could do it faster.
But I consider it just another part of the price of hiring me, because it keeps me happy.
You can’t really just use Claude Code raw. You have to give it detailed instructions, use Claude skills, observe results, and update your prompts. It can be just as time-consuming, but rather than doing the productive work, you’re reviewing and correcting AI. People who have success using AI have invested time in their setup and are continuously adjusting it.
But all in all, it’s much faster. That’s the reason it is not useless. Everyone whines that it takes so much time, when no, it’s nowhere close to doing it manually. It’s not a magic pill, and you still need the know-how, but no, it is not “just as time-consuming”. You are more productive. But yes, it is also more boring.
The biggest benefit of LLMs, even just helping with coding, is that I never have to open the hellsite of assholes that is Stack Overflow.
Fuck SO forever.
What did SO do to warrant such emotion?
“This question has been asked 1,000 times before. If you weren’t so stupid, you would have used the search and weeded through 10,000 results, most for outdated versions of your question, to find the answer. But then, you’re using PHP instead of GoSwift++, the hot new flash-in-the-pan language that’s .0001ms faster, so of course you’re stupid.”
– Average SO reply bot
Sums up my experience.
I didn’t use SO much, but the people can be… difficult.