Claude slips offline
Storms of code can’t halt the tide
Again, still they bide
One 9 of uptime?
More like “nine minutes of sheen”
Cloud gods need better scenes
Starting to look like GitHub, frequently down.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I've been debloating some of my personal projects — you know how it goes, "keep adding one more thing" driven development.
I asked Claude Code to simplify the code. It spent ten minutes spinning, making countless edits. They all turned out to be superficial. It reduced the code by 3%.
Then I asked the same model (Sonnet) in my web chat UI to do the same thing, and it reduced it by 50% — the program remaining otherwise identical, in terms of appearance and behavior.
I love the agents but they are explicitly designed not to rewrite entire files, and sometimes doing that gives you way, way better results. 15x better, in this case!
(Even better might be to first rewrite it into user stories, instead of incidental implementation details... hmm...)
Hey, just as I was trying it out seriously for the first time.
Wait a minute. Did I bring Claude Code down?
It was poor Yorick. I knew him well.
https://en.wikipedia.org/wiki/Yorick
For paying users of Claude Code and other similar services, do you tend to switch to the free tiers of other providers while yours is down? Do you just not use any LLM-based tool in that time? What's your fallback?
ZAI's $3 coding plan is my "We have Claude at home."
Bitch, don't kill my vibe
https://youtu.be/GF8aaTu2kg0
Anybody who has experience in running infra for ML/AI/Data pipeline systems, are they drastically different from regular infra?
Yes they are. They work vastly differently in terms of hardware dependencies and data workflow.
Hardware dependencies: GPUs and TPUs and all that are not equal. You will have code and caches that only work with Google’s TPUs, other code and caches that only work with CUDA, etc.
Data workflow: you will have huge LLM models that need to be loaded at just the right time.
Oh wait, your model uses MoE? That means a request to the 200GB model that’s split over 10 “experts” may only touch maybe 20GB of it. So then it would be great if we could somehow pre-route a request to the right GPU that already has that specific expert loaded.
But wait! This is a long conversation, and the cache was actually on a different server. Now we need to reload the cache on the new server that actually has this particular expert preloaded in its GPU.
etc.
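The routing problem described above can be sketched as a scoring function over candidate servers. This is a toy model for illustration only: the server names, expert labels, and relative weights are made up, and real schedulers track far more state (memory pressure, queue depth, batching).

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """Toy model of an inference server (all fields illustrative)."""
    name: str
    loaded_experts: set[str]  # experts already resident in GPU memory
    cached_conversations: set[str] = field(default_factory=set)  # warm KV caches

def route_request(servers: list[Server], expert: str, conversation: str) -> Server:
    """Prefer a server that has both the needed expert loaded and the
    conversation's KV cache warm; loading expert weights is assumed to be
    the bigger cost, so it gets the higher weight."""
    def score(s: Server) -> int:
        has_expert = expert in s.loaded_experts            # avoids reloading GBs of weights
        has_cache = conversation in s.cached_conversations  # avoids re-prefilling the context
        return 2 * has_expert + has_cache
    return max(servers, key=score)

# gpu-1 has the expert but a cold cache; gpu-2 has the cache but not the
# expert. Under this toy weighting, gpu-1 wins.
servers = [
    Server("gpu-0", {"e1", "e2"}),
    Server("gpu-1", {"e7"}),
    Server("gpu-2", {"e3"}, {"conv-42"}),
]
best = route_request(servers, expert="e7", conversation="conv-42")
print(best.name)  # → gpu-1
```

The tension the parent comment describes falls out of the scoring: no single server may have both the right expert and the warm cache, so you either reload weights or re-prefill the cache.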
It’s very different, mostly because it’s new tech, it’s very expensive, and cost optimizations are difficult but impactful.
> mostly because it’s new tech
Do you then think it'll improve to reach the same stability as other kinds of infra, eventually, or are there more fundamental limits we might hit?
My intuition is that as the models do more with less and the hardware improves, we'll end up with more stability just because we'll be able to afford more redundancy.
Based on the status page it should be back to operational.
I think you forgot the word "AGAIN" from your title.
Have you seen their status page? Every single month is littered with yellow and red.
For those of us old-school programmers it makes little difference; only the vibe coders throwing away $200 a month on Claude subs will be the ones crying!
I’m an “old school programmer” just like you, but I still use Claude Code.
For greenfield projects it’s absolutely faster to churn out code I’ve written 100 times in the past. I don’t need to write another RBAC system, I just don’t. I don’t need to write another table implementation for a frontend data view.
Where Claude helps is speed and breadth. I can do a lot more in less time, and depending on what your goals are, this may or may not be valuable to you.
What kind of projects are you working on that aren't amenable to the sort of code reuse or abstraction that normally addresses this sort of "boilerplate"?
There are lots of projects like that, especially when doing work for external clients.
Very often they want to own all the code, so you cannot just abstract things in your own engine. It then very easily becomes the pragmatic choice to just use existing libraries and frameworks to implement these things when the client demands it.
Especially since every client wants different things.
At the same time, even though there are libraries available, it’s still work to stitch everything together.
For straightforward stuff, AI takes all that work out of your hands.
Writing boilerplate code is mostly creative copy-pasting.
If I were to do it, I would have most of the reusable code (e.g. of a RBAC system) written and documented once and kept unpublished. Then I would ask an AI tool to alter it, given a set of client-specific properties. It would be easier to review moderate changes to a familiar and proven piece of code. The result could be copied to the client-specific repo.
What do you use for RBAC today? Do you have AI rewrite it every time?
The author of the initial comment mentioned that customers of contract work prefer code which is 100% theirs, purpose-written, not a dependency, even vendored.
I was wondering about that as well, copy and paste has been a thing for a lot longer than LLMs...
Trusting an AI to write an RBAC system feels like asking for trouble
I’m always suspicious of comments like yours. You’ve written the same thing 100 times in the past and don’t have the basics in a snippets manager or a good project you can pull the implementation from? Did you really rewrite the same thing 100 times, and do you now prefer a tool that is slower and more resource-intensive than just having been a little bit efficient in the past and saved something you reuse all the time?
If you don’t have anything productive to add don’t say it.
I would place myself on the bridge between pre-internet coders and the modern generation. I use these types of tools and don’t consider myself a vibe coder.
Should this be “the Claude API is down”, or is there a specific one used (only) by Claude Code?
I noticed one single API error a few hours ago. Didn't seem to be down for long. (I prefer the occasional downtime here and there versus Gemini's ridiculous usage limits)
https://github.com/musistudio/claude-code-router
Anthropic's API is not your only choice for a Claude Code workflow.
you are absolutely right
Wow, great insight: here's how Claude being down is affecting code production globally.
It's Joever
It’s kind of embarrassing how many people in the comments seem to derive a sense of identity from not using AI. Before LLMs, I didn’t use them to code. Then there were LLMs, and I used them a little to code. Then they got better at code, and now I use them a little more.
Probably 20% of the code I produce is generated by LLMs, but all of the code I produce at this point is sanity checked by them. They’re insanely useful.
Zero of my identity is tied to how much of the code I write involves AI.
The irony is that by asserting how much you don’t tie your identity to AI, you, in turn, identify yourself in a certain way.
I’m reminded of that South Park episode with the goths. “I’m so much of a non-conformist I’m going to non-conform with the non-conformists.”
In the end it all doesn’t matter.
When not having Claude feels like you left your phone at home, I'd say no, using AI is very much a part of our identities.
The thief who stole the car is always a little bit more chatty about the stolen car.
Who are you trying to convince here?
I think you’ve put your finger on it. This isn’t about AI; it’s about the threat to people’s identity presented by AI. For a while now “writing code” has been a high-status profession, with a certain impenetrable mystique that “normies” can’t get past. AI has the potential to quite quickly shift “writing code” from a high-status profession that people respect to a commodity that those same normies can access.
For people whose identities and sense of self have been bolstered by membership in that high-status group, AI is a big threat - not because of the impact on their work, but because of its potential to remove their status; and if their status slips away, they may realise they have nothing much else left.
When people feel threatened by new technology they shout loud and proud about how they don’t use it and everything is just fine. Quite often that becomes a new identity. Let them rail and rage against the storm.
“Blow winds, and crack your cheeks! Rage! Blow!”
The image of Lear “a poor, infatuated, despised old man” seems curiously apt here.
Oh no. How will we ever write code without AI!
Step into a variant of the future where Claude is as important to the internet as AWS: the constant near-real-time rewriting of the web has to be stopped for four hours, causing an incredible hacking spree as the constant rewrites open, close, and re-open various holes.
There is a part of me thinking that my initial thoughts on LLMs were not accurate (like humanity's long-term reaction to its impact).
AI is going to make heroin look like a joke as more people integrate it into their lives. You're gonna have junkies doing some crazy shit just to get more AI credits.
Anyone who has actually known and dealt with heroin addicts can see that the only joke here is the hyperbolic outburst you’ve put on display.
What lengths did Cypher go to, to be plugged back in to a simulated world?
Time to start a business with all the fired devs acting as interim AI whenever Claude goes down.
Claude Meat? Meat Code? Copyright is pending
Thinking Meat
https://www.mit.edu/people/dpolicar/writing/prose/text/think...
Meatic?
Just switch to Gemini for the time being, assuming you didn't fall into the trap of Claude-specific config.
Zero moat.
Claude Code (the agent scaffolding) only works with the Claude API, I think? That’s at least a bit of a moat.
cursor-cli, codex, aider should all be roughly drop in replacements that can use non-Anthropic models
I also like Charm Crush.
Not really… https://github.com/musistudio/claude-code-router
I have a Frankenstein of a setup with this one. I use ZAI (GLM-4.6) for the base models, then Gemini's free tier for the search and image recognition. CCR intercepts the requests from Claude Code and sends them to each model/provider automatically.
I got annoyed at CCR's bloat and flakiness, though, and replicated it in like 50 lines of Python. (Well, I asked Frankenstein to build it for me, obviously... what a time to be alive.)
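The core of a minimal router like this is just a lookup from the requested model name to a provider endpoint and API key. A hypothetical sketch (this is not the commenter's actual script; the provider URLs, env-var names, and routing table are all made up for illustration):

```python
# Hypothetical routing table: map a model family (matched as a substring of
# the requested model name) to a base URL and the env var holding its key.
ROUTES = {
    "glm":    ("https://api.example-zai.invalid/v1", "ZAI_API_KEY"),
    "gemini": ("https://api.example-gemini.invalid/v1", "GEMINI_API_KEY"),
}
DEFAULT = ("https://api.anthropic.com/v1", "ANTHROPIC_API_KEY")

def choose_route(model: str) -> tuple[str, str]:
    """Pick (base_url, key_env_var) by substring match on the model name,
    falling back to the default provider."""
    name = model.lower()
    for family, route in ROUTES.items():
        if family in name:
            return route
    return DEFAULT

base_url, key_env = choose_route("GLM-4.6")
print(base_url, key_env)
```

The rest of such a proxy is boilerplate: accept the request Claude Code would have sent to Anthropic, swap the base URL and auth header per this table, and forward the body through unchanged.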
I couldn't fix any of my UI quality-of-life bugs so I had to work on actual backend logic and distributed state consistency. Not what I wanted for an early morning coding sesh. Nightmare! /s
By that I’m guessing the API is down, since Claude Code is just the harness?
Too many Anthropic “devrels” downvoting any posts calling out the ridiculously fragile position of Anthropic
Yes.
But what will all the vibe coders do then? Try to use their atrophied brain cells and, God forbid, try to… brain-code?
This now mind-numbingly rote style of HN comment is going to be really funny to look back on in the future, when this technology is as common as intellisense (e.g. almost a year ago now).
Joke's on you, I don't use intellisense.
I had a coworker at a big tech co (where we wrote primarily Java) who used VIM, without any extensions to make it easier with Java, and he wrote all his import declarations by hand. Maybe knowing exactly which sub-namespace you're pulling Java utils from is important. I'm willing to bet big that it's not.
The Eternal September phenomenon has hit Hacker News. What used to be filled with technical analysis and well-thought-out replies is now chock-full of quippy one-liners.
You're absolutely right! Let me rewrite that comment for you.
We're already there, I see less originality in the "well ackshully AI" comments than I do in the blatantly AI-generated blog posts
I vibe with all the AI, so stick that in your brain cells.
man, take your pills