Birbla

    Andrej Karpathy: Software in the era of AI [video] (youtube.com)
    1348 points by sandslash - 2 days ago

  • I think it's interesting to juxtapose traditional coding, neural network weights and prompts, because in many areas -- like the example of the self-driving module, whose code is being replaced by neural networks tuned to the target dataset representing the domain -- this will be quite useful.

    However I think it's important to make it clear that given the hardware constraints of many environments the applicability of what's being called software 2.0 and 3.0 will be severely limited.

    So instead of being replacements, these paradigms are more like extra tools in the tool belt. Code and prompts will live side by side, being used when convenient, but none a panacea.

    by gchamonlive - 2 days ago
  • Thank you YC for posting this before the talk became deprecated[1]

    1: https://x.com/karpathy/status/1935077692258558443

    by nico - 2 days ago
  • Well that showed up significantly faster than they said it would.
    by jppope - 2 days ago
  • loved the analogies! Karpathy is consistently one of the clearest thinkers out there.

    interesting that Waymo could do uninterrupted trips back in 2013, wonder what took them so long to expand? regulation? tail end of driving optimization issues?

    noticed one of the slides had a cross over 'AGI 2027'... ai-2027.com :)

    by anythingworks - 2 days ago
  • Love his analogies and clear eyed picture
    by AIorNot - 2 days ago
  • It's an interesting presentation, no doubt. The analogies eventually fail as analogies usually do.

    A recurring theme presented, however, is that LLM's are somehow not controlled by the corporations which expose them as a service. The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.

    Also, the OS analogy doesn't make sense to me. Perhaps this is because I do not subscribe to LLM's having reasoning capabilities nor able to reliably provide services an OS-like system can be shown to provide.

    A minor critique regarding the analogy equating LLM's to mainframes:

      Mainframes in the 1960's never "ran in the cloud" as it did
      not exist.  They still do not "run in the cloud" unless one
      includes simulators.
    
      Terminals in the 1960's - 1980's did not use networks.  They
      used dedicated serial cables or dial-up modems to connect
      either directly or through stat-mux concentrators.
    
      "Compute" was not "batched over users."  Mainframes either
      had jobs submitted and ran via operators (indirect execution)
      or supported multi-user time slicing (such as found in Unix).
    by AdieuToLogic - 2 days ago
  • The comparison of our current methods of interacting with LLMs (back and forth text) to old-school terminals is pretty interesting. I think there's still a lot of work to be done to optimize how we interact with these models, especially for non-dev consumers.
    by wjohn - 2 days ago
  • llms.txt makes a lot of sense, especially for LLMs to interact with http APIs autonomously.

    Seems like you could set an LLM loose and, like the Googlebot, have it start converting all HTML pages into llms.txt. Man, the future is crazy.
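
    For reference, a minimal sketch of what an llms.txt for a site might look like, going by the proposed format on llmstxt.org (the names and URLs here are hypothetical, and the spec may still evolve):

      # MenuGen

      > MenuGen turns a photo of a restaurant menu into illustrated dish cards.

      ## Docs

      - [API overview](https://example.com/docs/api.md): endpoints for uploading menus and fetching results
      - [FAQ](https://example.com/docs/faq.md): common questions about accounts and billing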

    by nodesocket - 2 days ago
  • This was my favorite talk at AISUS because it was so full of concrete insights I hadn't heard before and (even better) practical points about what to build now, in the immediate future. (To mention just one example: the "autonomy slider".)

    If it were up to me, which it very much is not, I would try to optimize the next AISUS for more of this. I felt like I was getting smarter as the talk went on.

    by dang - 2 days ago
  • Can we please stop standardizing on putting things in the root?

    /.well-known/ exists for this purpose.

    example.com/.well-known/llms.txt

    https://en.m.wikipedia.org/wiki/Well-known_URI

    by sneak - 2 days ago
  • A few days ago, I was introduced to the idea that when you're vibe coding, you're consulting a "genie": much like in the fables, you almost never get exactly what you asked for, but if your wishes are small, you might just get what you want.

    ThePrimeagen reviewed this article[1] a few days ago, and (I think) that's where I heard about it. (Can't re-watch it now, it's members only) 8(

    [1] https://medium.com/@drewwww/the-gambler-and-the-genie-08491d...

    by mikewarot - 2 days ago
  • His claim that governments don't use AI or are behind the curve is not accurate.

    Modern military drones are very much AI agents

    by fnord77 - 2 days ago
  • Great talk, thanks for putting it online so quickly. I liked the idea of making the generation / verification loop go brrr, and one way to do this is to make verification not just a human task, but a machine task, where possible.

    Yes, I am talking about formal verification, of course!

    That also goes nicely together with "keeping the AI on a tight leash". It seems to clash though with "English is the new programming language". So the question is, can you hide the formal stuff under the hood, just like you can hide a calculator tool for arithmetic? Use informal English on the surface, while some of it is interpreted as a formal expression, put to work, and then reflected back in English? I think that is possible, if you have a formal language and logic that is flexible enough, and close enough to informal English.

    Yes, I am talking about abstraction logic [1], of course :-)

    So the goal would be to have English (German, ...) as the ONLY programming language, invisibly backed underneath by abstraction logic.

    [1] http://abstractionlogic.com

    by practal - 2 days ago
  • It’s fascinating to think about what a true GUI for an LLM could be like.

    It immediately makes me think of an LLM that can generate a customized GUI for the topic at hand, which you can interact with in a non-linear way.

    by hgl - 2 days ago
  • I love the "people spirits" analogy. For casual tasks like vibecoding or boiling an egg, LLM errors aren't a big deal. But for critical work, we need rigorous checks—just like we do with human reasoning. That's the core of empirical science: we expect fallibility, so we verify. A great example is how early migration theories based on pottery were revised with better data like ancient DNA (see David Reich). Letting LLMs judge each other without solid external checks misses the point—leaderboard-style human rankings are often just as flawed.
    by bedit - 2 days ago
  • Where do these analogies break down?

    1. Similar cost structure to electricity, but non-essential utility (currently)?

    2. Like an operating system, but with non-determinism?

    3. Like programming, but ...?

    Where does the programming analogy break down?

    by nilirl - 2 days ago
  • I find Karpathy's focus on tightening the feedback loop between LLMs and humans interesting, because I've found I am the happiest when I extend the loop instead.

    When I have tried to "pair program" with an LLM, I have found it incredibly tedious, and not that useful. The insights it gives me are not that great if I'm optimising for response speed, and it just frustrates me rather than letting me go faster. Worse, often my brain just turns off while waiting for the LLM to respond.

    OTOH, when I work in a more async fashion, it feels freeing to just pass a problem to the AI. Then, I can stop thinking about it and work on something else. Later, I can come back to find the AI results, and I can proceed to adjust the prompt and re-generate, to slightly modify what the LLM produced, or sometimes to just accept its changes verbatim. I really like this process.

    by sothatsit - 2 days ago
  • I think that Andrej presents “Software 3.0” as a revolution, but in essence it is a natural evolution of abstractions.

    Abstractions don't eliminate the need to understand the underlying layers - they just hide them until something goes wrong.

    Software 3.0 is a step forward in convenience. But it is not a replacement for developers with a solid foundation; it is a tool for acceleration, amplification and scaling.

    If you know what is under the hood — you are irreplaceable. If you do not know — you become dependent on a tool that you do not always understand.

    by dmitrijbelikov - 2 days ago
  • Should we not treat LLMs more as a UX feature for interacting with a domain-specific (highly contextual) model, rather than expecting LLMs to provide the intelligence needed for software to act as a partner to humans?
    by ast0708 - 2 days ago
  • why does vibe coding still involve any code at all? why can't an AI directly control the registers of a computer processor and graphics card, controlling a computer directly? why can't it draw on the screen directly, connected directly to the rows and columns of an LCD screen? what if an AI agent was implemented in hardware, with a processor for AI, a normal computer processor for logic, and a processor that correlates UI elements to touches on the screen? and a network card, some RAM for temporary stuff like UI elements, and some persistent storage for vectors that represent UI elements and past conversations?
    by alightsoul - 2 days ago
  • Painful to watch. The new tech generation deserves better than hyped presentations from tech evangelists.

    This reminds me of the Three Amigos and Grady Booch evangelizing the future of software while ignoring the terrible output from Rational Software and the Unified Process.

    At least we got acknowledgment that self-driving remains unsolved: https://youtu.be/LCEmiRjPEtQ?t=1622

    And Waymo still requires extensive human intervention. Given Tesla's robotaxi timeline, this should crash their stock valuation...but likely won't.

    You can't discuss "vibe coding" without addressing security implications of the produced artifacts, or the fact that you're building on potentially stolen code, books, and copyrighted training data.

    And what exactly is Software 3.0? It was mentioned early then lost in discussions about making content "easier for agents."

    by belter - 2 days ago
  • In the era of AI and illiteracy...
    by nottorp - 2 days ago
  • Tight feedback loops are the key to working productively with software. I see that in codebases of up to 700k lines of code (legacy 30yo 4GL ERP systems).

    The best part is that AI-driven systems are fine with running even tighter loops than a sane human would tolerate.

    E.g. running the full linting, testing and E2E/simulation suite after any minor change, or generating 4 versions of a PR for the same task so that the human can just pick the best one.
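
    As a rough illustration of that kind of machine-checkable loop, here is a small sketch (the specific tools -- ruff, pytest, a make target -- are assumptions; substitute whatever the project actually uses):

      # Hypothetical tight loop: after every agent edit, run the full check
      # suite and only accept the change if everything passes.
      import subprocess

      CHECKS = [
          ["ruff", "check", "."],   # lint (assumed tooling)
          ["pytest", "-q"],         # unit tests
          ["make", "e2e"],          # E2E/simulation suite (assumed target)
      ]

      def change_is_acceptable() -> bool:
          for cmd in CHECKS:
              if subprocess.run(cmd).returncode != 0:
                  return False
          return True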

    by abdullin - 2 days ago
  • You can generate 1.0 programs with 3.0 programs. But can you generate 2.0 programs the same way?
    by benob - 2 days ago
  • The quite good blog post mentioned by Karpathy for working with LLMs when building software:

    - https://blog.nilenso.com/blog/2025/05/29/ai-assisted-coding/

    See also:

    - https://news.ycombinator.com/item?id=44242051

    by amai - 2 days ago
  • Software 3.0 is the code generated by the machine, not the prompts that generated it. The prompts don't even yield the same output; there is randomness.

    The new software world is the massive amount of code that will be burped out by these agents, and it should quickly dwarf the human output.

    by blobbers - 2 days ago
  • The beginning was painful to watch as is the cheering in this comment section.

    The 1.0, 2.0, and 3.0 labels simply don't make sense. They imply a kind of succession and replacement, and demonstrate a lack of understanding of how programming works. It sounds as marketing-oriented as "Web 3.0", born inside an echo chamber. And yet halfway through, the need for determinism/validation is now being reinvented.

    The analogies make use of cherry picked properties, which could apply to anything.

    by politelemon - 2 days ago
  • There were some cool ideas- I particularly liked "psychology of AI"

    Overall though I really feel like he is selling the idea that we are going to have to pay large corporations to be able to write code. Which is... terrifying.

    Also, as a lazy developer who is always trying to make AI do my job for me, it still kind of sucks, and it's not clear that it will make my life easier any time soon.

    by fergie - 1 day ago
  • Is it possible to vibe code NFT smart contracts with Software 3.0?
    by pera - 1 day ago
  • Can't believe they wanted to postpone this video by a few weeks
    by romain_batlle - 1 day ago
  • [flagged]
    by William_BB - 1 day ago
  • He sounds like Terrence Howard with his nonsense.
    by iLoveOncall - 1 day ago
  • Meanwhile, this morning I asked Claude 4 to write a simple EXIF normalizer. After two rounds of prompting it to double-check its code, I still had to point out that it makes no sense to load the entire image for re-orienting if the EXIF orientation is fine in the first place.

    Vibe vs reality: anyone actually working in the space daily can attest to how brittle these systems are.

    Maybe this changes in SWE with more automated tests in verifiable simulators, but the real world is far too complex to simulate in its vastness.

    by mentalgear - 1 day ago
  • The slide at 13m claims that LLMs flip the script on technology diffusion and give power to the people. Nothing could be further from the truth.

    Large corporations, which have become governments in all but name, are the only ones with the capability to create ML models of any real value. They're the only ones with access to vast amounts of information and resources to train the models. They introduce biases into the models, whether deliberately or not, that reinforce their own agenda. This means that the models will either avoid or promote certain topics. It doesn't take a genius to imagine what will happen when the advertising industry inevitably extends its reach into AI companies, if it hasn't already.

    Even open-weights models, which users can technically self-host, are opaque blobs of data that only large companies can create, and they have the same biases. Even most truly open source models are useless since no individual has access to the same large datasets that corporations use for training.

    So, no, LLMs are the same as any other technology, and actually make governments and corporations even more powerful than anything that came before. The users benefit tangentially, if at all, but will mostly be exploited as usual. Though it's unsurprising that someone deeply embedded in the AI industry would claim otherwise.

    by imiric - 1 day ago
  • His dismissal of smaller and local models suggests he underestimates their improvement potential. Give phi4 a run and see what I mean.
    by khalic - 1 day ago
  • It's fascinating to see his gears grinding at 22:55 when acknowledging that a human still has to review the thousand lines of LLM-generated code for bugs and security issues if they're "actually trying to get work done". Yet these are the tools that are supposed to make us hyperproductive? This is "Software 3.0"? Give me a break.
    by imiric - 1 day ago
  • I'd like to hear from Linux kernel developers. There is no significant software that has been written (plagiarized) by "AI". Why not ask the actual experts who deliver instead of talk?

    This whole thing is a religion.

    by bgwalter - 1 day ago
  • When I started coding at the age of 11, in machine code and assembly on the C64, the dream was to create software that creates software. Nowadays it's almost reality -- almost, because the devil is always in the details. When you're used to writing code, writing code is relatively fast, and you need that knowledge to debug issues with generated code. Yet now you're telling the AI to fix the bugs in the generated code. I see it kind of like layers: machine code gets overlaid with asm, which gets overlaid with C or whatever higher-level language, which then uses dogma/methodology like MVC and such, and on top of that there's now the AI input and generation layer.

    But it's not widely available. Affording more than one computer is a luxury; many households are struggling just to get by. When you see those setups with what, 5 or 7 Mac Minis -- which normal average Joe can afford that, or even has the knowledge to construct an LLM at home? I don't. This is a toy for rich people, just like public clouds such as AWS and GCP, which I left out because the cost is too high, running my own is also too expensive, and there are cheaper alternatives that not only cost less but also have way less overhead.

    What would be interesting to see is what those kids produced with their vibe coding.

    by darqis - 1 day ago
  • I was trying to do some reverse engineering with Claude using an MCP server I wrote for a game trainer program that supports Python scripts. The context window gets filled up _so_ fast. I think my server is returning too many addresses (hex) when Claude searches for values in memory, but it’s annoying. These things are so flaky.
    by yahoozoo - 1 day ago
  • I hope this excellent talk brings some much needed sense into the discourse around vibe coding.
    by kaycey2022 - 1 day ago
  • Vibe coding is making LEGO furniture; getting it running in the cloud is assembling an IKEA table for a busy restaurant
    by huksley - 1 day ago
  • What is this "clerk" library he used at this timestamp to tell him what to do? https://youtu.be/LCEmiRjPEtQ?si=XaC-oOMUxXp0DRU0&t=1991

    Gemini found it via screenshot or context: https://clerk.com/

    This is what he used for login on MenuGen: https://karpathy.bearblog.dev/vibe-coding-menugen/

    by beacon294 - 1 day ago
  • https://github.com/EvolvingAgentsLabs/llmunix

    An experiment to explore Karpathy's ideas

    by matiasmolinas - 1 day ago
  • the fanboying for this dude's opinion is insane.
    by Aeroi - 1 day ago
  • It's going to be very interesting to see how things evolve in enterprise IT, especially but not exclusively in regulated industries. As more SaaS services are at least partly vibe coded, how are CIOs going to understand and mitigate risk? As more internal developers are using LLM-powered coding interfaces and become less clear on exactly how their resulting code works, how will that codebase be maintained and incrementally updated with new features, especially in solo dev teams (which is common)?

    I easily see a huge future for agentic assistance in the enterprise, but I struggle mightily to see how many IT leaders would accept the output code of something like a menugen app as production-viable.

    Additionally, if you're licensing code from external vendors who've built their own products at least partly through LLM-driven superpowers, how do you have faith that they know how things work and won't inadvertently break something they don't know how to fix? This goes for niche tools (like Clerk, or Polar.sh or similar) as much as for big heavy things (like a CRM or ERP).

    I was on the CEO track about ten years ago and left it for a new career in big tech, and I don't envy the folks currently trying to figure out the future of safe, secure IT in the enterprise.

    by eitally - 1 day ago
  • Software 3.0 is where Engineers only create the kernel or seed of an idea. Then all users are developers creating their own branch using the feedback loop of their own behavior.
    by poorcedural - 1 day ago
  • He's talking about "LLM Utility companies going down and the world becoming dumber" as a sign of humanity's progress.

    This, if anything, should be a huge red flag.

    by greybox - 1 day ago
  • After Cursor is sold for $3B, they should transfer Karpathy 20%. (it also went viral before thanks to him tweeting about it)

    Great talk, as always. I actually disagree with him on a few things, for instance when he said "why would you go to ChatGPT and copy / paste, it makes much more sense to use a GUI that is integrated with your code, such as Cursor".

    Cursor and the like take a lot of the control from the user. If you optimize for speed then use Cursor. But if you optimize for balance of speed, control, and correctness, then using Cursor might not be the best solution, esp if you're not an expert of how to use it.

    It seems that Karpathy is mainly writing small apps these days; he's not working on large production systems where you cannot vibe code your way through (not yet, at least).

    by tinyhouse - 1 day ago
  • I can't believe I googled most of the dishes on the menu every time I went to the Thai restaurant. I've just realised how painful that was when I saw MenuGen!
    by researchai - 1 day ago
  • Why do non-users of LLM's like to despise/belittle them so much?

    Just don't use them, and, outcompete those who do. Or, use them and outcompete those who don't.

    Belittling/lamenting on any thread about them is not helpful and akin to spam.

    by ukprogrammer - 1 day ago
  • If we extrapolate these points about building tools for AI and letting the AI turn prompts into code I can’t help but reach the conclusion that future programming languages and their runtimes will be heavily influenced by the strengths and weaknesses of LLMs.

    What would the code of an application look like if it was optimized to be efficiently used by LLMs and not humans?

    * While LLMs do heavily tend towards expecting the same inputs/outputs as humans because of the training data, I don’t think this would inhibit the co-evolution of novel representations of software.

    by blixt - 1 day ago
  • 95% terrible expression of the landscape, 5% neatly dumbed down analogies.

    English is a terrible language for deterministic outcomes in complex/complicated systems. Vibe coders won't understand this until they are 2 years into building the thing.

    LLMs have their merits and he sometimes alludes to them, although it almost feels accidental.

    Also, you don't spend years studying computer science to learn the language/syntax, but rather the concepts and systems, which don't magically disappear with vibe coding.

    This whole direction is a cheeky Trojan horse. A dramatic problem, hidden in a flashy solution, to which a fix will be upsold 3 years from now.

    I'm excited to come back to this comment in 3 years.

    by tudorizer - 1 day ago
  • I know we've had thought leaders in tech before, but am I the only one who is getting a bit fed up with how practically anything a handful of people in the AI space say gets circulated everywhere in tech spaces at the moment?
    by kypro - 1 day ago
  • okay I’m practicing my new spiel:

    this focus on coding is the wrong level of abstraction

    coding is no longer the problem. the problem is getting the right context to the coding agent. this is much, much harder

    “vibe coding” is the new “horseless carriage”

    the job of the human engineer is “context wrangling”

    by jes5199 - 1 day ago
  • Full playable transcript https://www.appblit.com/scribe?v=LCEmiRjPEtQ
    by ldenoue - 1 day ago
  • It's interesting to see that people here and on Blind are more wary(?) of AI than people in, say, Reddit or YouTube comments
    by alightsoul - 1 day ago
  • Generally, people behind big revolutionary tech are the worst suited for understanding how it will do "in the wild". Forest for the trees and all that.

    Some good nuggets in this talk, specifically his concept that Software 1.0, 2.0 and 3.0 will all persist and all have unique use cases. I definitely agree with that. I disagree with his "anyone can vibe code" mindset - this works to a certain level of fidelity ("make an asteroids clone") but what he overlooks is his ability, honed over many years, to precisely document requirements that will translate directly to code that works in an expected way. If you can't write up a Jira epic that covers all bases of a project, you probably can't vibe code something beyond a toy project (or an obvious clone). LLM code falls apart under its own weight without a solid structure, and I don't think that will ever fundamentally change.

    Where we are going next, and a lot of effort is being put behind, is figuring out exactly how to "lengthen the leash" of AI through smart framing, careful context manipulation and structured requests. We obviously can have anyone vibe code a lot further if we abstract different elements into known areas and simply allow LLMs to stitch things together. This would allow much larger projects with a much higher success rate. In other words, I expect an AI Zapier/Yahoo Pipes evolution.

    Lastly, I think his concept of only having AI pushing "under 1000 line PRs" that he carefully reviews is more short-sighted. We are very, very early in learning how to control these big stupid brains. Incrementally, we will define sub-tasks that the AI can take over completely without anyone ever having to look at the code, because the output will always be within an accepted and tested range. The revolution will be at the middleware level.

    by lubujackson - 1 day ago
  • I'm a little surprised at how negative he is towards textual interfaces and text for representing information.
    by raffael_de - 1 day ago
  • It's interesting how researchers are ahead on some insights and are the ones introducing them; it feels like some of these ideas are new to them even though they may already exist, and they're helping present them to the world.

    A positive video all around; I've gotten to learn a lot from Andrej's YouTube account.

    LLMs are really strange; I don't know if I've seen a technology where the class of technologists who apply it (or can verify its applicability) has been so separate from, or unengaged with, the non-technical people looking to solve problems.

    by j45 - 1 day ago
  • I watched Karpathy's Intro to Large Language Models[0] not so long ago and must say that I'm a bit confused by this presentation, and it's a bit unclear to me what it adds.

    1.5 years ago he saw tool use in agent systems as the future of LLMs, which seemed reasonable to me. There was (and maybe still is) potential for a lot of business cases to be explored, but every system is defined by its boundaries nonetheless. We still don't know all the challenges we face at those boundaries, whether they could be modelled into a virtual space, handled by software, and therefore also potentially by AI and businesses.

    Now it all just seems to be analogies and what role LLMs could play in our modern landscape. We should treat LLMs as encapsulated systems of their own ...but sometimes an LLM becomes the operating system, sometimes it's the CPU, sometimes it's the mainframe from the 60s with time-sharing, a big fab complex, or even outright electricity itself?

    He's showing an iOS app, which seems to be, sorry for the dismissive tone, an example of a better-looking counter. This demo app was in a presentable state after a day, and it took him a week to implement Google's OAuth2 stuff. Is that somehow exciting? What was that?

    The only way I could interpret this is that it just shows a big divide we're currently in. LLMs are a final API product for some, but an unoptimized generative software model with sophisticated-but-opaque algorithms for others. Both are utterly in need of real-world use cases - the product side for the fresh training data, and the business side for insights, integrations and shareholder value.

    Am I all of a sudden the one lacking imagination? Is he just slurping the CEO Kool-Aid and still has his investments in OpenAI? Can we at least agree that we're still dealing with software here?

    [0]: https://www.youtube.com/watch?v=zjkBMFhNj_g

    by whilenot-dev - 1 day ago
  • I spent a lot of time thinking about this recently. Ultimately, English is not a clean, deterministic abstraction layer. This isn't to say that LLMs aren't useful; they can create some great efficiencies.
    by wiremine - 1 day ago
  • This DevOps friction is exactly why I'm building an open-source "Firebase for LLMs." The moment you want to add AI to an app, you're forced to build a backend just to securely proxy API calls—you can't expose LLM API keys client-side. So developers who could previously build entire apps backend-free suddenly need servers, key management, rate limiting, logging, deployment... all just to make a single OpenAI call. Anyone else hit this wall? The gap between "AI-first" and "backend-free" development feels very solvable.
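
    A minimal sketch of the kind of proxy being described, assuming a Flask backend and the OpenAI Python SDK (the route, model name and app structure here are hypothetical):

      # The OpenAI key stays on the server; the client only ever calls /api/chat.
      import os
      from flask import Flask, request, jsonify
      from openai import OpenAI

      app = Flask(__name__)
      client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

      @app.post("/api/chat")
      def chat():
          prompt = request.json.get("prompt", "")
          # A real deployment would add auth, rate limiting and logging here.
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # assumed model name
              messages=[{"role": "user", "content": prompt}],
          )
          return jsonify({"reply": resp.choices[0].message.content})
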
    by mkw5053 - 23 hours ago
  • I think this is a brilliant talk and truly captures the "zeitgeist" of our times. He sees the emergent patterns arising as software creation is changing.

    I am writing a hobby app at the moment and I am thinking about its architecture in a new way now. I am making all my model structures comprehensible so that LLMs can see the inside semantics of my app. I merely provide a human friendly GUI over the top to avoid the linear wall-of-text problem you get when you want to do something complex via a chat interface.

    We need to meet LLMs in the middle ground to leverage the best of our contributions - traditional code, partially autonomous AI, and crafted UI/UX.

    Part of, but not all of, programming is "prompting well". It goes along with understanding the imperative aspects, developing a nose for code smells, and the judgement for good UI/UX.

    I find our current times both scary and exciting.

    by magicloop - 23 hours ago
  • Definitely hit this wall too. The backend just for API proxy feels like a detour when all you want is to ship a quick prototype. Would love to see more tools that make this seamless, especially for solo builders.
    by sockboy - 22 hours ago
  • This got me thinking about something…

    Isn’t an LLM basically a program that is impossible to virus scan and therefore can never be safely given access to any capable APIs?

    For example: I’m a nice guy and spend billions on training LLMs. They’re amazing and free and I hand out the actual models for you all to use however you want. But I’ve trained it very heavily on a specific phrase or UUID or some other activation key being a signal to <do bad things, especially if it has console and maybe internet access>. And one day I can just leak that key into the world. Maybe it’s in spam, or on social media, etc.

    How does the community detect that this exists in the model? Ie. How does the community virus scan the LLM for this behaviour?

    by Waterluvian - 21 hours ago
  • big companies are already laying people off anyway
    by fHr - 21 hours ago
  • The image of a bunch of children in a room gleefully playing with their computers is horror movie type stuff, but because it's in a white room with plants and not their parent's basement with the lights off, it's somehow a wonderful future.

    Karpathy and his peer group are some of the most elitist and anti social people who have ever lived. I wonder how history will remember them.

    by old_man_cato - 20 hours ago
  • https://github.com/screencam/typescript-mcp-server

    I've been working on this project. I built this in about two days, using it to build itself at the tail end of the effort. It's not perfect, but I see the promise in it. It stops the thrashing the LLMs can do when they're looking for types or trying to resolve anything like that.

    by johnwheeler - 20 hours ago
  • I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

        def __main__:
            You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
            Should you be unable to do this, you print an explanation to stderr and exit 1.
    
    (and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
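
    For what it's worth, a rough sketch of how such a "probabilistic program" could actually be wired up today, assuming the OpenAI Python SDK (the model name and prompt wording are illustrative, not a real spec):

        # Illustrative only: the program body is a prompt, the runtime is an LLM call.
        import sys
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def main() -> int:
            expr = sys.argv[1] if len(sys.argv) > 1 else sys.stdin.read()
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name
                messages=[
                    {"role": "system", "content": "You are a calculator. Reply with only the numeric result, or the single word ERROR."},
                    {"role": "user", "content": expr},
                ],
            )
            out = resp.choices[0].message.content.strip()
            if out == "ERROR":
                print("could not evaluate expression", file=sys.stderr)
                return 1
            print(out)
            return 0

        if __name__ == "__main__":
            sys.exit(main())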
    by OJFord - 18 hours ago
  • https://software3.com/index.htm

    Amazing!!!

    by goosebump - 17 hours ago
  • Where are the debugging tools for the so-called "Software 3.0"?
    by ankurdhama - 16 hours ago
  • I can't stop thinking about these agents as Agent Smith, The Architect, etc.
    by taegee - 12 hours ago
  • why are there so many bots posting comments?
    by himanshuy - 7 hours ago
  • I honestly like his perspective on vibe coding. I feel like his original tweet has been misunderstood by the mainstream. (Proof-of-concepts churned out over a weekend will usually die or be mostly rewritten anyway.) For programmers dipping their feet into new areas, I believe it can be useful.

    Though, I do not see it being useful as a "gateway drug" (as he says) for kids learning to code. I have seen that children can understand languages and basic programming concepts, given the right resources and encouragement. If kids in the 80s/early 90s learned BASIC and grew up to become software engineers, then what we have now (Scratch, Python, even JavaScript + something like P5) is perfectly adequate to that task. Vibe coding really just teaches kids how to prompt LLMs properly.

    by kdrvr - 31 minutes ago
