- Are there some people here on HN who believe in AGI "soonish"? by bestouff - 19 hours ago
- > "This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI."by A_D_E_P_T - 19 hours ago
The central claim here is illogical.
The way I see it, if you believe that AGI is imminent, and if your personal efforts are not entirely crucial to bringing AGI about (just about all engineers are in this category), and if you believe that AGI will obviate most forms of computer-related work, your best move is to do whatever is most profitable in the near-term.
If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.
Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.
- Maybe I'm too jaded; I expect all this nonsense. It's human beings doing all this, after all. We ain't the most mature crowd... by bsenftner - 19 hours ago
- Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes: dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole - all of it instantaneous, for security purposes, across every sense, constantly. We have no technology that approaches that. by bsenftner - 19 hours ago
- Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow). by empiko - 19 hours ago
- Are we finally realizing that the term "AGI" has not only been hijacked into meaninglessness, but that achieving it has always been nothing but a complete scam, as I was saying before? [0] by rvz - 19 hours ago
If you were at a "pioneering" AI lab that claims to be in the lead in achieving "AGI", why move to another lab that is behind, other than for the $10M a year?
Snap out of the "AGI" BS.
- I never trusted them from the start. I remember the hype that came out of Sun when J2EE/EJBs appeared: their hype documents said the future of programming was buying EJBs from vendors and wiring them together. AI is of course a much bigger hype machine, with massive investments that need to be justified somehow. AI is a useful tool (sometimes), but not a revolution. ML is a much more useful tool. AGI is a pipe-dream fantasy pushed to make it seem like AI will change everything, as if AI were akin to the discovery of how to make fire. by coldcode - 19 hours ago
- I love how much the proponents of this tech are starting to sound like the opponents. by conartist6 - 19 hours ago
What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...
- I'm reading the "AI" industry as a totally different bet: not so much an "AGI is coming" bet by many companies, but a "climate-change collapse is coming, and we want to continue to be in business even if our workers stay at home, flee, or die, the infrastructure partially collapses, and our central office burns to the ground" bet. In that regard, even the "AI" we have today makes total sense as an insurance policy. by PicassoCTs - 18 hours ago
- > Right before “making tons of money to redistribute to all of humanity through AGI,” there’s another step, which is making tons of money. by davidcbc - 18 hours ago
I've got some bad news for the author if they think AGI will be used to benefit all of humanity instead of the handful of billionaires that will control it.
- AGI might be a technological breakthrough, but what would be the business case for it? Is there one? by Findecanor - 18 hours ago
So far I have only seen it thrown around to create hype.
- Honestly, this article sounds like someone is unhappy that AI isn’t being deployed/developed “the way I feel it should be done”. by lherron - 17 hours ago
Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.
In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.
- > This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.” by computerphage - 16 hours ago
This makes no sense to me at all. Is it a war metaphor? A race? Why is there no reason to jump ship? Doesn't it make sense to try to get on the fastest ship? Doesn't it make sense to diversify your stock portfolio if you have doubts?
- I keep seeing this charge that AI companies have an “Uber problem”, meaning the business is heavily subsidized by VC. Is there any analysis that has been done that explains how this breaks down (training vs inference, and what current pricing is)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we? by JunkDNA - 16 hours ago
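For what it's worth, the shape of that analysis is simple even though none of the inputs are public. A back-of-envelope sketch in Python, where every number is a made-up placeholder and not a figure from any AI company:

    # Hypothetical break-even subscription price. ALL numbers are placeholders.
    monthly_tokens = 2_000_000        # assumed tokens a heavy user consumes per month
    inference_cost_per_1m = 5.00      # assumed blended GPU cost, USD per 1M tokens
    training_run_cost = 100_000_000   # assumed cost of one frontier training run, USD
    amortize_months = 24              # assumed useful lifetime of the model
    paying_users = 10_000_000         # assumed subscribers sharing the training bill

    inference = monthly_tokens / 1_000_000 * inference_cost_per_1m
    training = training_run_cost / amortize_months / paying_users
    print(f"Implied break-even price: ${inference + training:.2f}/user/month")

Swap in different assumptions and the answer moves by an order of magnitude, which is presumably why the "Uber problem" charge is so hard to either confirm or refute.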
- No one authentically believes LLMs, with whatever go-faster stripes, are a path to AGI, do they? by 4ndrewl - 16 hours ago
- Very funny to re-title this to something less critical. by almostdeadguy - 16 hours ago
- Point 1. could just as easily be explained by all of the labs being very close, and wanting to jump ship to one that is closer, or that gives you a better deal. by NickNaraghi - 15 hours ago
- > This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out. by hamburga - 15 hours ago
My argument is that it’s our job as consumers to align the AIs to our values (which are not all the same) via selection pressure: https://muldoon.cloud/2025/05/22/alignment.html
- > The AI industry oscillates between fear-mongering and utopianism. In that dichotomy is hidden a subtle manipulation. […] They don’t realize that panic doesn’t prepare society but paralyzes it instead, or that optimism doesn’t reassure people but feels like gaslighting. Worst of all, both messages serve the same function: to justify accelerating AI deployment—either for safety reasons or for capability reasons. by joshdavham - 14 hours ago
This is a great point, and also something I’ve become a bit cynical about these last couple of months. I think the very extreme and “bipolar” messaging around AI might be a bit more dishonest than I originally (perhaps naively?) thought.
- > If they truly believed we’re at most five years from world-transforming AI, they wouldn’t be switching jobs, no matter how large the pay bump (they’re already affluent). by ninetyninenine - 14 hours ago
What ridiculous logic is this? To base the entire premise that AGI is not imminent on job switching? How about basing it on something more concrete.
How do people come up with such shaky foundations to support their conclusions? It's obvious: they come up with the conclusion first, then they find whatever they can to support it. Unfortunately, if dubious logic is all that's available, then that's what they will say.
- The author sounds like some generic knock-off version of Gary Marcus. And the thing we least need in this world is another Gary Marcus. by hexage1814 - 14 hours ago
- The primary use case for AI-in-the-box is a superhuman CEO that sees everything and makes no mistakes. As an investor, you can be sure that your money is multiplying at the highest rate possible. However, as a self-serving investor, you also want your CEO to side-step any laws and ethics that stand in your way, unless ignoring those laws will bring more trouble than profit. All that while maintaining the facade of a selfless philanthropist for the public. For a reasonable price, your AI CEO will be fine-tuned to serve your goals perfectly. by akomtu - 12 hours ago
Remember that fine-tuning a well-behaved AI to do something as simple as writing malware in C++ makes widespread changes in the AI and turns it into a monstrosity. There was an HN post about this recently: fine-tuning an aligned model produces broadly misaligned results. So what do you think will happen when our AI CEO gets fine-tuned to prioritize shareholder interests over public interests?
- My question is this: once you achieve AGI, what moat do you have, purely on the scientific side? Other than making the AGI even more intelligent. by TrackerFF - 12 hours ago
I see a lot of talk that the first company that achieves AGI, will also achieve market dominance. All other players will crumble. But surely when someone achieves AGI, their competitors will in all likelihood be following closely after. And once those achieve AGI, academia will follow.
Point is, at some point AGI itself will become available to everyone. The only things that will be out of reach for most are compute - and probably other expensive things on the infrastructure side.
Current AI funding seems to revolve around some sort of winner-take-all scenario. Just keep throwing incredible amounts of money at it, and hope that you've picked the winner. I'm just wondering what the outcome will be if this thesis turns out wrong.
- "A disturbing amount of effort goes into making AI tools engaging rather than useful or productive."by Animats - 11 hours ago
Right. It worked for social media monetization.
"... hallucinations ..."
The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own. The solution the AI industry has settled on is to make hallucinations an externality, like pollution: they're fine as long as someone else pays for the mistakes.
LLMs have a similar problem to Level 2-3 self-driving cars. They sort of do the right thing, but a human has to be poised to quickly take over at all times. It took Waymo a decade to get over that hump and reach level 4, but they did it.
- Thanks for the read. I think it's a highly relevant article, especially around the moral issues of making addictive products. As a normal person in Swedish society, I feel social media, shorts and reels in particular, has an addictive grip on many in my vicinity. by lightbulbish - 11 hours ago
And as a developer I can see similar patterns with AI prompts: prompt, wait, win/lose, re-prompt. It is alluring, and it certainly feels... rewarding when you get it right.
1) I have been curious as to why so few people in Silicon Valley seem to be concerned with, or even talking about, the good of the products and of the companies they join. Could someone in the industry enlighten me: what are the conversations in SV around this issue? Do people care if they make an addictive product which seems to impact people's lives negatively? Do the VCs?
2) I appreciate the author's efforts in creating conversation around this. What are ways one could try to help those efforts? While I have no online following, I feel rather doomy and gloomy about AI pushing more addictive usage patterns out into the world, and would like to help if there is something suitable I could do.
- I can't speak intelligently about how close AGI really is (I do not believe it is close, but I guess someone, somehow, somewhere might come up with a brilliant idea that nobody has thought of so far, and voila). by drillsteps5 - 9 hours ago
However, I'm flabbergasted by the lack of attention to so-called "hallucinations" (which is a misleading, I mean marketing, term; we should be talking about errors or inaccuracies).
The problem is that we don't really know why LLMs work. I mean, you can run the inference and apply the formula and get output from the given input, but you can't "explain" why the LLM produced phrase A as its output instead of B, C, or N. There are just too many parameters and computations to go through, and the very concept of "explaining" or "understanding" might not even apply here.
And if we can't understand how this thing works, we can't understand why it doesn't work properly (produces wrong output), and we also don't know how to fix it.
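To make that concrete, here is a minimal sketch of what "running the inference and applying the formula" amounts to, using GPT-2 and the Hugging Face transformers library as a small stand-in for a frontier model (the model choice and prompt are purely illustrative):

    # Minimal inference sketch: GPT-2 as a tiny stand-in for a frontier LLM.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # The model's entire "answer" is this vector of ~50k numbers, one per token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}: {p:.3f}")

Every number above is exact and reproducible, yet nothing in them says why one continuation beat another; that "because" is exactly what's missing.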
And instead of talking about it and trying to find a solution, everybody has moved on to agents, which are basically LLMs empowered to perform complex actions IRL.
How does this make any sense to anybody? I feel like I'm crazy or missing something important.
I get it, a lot of people are making a lot of money and a lot of promises are being made. But this is an absolutely fundamental issue, one that is not that difficult to understand for anybody with a working brain, and yet I am really not seeing any attention paid to it whatsoever.
- I can at least understand "I am going to a different AGI company because I think they are on a better track", but I cannot grasp "I am leaving this AGI company to work on some narrow AI application, but I still totally believe AGI is right around the corner". by Imnimo - 9 hours ago
- AI is the new politics. by DavidPiper - 5 hours ago
It's surprising to me the number of people I consider smart and deep original thinkers who are now parroting lines and ideas (almost word-for-word) from folks like Andrej Karpathy and Sam Altman, etc.
But, of course, "Show me the incentive and I will show you the outcome" never stops being relevant.
- > A disturbing amount of effort goes into making AI tools engaging rather than useful or productive. I don't think this is an intentional design decision. by insane_dreamer - 2 hours ago
I think it absolutely is intentional. The overt flattery of LLMs is designed to keep you coming back because everyone wants to hear how smart they are.
- Since no one has any idea how to achieve AGI or what the path there looks like, I'm skeptical of any claims about how soon we might arrive. by insane_dreamer - 2 hours ago