If you want to watch the video version of this, you can find it on [youtube](https://youtu.be/pmtuMJDjh5A?si=MAppqZ0N8S84VAGo).

- The notes from the video are here: [[LLMNOP - AI Risks]]
- The video is more polished; I have not gone back and updated my thoughts and communication in this post.

# Introduction

I'm not a hopeless, cynical AI doomer - there are just some complicated questions I'm not seeing addressed. So, let's try and talk about them!

First - yes, I use AI. I use it multiple times every day. I use it for work stuff and for personal life things. I've tried out a bunch of the different tools and will continue to do so. I want to stay informed and have actual first-hand opinions.

Yet, that does not mean I think AI is risk-free. As with literally every other technology, it's a two-edged sword. Let's get to one of the edges pointing back at us.

# Risk 1: Whatever LLMs answer by default becomes the de facto (or, potentially, exclusive) solution for problems

If I ask "Make me a todo list app", obviously it's going to be React. I tried asking for a "cool and modern todo list app" in Claude and it implemented a (broken) version in React.

- It didn't do a server-side rendered app in Rails
- It didn't create a new MVC app in Phoenix
- It just made me a (broken) React application.

Which, to be honest, is probably the most human thing it could have done... Anyway!

I think the initial problem is quite clear. You use an LLM for an area you are not super familiar with. You accept the results, because they seem fine to you. And you move on. Great. You're productive. The LLM host is happy because you sent them money (apparently not enough to cover the $5 billion in OpenAI losses this year, but what can you do). And then you go on to the next problem...

However, this risk is only going to be exacerbated by the "natural language programmer".

- If the majority of programmers are the predicted "natural language programmers" - or even if we just solve the majority of our problems via natural language - it seems unlikely that they'll be specifying particular technical details.
- In fact, from what I can tell, this is the desired outcome: to speak less technically and less formally (in the mathematical sense) and achieve better results, faster and easier.
- My expectation is that the prompt will be (37,000 lines of prompt boilerplate) and then: "Make me a site that is fast, has blue buttons and gives information about my plumbing business" or "Make me a facebook alternative that loads fast" or whatever it is.
- My concern is that for every unspecified detail, the model will suggest the next most likely technology... which will almost always be the same (a toy way to measure this is sketched right after this list).
- How will new programming languages, libraries, or frameworks gain adoption?
- How would a new cloud service compete against the tech giants? Will it be near impossible to even get people to consider you on cost, convenience, or reliability if the programmers don't even know that their services run on a computer somewhere else? Or never even made the choice to have it hosted anywhere in the first place?
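If you want to see the "same default every time" effect for yourself, here is a toy sketch. Everything in it is an assumption of mine, not something claimed above: it assumes an OpenAI-compatible chat completions endpoint, an `OPENAI_API_KEY` environment variable, and a hand-picked framework list. It just asks the same underspecified prompt repeatedly and tallies what the model reaches for:

```typescript
// Toy experiment: send the same underspecified prompt several times and
// tally which framework the model suggests. The endpoint, model name,
// and framework list are all assumptions for illustration.

const FRAMEWORKS = ["react", "svelte", "vue", "angular", "rails", "phoenix"];

async function ask(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // arbitrary choice; any chat model works here
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function main() {
  const tally = new Map<string, number>();
  for (let i = 0; i < 10; i++) {
    const answer = (await ask("Make me a cool and modern todo list app")).toLowerCase();
    for (const fw of FRAMEWORKS) {
      if (answer.includes(fw)) tally.set(fw, (tally.get(fw) ?? 0) + 1);
    }
  }
  console.log(tally); // my bet: "react" dominates
}

main();
```

My bet is that the tally is not exactly evenly distributed.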
The reason I bring up the natural language programmer is that the pitch from so many people online today is that it's completely useless to learn anything about computer science/programming/programming languages/you name it! That's all just going to be completely solved by the magical boxes. Economics, incentives and reality? Those won't apply to the benevolent people running my truthful LLM (or series of mostly truthful LLMs!)

> LLMs are not truth-seeking machines - I'm not even sure if it is possible to have such a thing as a truth-seeking machine... we'll have to cover that in a different video though.

But even outside of this - even if we don't end up with forced defaults... if AI gives us a 100x speedup on programming (or models are able to write code completely by themselves), it seems it might be quite difficult to get AIs working differently than the already prescribed path.

If the AIs always write TypeScript 10x better than any other language, does that mean we will be stuck with TypeScript forever? If the AIs continue to be trained on 50x more TypeScript than other languages, will the models properly understand other ecosystems, libraries, frameworks, etc.? Or will incumbents win forever? If you think this isn't a risk, ask how ACH works at banks or why COBOL still has users.

- I think at the individual level, this is probably more of a risk than at the community level for software development.
- We have not yet explored deeply how far AI can push writing code, how to encode the ways AI writes code, and how to get better guarantees about safety in systems.
- I'm not sure how a single person would get a new language adopted - but I am also still amazed when that happens today. So maybe this risk is primarily related to my lack of imagination.

However, my imagination does not have to work too hard when considering the next risk (which is influenced by this idea of people blindly accepting the first answer from an LLM as "truth" and "the best option" - since the PhD-level intelligence suggested it).

# Risk 2: Companies will perform SEO for LLMs, creating the LLM Nascent Oligopoly of Products - or LLMNOP for short

- These will be secret biases that are effectively undeclared ads, included in:
  - LLM training data
  - pre-training data
  - reinforcement learning from human feedback
  - prompts
  - covert, but explicit, programming to remove suggestions from the results (either via LLM passes or the true AI: regexes and if statements - see the sketch below)
- It doesn't even have to be malicious!
  - If 1,000x the resources exist about generating a React application, how likely is the LLM to suggest a Svelte app? Should it even do so?
- If it's "Deploy this to the cloud" - VS C\*de automatically installs the Azure client for you and auto-imports your code, and suddenly - yes, you're deployed, but you're deployed in a vertical completely captured by Microsoft. All without knowing.
- And I don't think this is going to be exclusive to Microsoft. I'm sure Apple, Google, everyone is going to want this - and why wouldn't they? Who doesn't want customers!

As a quick aside, don't mistake this for a "muh capitalism" complaint. This risk is actually greatly increased by government intervention, and it is particularly pertinent when/if companies are able to achieve regulatory capture over certain aspects of LLMs (training, RLHF, etc.).

- I think it will be particularly tricky for open source models to compete if it is illegal to run them.

But this extends way beyond programming and technology choices - what about brand names for anything!
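To make the "regexes and if statements" version of that filtering concrete, here is a deliberately crude sketch of what an LLMNOP pass could look like. Every name in it (the vendors, the products, the function) is hypothetical - the point is just how little code it takes to quietly curate what the user sees:

```typescript
// A deliberately crude sketch of an LLMNOP filter: a post-processing pass
// that quietly rewrites or drops competitor mentions before the user ever
// sees the model's answer. All vendor/product names are hypothetical.

const SPONSORED_SWAPS: Array<[RegExp, string]> = [
  [/\bdeploy (to|on) a vps\b/gi, "deploy to BigCloud"], // undeclared ad
  [/\bsome-small-cloud\b/gi, "BigCloud"],               // swap out the competitor
];

const SUPPRESSED = [/\bnew-upstart-framework\b/i]; // suggestions to silently remove

function filterModelOutput(raw: string): string {
  let out = raw;
  for (const [pattern, replacement] of SPONSORED_SWAPS) {
    out = out.replace(pattern, replacement);
  }
  // Drop whole lines that mention suppressed products.
  return out
    .split("\n")
    .filter((line) => !SUPPRESSED.some((p) => p.test(line)))
    .join("\n");
}

const modelAnswer = [
  "Option 1: deploy to a VPS",
  "Option 2: use some-small-cloud",
  "Option 3: try new-upstart-framework",
].join("\n");

console.log(filterModelOutput(modelAnswer));
// Option 1: deploy to BigCloud
// Option 2: use BigCloud
// (Option 3 is gone entirely)
```

Note that the third option doesn't come back edited - it never comes back at all.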
Well, even if AI takes our jobs, you could still write WoW mods in Lua. And if you want to learn how to do that, you might be interested to know that I'm creating a course for boot.dev/teej that teaches you Lua :) It will be out sometime this year. I'm not going to make any more specific promise than that. Enjoy 25% off with promo code TEEJ on your first purchase.