The questions are excellent; the answers are "state of the art", which in this case is not a compliment. Here is a different take on the graph part.
- You have two fundamentally different ways to transfer and curate knowledge: A, storytelling (very human, imprecise), or B, knowledge graph building (hard for a human, as precise as it can be). 👉 J. C. R. Licklider, Libraries of the Future (1965, book).
- STEM knowledge is always B: a graph. When you have a problem in physics, biology, math, medicine, ... it is NOT about how you sing it or what language you use, but about building a precise network of property pages filled with data and linked to each other. The very terms (the labels of the data and links) are also graphs (DSLs). Information systems are graphs, too: in the computer's memory you have flowcharts of the algorithms, and you use the memory to hold the content of those property sheets. 👉 Ivan Sutherland, Sketchpad (YouTube)
- EVERY program is a combination of a DSL set (the "meta" layer of classes, members, functions and their parameters) and a bunch of stories (the code of the functions and procedures); quoting Bob Martin, software is assignment statements, if statements and while loops (see the small sketch after this list). 👉 Future of Programming (YouTube)
- The real problem is that text-based tools make us focus on the storytelling: we only see the big list of features or use cases instead of the DSLs that would let us describe and solve the atomic problems (modularity, KISS, DRY, SRP, ...). We have already solved every possible atomic problem literally millions of times, yet we keep repeating them (by copy-paste or, in "modern" cases, by LLMs 🤦♂) in endless combinations, and that pile grows every day. 👉 Alan Kay, The Power of Simplicity (YouTube)
- Introducing new programming languages that do the same has one effect: it erodes even the "burden" of accumulated human experience and the existing codebase, starting everything over again without solving the fundamental issue.
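To make the graph vs. story split concrete, here is a minimal sketch (the toy domain and every name in it are mine, purely illustrative): the "meta" layer (node types and link types) is stored as ordinary nodes, the property pages point back to those terms, and the "story" is nothing more than assignments, ifs and loops walking the graph.

```python
# Minimal illustrative sketch (a toy example, not a real system): a tiny
# property graph whose schema (the DSL, the "meta" layer) is itself made
# of nodes and links, and whose "story" is just a short traversal.

class Node:
    def __init__(self, **properties):
        self.properties = properties   # the data on this "property page"
        self.links = []                # outgoing edges: (link label, target node)

    def link(self, label, target):
        self.links.append((label, target))
        return self

# Meta layer: the terms themselves are nodes (a small DSL).
star   = Node(kind="NodeType", name="Star")
planet = Node(kind="NodeType", name="Planet")
orbits = Node(kind="LinkType", name="orbits")

# Data layer: property pages filled with data, linked to each other and to their terms.
sun   = Node(name="Sun",   mass_kg=1.989e30).link("instance of", star)
earth = Node(name="Earth", mass_kg=5.972e24).link("instance of", planet)
earth.link("orbits", sun)

# A "story": assignment statements, if statements and loops over the graph.
def tell(node):
    for label, target in node.links:
        if label == "orbits":
            print(node.properties["name"], "orbits", target.properties["name"])

tell(earth)   # -> Earth orbits Sun
```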
Problem: Information systems are graphs, and storytelling is not the right way to interact with graphs. The much-blamed imprecision is a manageable "human error" in the case of graphs, but an inevitable, fatal blocker in text-based programming. Teaser 👉 Bret Victor, The Future of Programming (YouTube); hard-core answer 👉 Douglas Engelbart, Augmenting Human Intellect (report)
Solution: STEM languages are graphs (DSLs). THE future programming language is the DSL of information systems, the same one we have in physics, mathematics, biology, ...
Question: has anyone read this far? 😉
---[ discussion under a comment ]---
CallousCoder
Your behaviour is like the old horse-and-cart people against the automobile. It's nonsensical, the technology is here to stay. So either adopt it or you'll go extinct. You know that good developers are terrible managers, right? ;) Also, I don't get what the resistance is. Whether you ask a junior or mid-level developer to implement something or you ask an LLM, it's no different, other than that the LLM just does it and doesn't nag, especially after the 2nd or 3rd iteration ;)
“bro” is 52 years old and didn’t take philosophy but EE and CS.
lkedves
Age is just a number (happens to be the same...). Check out the Mother of All Demos; that was real technology behind the Apollo program, while this chatbot AI is just another stock-market bubble. Side note: before the previous AI winter, we won Comdex '99 with a data mining / AI tool. Back then people could still read the first paragraph of Turing's article, the definition of "the test", instead of trying to implement cartoon dreams... (including a Nobel prize-winning psychologist)
But you got the point with "Modern software is a disease!" LLMs learn from their sources, kids copy-paste the output into real software, and the LLMs learn from that again. Quantity goes up, quality goes down. After the first flops, LLM companies use masses of poorly paid human workers to catch stupid mistakes in everyday tasks. Regardless of whether we accept this as a solution, who will censor the generated code?
Dead end.
cyberfunk3793
AI is obviously going to fix data races and buffer overflows and every other type of bug you can think of. You don't understand what is coming if you think it's just hype. I don't know if it will be 5 years or 50, but at some point humans will only be describing (in human language) what they want the program to do and reviewing the code that is produced. Currently AI is already extremely helpful but still makes a lot of mistakes. These mistakes will become more and more rare, and the ability of AI to program will far exceed humans', just like computers beat us at chess.
TCMx3
Chess engines did not need AI to curbstomp us at chess. Non-ML engines with simple endgame tablebases were already some 700 points stronger than the best humans. Sounds like you don't actually know very much about chess engines lmao.
CallousCoder
btw playing chess with an LLM is a hilarious experience. If it loses, it brings back pieces from the dead or just "portals" them into safety.
lkedves
[retry, I promise I'll leave if this disappears again]
You may have missed it, so I repeat: we won Comdex '99 with a data mining / AI tool (and there is nothing new in this field except the exponential growth of the hardware). Since then I have worked on refining knowledge graph management in information systems in every single project I touched, often delivering "impossible missions". I work together with the machine because I follow a different expansion of AI, Augmenting Intellect (Douglas Engelbart), on systems that are of course smarter than me (and have been generating part of their own code from these graphs for years). Right now I am at the national AI lab, in a university applied research project that I will not try to explain here.
You can find some of my conclusions, with references to sources, in my comment added under this video (11th May).
I know the pioneers who predicted and warned about what we have today (recommended reading: Tools for Thought by Howard Rheingold; you can find the whole book online). One of them is Alan Turing, who asked people not to call the UTM a "thinking machine" and wrote an article in Mind: A Quarterly Review of Psychology and Philosophy about the dangers of making such claims without proper definitions. The poor man never thought that within a few decades "IT folks" would treat this as an aim. Or Joseph Weizenbaum, the guy who wrote the first chatbot, ELIZA.
I know why your dream will never happen, because I know that informatics (in its original meaning, not the business model Gates invented) argued against this fairy tale. LLMs merely try to prove the old infinite-monkey thought experiment: that infinite monkeys with infinite time will surely type out the whole of Hamlet. The problem is that we don't have infinite time and resources, and the goal is not to repeat Hamlet but to write the next one. Those who initiated informatics made this clear. Start with Vannevar Bush: As We May Think (1945).
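A back-of-the-envelope illustration of the "no infinite time" part (the snippet and its numbers are mine, only for scale):

```python
# Rough illustration with my own numbers: the expected number of blind random
# tries needed just to reproduce one famous line, never mind the whole play.
alphabet = 27                                     # 26 letters plus space, no punctuation
line = "to be or not to be that is the question"  # 39 characters
attempts = alphabet ** len(line)                  # expected order of magnitude of tries
print(f"~{attempts:.1e} attempts")                # roughly 7e55
# For comparison, the universe is only about 4e17 seconds old.
```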
@TCMx3, @CallousCoder - thanks for your answers... 🙏 Another excellent example: in chess, you have absolute rules.
In life, we know that any set of laws we can invent is incomplete (Gödel's incompleteness theorems), and thinking means improving the rules while solving problems and taking responsibility for all the errors. The ultimate example is the Apollo program with Engelbart's NLS in the background; that's how THEY went to the Moon. We go to the plaza to watch the next Marvel story in 4D, now with the help of genAI. As for predicting the next 50 years, look up "Charly Gordon Algernon 1968" here on YouTube.
---[ This answer "disappeared" for the second time so I left the place ]---