If you have gone far enough, the solution is behind your back... :-)
Respektu Tempon. Respect Time / It is Time you should respect.
International readers, please use the English tag to get a first impression, thank you.
The questions are excellent, the answers are "state of the art"; the latter is not a compliment in this case. Here is a different take on the graph part.
You have two fundamentally different ways to transfer and curate knowledge: A, storytelling (very human, imprecise), or B, knowledge graph building (hard for a human, as precise as can be). 👉 JCR Licklider, Libraries of the Future (1965, book).
STEM knowledge is always B: a graph. When you have a problem in physics, biology, math, medicine, ... it is NOT about how you sing it or what language you use, but about building a precise network of property pages filled with data and linked to each other. The very terms (the labels of data and links) are also graphs (DSLs). Information systems are graphs, too. In the computer's memory, you have flowcharts of the algorithms, and you use the memory to hold the content of those property sheets. 👉 Ivan Sutherland, Sketchpad (YouTube)
EVERY program is a combination of a DSL set (the "meta" layer of classes, members, functions and their parameters) and a bunch of stories (the code of the functions and procedures). Quoting Bob Martin: software is assignment statements, if statements and while loops. 👉 Future of Programming (YouTube)
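A minimal sketch of that split, purely my own illustration (the node types, the "feeds" link and the propagate function are invented for this example): the META dictionary plays the role of the DSL / "meta" layer, while propagate is the "story", nothing but assignments, an if statement and a while loop.

```python
# Meta layer (a tiny DSL): what the domain terms are and how they may link.
META = {
    "Sensor":  {"properties": ["id", "unit", "value"], "links": {"feeds": "Display"}},
    "Display": {"properties": ["id", "shown_value"],   "links": {}},
}

# Story: one concrete procedure written against that vocabulary.
def propagate(nodes, links):
    """Copy every sensor value to the display it feeds."""
    i = 0
    while i < len(links):                 # while loop
        src, rel, dst = links[i]          # assignment
        if rel == "feeds":                # if statement
            nodes[dst]["shown_value"] = nodes[src]["value"]
        i += 1

nodes = {
    "t1": {"type": "Sensor", "unit": "C", "value": 21.5},
    "d1": {"type": "Display", "shown_value": None},
}
links = [("t1", "feeds", "d1")]
propagate(nodes, links)
print(nodes["d1"]["shown_value"])         # 21.5
```

The point of the sketch: the meta layer is a small graph you could render, query or edit with tools; the story part is the only place where the familiar textual code remains.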
The real problem is that text-based tools make us focus on the storytelling: we only see the big list of features or use cases instead of the DSLs that would allow us to describe and solve the atomic problems (modularity, KISS, DRY, SRP, ...). We have already solved every possible atomic problem literally millions of times, yet we keep repeating them (copy-paste or, in the "modern" case, LLMs 🤦♂) in endless combinations, and that pile grows every day. 👉 Alan Kay's Power of Simplicity (YouTube)
Introducing new programming languages that do the same has one effect: it erodes even the "burden" of accumulated human experience and the existing codebase, starting it all over again without solving the fundamental issue.
Problem: Information systems are graphs. Storytelling is not the right way to interact with graphs. The blamed imprecision is a manageable "human error" in the case of graphs but an inevitable, fatal blocker in text-based programming. Teaser 👉 Bret Victor, The Future of Programming (YouTube); hard-core answer 👉 Douglas Engelbart, Augmenting Human Intellect (report)
Solution: STEM languages are graphs (DSLs). THE future programming language is the DSL of information systems, the same as we have in physics, mathematics, biology, ...
CallousCoder Your behaviour is like that of the old horse-and-cart people against the automobile. It's nonsensical; the technology is here to stay. So either adopt it or you'll go extinct. You know that good developers are terrible managers, right? ;) Also, I don't get what the resistance is. Whether you ask a junior or medior to implement something or you ask an LLM, it's no different, other than that the LLM just does it and doesn't nag, especially after the 2nd or 3rd iteration ;)
CallousCoder “bro” is 52 years old and didn’t take philosophy but EE and CS.
lkedves Age is just a number (happens to be the same...). Check the Mother of All Demos; that was real technology behind the Apollo program, while this chatbot AI is just another stock market bubble. Side note: before the previous AI winter, we won Comdex '99 with a data mining / AI tool. Back then people could read the first paragraph of Turing's article, the definition of "the test", instead of trying to implement cartoon dreams... (including a Nobel prize-winning psychologist)
But you got the point with "Modern software is a disease!" LLMs learn from their sources, kids copy-paste the output into real software, and the LLMs learn from that again. Quantity goes up, quality goes down. After the first flops, LLM companies use human slaves to avoid stupid mistakes in everyday tasks. Regardless of whether we accept this as a solution, who will censor the generated code?
Dead end.
cyberfunk3793 AI is obviously going to fix data races and buffer overflows and every other type of bug you can think of. You don't understand what is coming if you think it's just hype. I don't know if it will be 5 years or 50, but at some point humans will only be describing (in human language) what they want the program to do and reviewing the code that is produced. Currently AI is already extremely helpful but still makes a lot of mistakes. These mistakes will get more and more rare, and the ability of AI to program will far exceed humans, just like computers beat us at chess.
TCMx3 Chess engines did not need AI to curbstomp us at chess. Non-ML engines with simple tablebases for endgames were already some 700 points stronger than the best humans. Sounds like you don't actually know very much about chess engines lmao.
CallousCoder btw playing chess with an LLM is a hilarious experience. If it loses, it brings back pieces from the dead or just “portals” them into safety.
lkedves [retry, I promise I'll leave if this disappears again]
You may have missed it, so I repeat: we won Comdex '99 with a data mining / AI tool (and there is nothing new in this field except the exponential growth of the hardware). Since then I have worked on refining knowledge graph management in information systems in every single project I touched, often delivering "impossible missions". I work together with the machine because I follow a different resolution of AI: Augmenting Intellect (Douglas Engelbart), on systems that are of course smarter than me (and have generated part of their own code from these graphs for years). Right now I am at the national AI lab, in a university applied research project that I will not try to explain here. You find some of my conclusions, with references to sources, in my comment added to this video (11th May).
I know the pioneers who predicted and warned about what we have today (recommended reading: Tools for Thought by Howard Rheingold; you can find the whole book online). One of them is Alan Turing, who asked people not to call the UTM a "thinking machine" and wrote an article in Mind: A Quarterly Review of Psychology and Philosophy about the dangers of making such claims without proper definitions. The poor man never thought that in a few decades "IT folks" would think this was an aim. Or Joseph Weizenbaum, the guy who wrote the first chatbot, ELIZA.
I know why your dream will never happen, because I know that informatics (in its original meaning, not the business model Gates invented) stood against this fairy tale. LLMs just try to prove that old story from statistical mechanics, that infinite monkeys with infinite time will surely type the whole of Hamlet. The problem is that we don't have infinite time and resources, and the goal is not repeating Hamlet but writing the next one. Those who initiated informatics made this clear. Start with Vannevar Bush: As We May Think, 1945.
@TCMx3 , @CallousCoder - thanks for your answers... 🙏 Another excellent example - in chess, you have absolute rules.
In life, we know that all the laws we can invent are wrong (incompleteness theorem), and thinking means improving the rules while solving problems and taking responsibility for all errors. The ultimate example is the Apollo program with Engelbart's NLS in the background; that's how THEY went to the Moon. We go to the plaza to watch the next Marvel story in 4D, now with the help of genAI. As for predicting the next 50 years, look up "Charly Gordon Algernon 1968" here on YouTube.
---[ This answer "disappeared" for the second time so I left the place ]---
This looks like a solid overview of the current "state of the art". What I don't see is the background, 𝐭𝐡𝐞 𝐫𝐞𝐚𝐥 𝐠𝐢𝐚𝐧𝐭𝐬 𝐨𝐧 𝐰𝐡𝐨𝐬𝐞 𝐬𝐡𝐨𝐮𝐥𝐝𝐞𝐫𝐬 𝐰𝐞 𝐚𝐥𝐥 𝐚𝐫𝐞 𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 (and were afraid to look down so now we fall like a stone, just as predicted -> https://youtu.be/KZqsWGtdqiA?t=102 ).
Now, please stop reading and remember when you asked this, not just lightly but with the strange mix of real anger and shame.
...
That was the last time you learned something really important, and this is the only way to it. It happened to me yesterday. At age 52, I take this as very positive feedback: I can still (with great difficulty, but still) lower my ego and learn. Here is the story.
I have been repeating for years in every context that JCR Licklider separated transferrable and non-transferrable knowledge, and a root cause of today's mess in IT (and consequently, everywhere) is the fact that we forgot this. Banging my chest like a gorilla, like here...
But yesterday, as my young colleague was preparing a proper scientific publication, we started looking for the exact reference. To my greatest surprise, I did not find it. In desperation, I started reading Libraries of the Future again, and realized (thankfully and ironically, on page 2!) that...
Licklider never wrote that.
Here is the actual quote:
We delimited the scope of the study, almost at the outset, to functions, classes of information, and domains of knowledge in which the items of basic interest are not the print or paper, and not the words and sentences themselves —but the facts, concepts, principles, and ideas that lie behind the visible and tangible aspects of documents. The criterion question for the delimitation was: "Can it be rephrased without significant loss?" Thus we delimited the scope to include only "transformable information." Works of art are clearly beyond that scope, for they suffer even from reproduction. Works of literature are beyond it also, though not as far. Within the scope lie secondary parts of art and literature, most of history, medicine, and law, and almost all of science, technology, and the records of business and government.
He talks about "transformable information", not "transferrable knowledge".
What happened? Had I forgotten to read???
No, but I was not able to read it at that moment. This paragraph held a key to a question of computerized knowledge management that I had struggled with for decades, literally. When it hit me, my mind was blown immediately and started restructuring itself. I followed, remembered, and kept quoting my own revelation instead of the text that I thought I was reading.
But why?
Knowledge in our minds is always a network: attributes of, and relations between, "things". To store or transfer our knowledge of a topic, we "export" the related part of this network into a presentation: text, figures, pictures, videos. Other people then try to integrate this content with their existing knowledge. Here comes the trick: for those who can do this without changing anything in their minds, this was not "information", because information is only the part that you did not know and could not figure out from your existing knowledge.
The first time, I could not integrate Licklider's original message with my existing knowledge; it only triggered a change that took a long time. Now, when I revisited this paragraph, it was new again, but this time I could actually read it and integrate it with my current knowledge. Fun fact: the word "respect" does not mean "obey" or "accept" but re-specto: to examine again.
Real information is like good chilli: it burns twice.
So, how do I read the message now?
This is a simple way to tell the difference between transformable and non-transformable information. I quoted the rest correctly: informatics (the "libraries of the future") should work only with transformable information.
Transformable means you can say it in hundreds of ways and the meaning will stay the same. You focus on the knowledge graph in your head and try to build exactly the same one in the heads of your audience: a physical phenomenon or a medical treatment.
Non-transformable information focuses on the message itself and the feelings it creates (not less important, but totally different). With a different tone, wording, or face, the message and its effect change significantly.
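A tiny, nerdy illustration of the point (my own example, not from Licklider): the same graph edge can be "exported" in many sentences without loss.

```python
# One property/link in a knowledge graph, and three phrasings of it.
fact = ("aspirin", "inhibits", "COX-1")

phrasings = [
    "Aspirin inhibits the COX-1 enzyme.",
    "The COX-1 enzyme is inhibited by aspirin.",
    "COX-1 activity is blocked by aspirin.",
]

# A reader who already holds this edge gains no information from any of the
# sentences; a reader who lacks it gains exactly this edge from all three.
for sentence in phrasings:
    print(sentence, "->", fact)
```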
A less nerdy example
I think 99% of modern pop music is not even information: it repeats the same message about a boy, a girl, love, hate, etc. that the audience is already familiar with. (hashtag metoo?)
But The Sound of Silence is a perfect example of non-transformable information: I already knew the original song, but this presentation by Disturbed delivered the message (which happens to be closely related to the topic of this post).
I have been saying for years and years that more business professionals and liberal arts majors should be paying attention to artificial intelligence. Let me ask a question. When, or if, the tools that these professional computer scientists create go terribly wrong, should the computer scientists be held accountable in any way? If someone relies on these tools based on something some marketing campaign for artificial intelligence proclaimed, who will be held responsible? Buyer beware?
There is one similarly dangerous aspect of this question: the assumption that there are no "IT people" who are worried about this situation (and, with the necessary formal education, knowledge and experience, much more so), and that therefore outsiders should enforce discipline on us.
Listen to Bob Martin not only pointing out the core issues but giving an explanation and a possible cure for them as well. The problem is that this is too technical for the outsiders and absolutely not popular with the vast majority of self-proclaimed IT people who happened to get old and established without ever receiving a proper education. So they teach the next generation their cargo cults, blockchain or mainstream AI being the newest ones. [edit: ouch, forgot about the IT cult leaders who first get insanely rich, then start "changing the world for the better", making fame along the way and attracting followers to continue their "heritage"... 🤦♂️ ] https://youtu.be/ecIWPzGEbFc?t=3057
I see nothing special in IT. We live in the predicted global Idiocracy and IT is not immune to it.
Yes, you are "talking about" it, while I can list the goals and names of the true IT pioneers, the best minds of the planet. They knew that any technology is exactly as dangerous as it is beneficial. The difference is that other technologies change what you can DO in the real world; informatics changes what you SEE and THINK of it! Today we treat their warnings as a damned bucket list, and of course it is ignored as a CS PhD research topic. https://bit.ly/montru_ScienceWrong
The roots were cut when IT became a for-profit venture funded by general business (M$) and rich daydreamers (Apple). I think you will like the ultimate arts person, Neil Postman, trying to educate Apple ("think different" 🤦♂️) folks in 1993... https://youtu.be/QqxgCoHv_aE
This is all interesting. One can compare this to things like the creation of "high fructose corn syrup" and its effect on the food industry and people's health, mining techniques that destroy the environment, the way healthcare is practiced in the United States, the way the pharmaceutical industry works to get people to pay for pills for the rest of their lives.
Only if you ignore the other side of the coin. Following your analogy: statistically speaking, the goods you find in a pharmacy are either useless or outright dangerous, even lethal; yet we need pharmacies and have been cured by the drugs they sell.
How come?
Although a pharmacy looks like a shop, it MUST NOT give you whatever you ask for, only the drugs your doctor prescribed after a careful examination, regardless of the money you offer. Theoretically... 🙁 But today we try to operate the pharmacy just like a bakery or a candy store: we want more profit, so we give you whatever you ask for, even create marketing campaigns, etc. (like the rest of the healthcare system, btw.)
So, do you "rightfully" blame the pharmacy for poisoning and killing people?
Yes AND no. But the solution is not that "worried, responsible outsiders" flock into the pharmacy and try to regulate it by their personal experiences or the color of the boxes. Instead, they should support pharmacists in returning to their role and rebuilding the counter between them and the customers. And in the long run, realign the "healthcare system" with the meaning of that word...
Now, replace "health" with "knowledge" and you get informatics.
Since the 20th century, mankind has been a planetary species: science, communication, manufacturing, wars. Thinkers knew that civilization is not a thing but an often unpleasant process of making a peaceful, educated, cooperative homo "sapiens" from each "erectus" kid. The new power needs a "global brain", a transparent cooperation of "knowledge workers", to control it.
They did create an information system that organized 400,000 members around one impossible, objective goal: the Apollo program. An icon is Douglas Engelbart. Introduction (1995): https://youtu.be/O77mweZ8-RQ Eulogy (2013): https://youtu.be/yMjPqr1s-cg
However, the world population was (and is) not ready. People prefer separating "them" from "us", hate the hardship of learning, and choose the cheap illusion of knowledge by repeating hollow cliches. Add the dream of becoming rich and famous, let them use the infrastructure created above, and you get the current Idiocracy. An icon is Elon Musk. Prediction (1959): https://youtu.be/KZqsWGtdqiA?t=101
As a bridge person between accounting and IT, you can do more.
- BE AWARE that 1945-1972 was the golden age and Douglas Engelbart represents that "state of the art".
- DEMAND that anyone claiming to be an IT person demonstrate the same moral and professional attitude.
- DON'T ACCEPT less from "us".
"Building on his shoulders" is another thing.
Here is his analysis (1962) behind the Mother of All Demos. It has one key paragraph ignored even by his followers. https://bit.ly/Engelbart_AI
It relates to Ted Nelson's Xanadu and ZigZag (a document and graph DB vision). Combined with Chomsky's research, it shows a gap in the proof of Gödel's incompleteness theorem. That is the key to Turing's true challenge: define "machine" and "thinking". The von Neumann architecture CAN handle that, while the Harvard architecture is a dead-end street. Conclusion: informatics is the necessary and sufficient doctrine of AGI as Augmenting Global Intellect; everything else is garbage.
This paragraph costs a lifetime and is worth it.
Meanwhile, our civilization is literally committing suicide and you are right: mainstream IT is part of the problem. About the necessary paradigm shift, here is another message from 1973: https://youtu.be/WjR6nHhc6Rg
In my view, the question is not whether someone believes in the future, but whether they act for it or against it to the best of their abilities, and whether they understand at all what they are doing and talking about. That is practically impossible if they not only don't understand the past but don't even know it. ("Standing on the shoulders of giants", even empty chatter carries farther...)
The past not only marks which of the 10 questions are the essential ones, it also answers them. In my experience, however, these lie outside the comfort zone not only of the general public but also of doctoral research in informatics (streetlight effect).
The Apple Vision Pro is mind blowing in many ways and signals an important inflection point in the industry. But there is also a lack of clarity in how this all comes together in devices that we'll want to take out into the world and use on a daily basis. I call it the "messy middle." Camera/screen based MR/AR devices are great ways to preview the future, to test and learn, and take us toward the future devices that will be a part of our daily lives in a big way. My plea to the industry: let's not lose sight of the ultimate goal, devices that can connect us to the real world and the people around us and make our experience as human beings out in the world richer and better. I wrote a bit about how we view the future of AR here: https://lnkd.in/gquivyQn.
I see a philosophical difference manifested in AR hardware. Can you see through the device, or does the middle-man block your eyes with its screens and transfer the view from its cameras? I think I understand why Apple joined the heavyweight class: eventually it will win there with its experience, momentum and capacity. But is that the right way? Should "augment" really mean separate, remix and project? I don't think so. A proper 3D augmentation over a directly visible environment is the future I would vote for. I don't want anyone to "immerse" themselves in an artificial world, deal with motion sickness, or bump into objects on a software glitch. Rather, let them see the real world but spice it with a modest bubble of additional knowledge.
I don't think Apple is saying "This" is how AR should be done. They compiled a tech stack that was fit for purpose for a strategy and added in pass through AR because they could. It's like saying GM shouldn't make passenger vehicles because we need pick up trucks. Different devices for different jobs. In terms of the AVP, it is a reasonable solution using the critical mass of the tech available today. When see-through optical has enough tech in place to build enough useful features for a class of usage, then maybe they will play there also.
AVP/Apple: They desperately want a “new iPhone moment”. This is not a weird connector, a bad keyboard or a fanless machine that cooks itself. They will not let this go easily.
AVP itself: this device has no “job”; it is a general consumer device. We remember how mobile phones moved from socially rejected, awkward slabs to a critical part of our individual and social life, unconsciously redefining “presence”.
AVP-like job: the head mounted display in fighter planes. The key reference frame is the plane and its sensors, the HMD must know its position relative to the plane. Not a random street.
AR in general: Damocles (1968) appeared right after Sketchpad (1963). Visual computing and AR have been here from day one of informatics, but with no "real job". I happen to have one: my system manages all data in a dynamic semantic graph (now testing on the full SEC EDGAR export on my laptop 🙂 ).
Another use case: the AR glasses are a dumb, see-through screen with motion-capture dots. People go into a conference room with lots of cameras. The 3D interactive hologram in the middle is projected for all participants. Cheap, safe, and it can be done today. Only the profit margin is low.
I actually had a bit of a hard time following all of your thoughts, but context can be hard in this minimal channel. I enjoy exploring new perspectives but I can't comment on much. 'They desperately want a “new iPhone moment”' I agree with that. I don't think this is it. I love that they are driving the market but I don't think they have a leading solution yet. They may make a market, but nothing like the iPhone. The iPhone sold 1m plus in its first year and 10m plus in year 2. The AVP projection is 600k for year 1. They sold out 40k in a day, now preordered out to about 80k total. I think everything after 200k is going to be a slog. Only time will tell. It won't be a flop but won't be a killer device either. Or this post might embarrass me in the future.
Anyway, the listed aspects are those we should talk about to evaluate AR as a technology, medium or social phenomenon. But Apple, with the AVP, changes the topic to market penetration and profit, and with a gigantic effort that only they can invest, may push it through and move this product from awkward to desirable for the public. The AVP does not have to be a success as a product to make this the next iPhone moment.
That moment turned mobile communication, a technology that could be available for $100 and maybe even without a charger (microcontrollers, solar panel, eInk display), into a segment of the entertainment market: billions of fragile but beautiful glass slabs every year that already replace / overflow our eyes and memory, each for $1000 (made-up figures, just the magnitude). And "green", if you exclude Ghana and the like. https://en.wikipedia.org/wiki/Agbogbloshie
The AVP makes the Hitchhiker's Peril-Sensitive Sunglasses, or the phrase "the blind leading the blind", feel so real. (sorry for venting)
Larry Rosenthal Reading your comments I thought you should refer to Postman - and there you are! I see the beauty of this lecture to the Apple developers in 1993. Quote: "Television should be the last technology we will allow to have been invented and promoted mindlessly" https://www.youtube.com/watch?v=QqxgCoHv_aE&t=5285s
I started talking to computers (coding) at 12; now I am 51, with a whole life spent doing that. I see a crucial moment when this ("my"!) industry wants to strap a screen on people's faces, completely isolating them from reality (while acknowledging that we live in a world in which they have reasons to prefer that).
But I also admire the Apollo program and the lesson they learned when, in go fever, they burned the Apollo 1 crew during a test, a lesson worded perfectly by Gene Kranz. https://www.youtube.com/watch?v=9zjAteaK9lM
Building a technological civilization in general, and altering the human perception of the world on individual and community level in particular, is also "terribly unforgiving of carelessness, incapacity and neglect". I know that my words have no weight, but for whatever this counts, here they are:
Informatics is more like the old story of Pandora's box. We were not careful, and now all the misery is out in the world, but we have to open the box again to find hope. I went back to the university after some 20 years in the industry, and via my research I finally met the "founding fathers of informatics" who saw all this coming. You can find a short summary from 2018 here (I am now trying to create some videos in my spare time, but that is not my comfort zone for sure). https://mondoaurora.org/TheScienceOfBeingWrong_KAIS.pdf
For motivation, look at Douglas Engelbart, his goals, achievements and modesty (and the date!). I did my research: his results are massive, and today's informatics only scratches the surface while hunting for profit. Imagine if we started listening to people like him instead of the current "icons". https://www.youtube.com/watch?v=PjWhQiwJzKg
Lorand Kedves the good news is sometimes, eventually, we do. Today the smog in the air, the smoke in restaurants, are all mostly gone in western cities. Smoking was as common as driving leaded fuel cars that got 8 miles to the gallon. Sometimes actions in society change. Sometimes it takes a civil war to change an action as well.
Larry Rosenthal Does this mean I wasted too much time taking seriously those "existential threat orgs" like Cambridge University, MIT or the UN? 🙂 Or thinking that actions without understanding, like those of Edward Snowden (From Russia with Love) or Aaron Swartz (no joke here, RIP), may not help? https://www.youtube.com/watch?v=9vz06QO3UkQ
I think you are right on MIPS. But I have been paid for clear thinking in rough situations and am still here (with some more or less managed psychosomatic issues). My conclusion is that mankind needs a paradigm shift. The definition of that state is that there is no other option. (... and it is not a screen strapped on our faces showing the Brave New World, another Postman reference... 🙂 ) https://neilpostman.org/
Lorand Kedves ironically i didnt know of postman much in 93... i knew mcluhan much more.. as for his quote from 93.. maybe he got it from me.;) “ I’ve seen the future of the Metaverse and it looks like 1980's TV “ ,,, this was all part fo tHUNK! the digital network which we began in 92;) published as early MAC diskettes.;) BUT Postman, McLuhan and Chayefsky should all be mandatory learning today. but its probably too late. sigh.. i also lamented nback then that i never got to make real spaceships as i did in my college thesis, since by the time i graduated in 85 the worlds money was now stopped from going to reality and all investment was in the virtual of the PC or movie. So i made lots of tv and video game spaceships instead from 85-95. since i had to eat.;)
Lorand Kedves we do need a paradigm shift, but certainly the Stanford / MIT / EFF folks were not the people we should have allowed to make the previous one ;) Aaron died for their sins.
Larry Rosenthal I think I found a more constructive approach.
Institutions are by definition bound to the system and the current paradigm, and thus work against any real shift. That can only come from "insane" individuals, as logical thinking based on a new paradigm is nonsense when viewed from the old one. In older words, "though this be madness, yet there is method in it". In areas like physics you are lucky, because you can use equipment to show that your theory works. https://en.wikipedia.org/wiki/Leo_Szilard#Columbia_University
But in informatics, you work against human nature, quoting Postman: As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions”. https://neilpostman.org/
Aaron lived a meaningful life chasing values beyond those of a "consumer society". He did not know enough about the past and substituted faith in people and communities for that gap. He did not lose against the sins of individuals but against the roles they played.
On the other hand, I did my research; I know I don't say much that is new, but it still goes against the current understanding. Here is where it starts. https://youtu.be/u-TFazXf_RU
Lorand Kedves the adults in the room failed him. Simple as that. They have failed most children who came after them. Many of those children have finally awakened. Most are not self-blaming, they are getting very angry. Machines may hold man at bay longer than man alone, but soon they too age and fail.
Larry Rosenthal ... and adequately TRAINED (by the same institutions you blame because there is no better alternative).
In IT, you can't quote Bob Martin enough: "... if we are doubling every five years, then we always have half of the programmers less than five years of experience, which leaves our industry in a state of perpetual inexperience... the new people coming in must repeat the mistakes made by everyone else over and over and over and over, and there seems to be no cure for this..." https://youtu.be/ecIWPzGEbFc?t=3092s
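A back-of-the-envelope sketch of that arithmetic (my own illustration, not Bob Martin's code): with a constant five-year doubling period, the people who joined in the last five years always make up half of the current headcount.

```python
# If headcount(t) = N0 * 2**(t / doubling_period), then the share of people
# who joined within the last `window` years is 1 - 2**(-window / period).
def newcomer_share(doubling_period=5.0, window=5.0):
    headcount_now  = 2 ** (0 / doubling_period)          # normalized to 1
    headcount_then = 2 ** (-window / doubling_period)     # headcount 5 years ago
    return (headcount_now - headcount_then) / headcount_now

print(f"{newcomer_share():.0%}")  # 50%
```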
Before saying "so what, IT is a changing field", ask yourself whether you think a commercial pilot or a brain surgeon is "experienced" after 5 years. With 30+ years behind my back here, I can safely say: hard-core IT is beyond reading the marketing materials of the latest tools and languages.
Informatics is literally brain surgery on civilization level. No wonder that it fails with the current bazaar attitude.
Larry Rosenthal That's why I respect Postman so much, an arts expert who could precisely analyze and predict the human consequences of technology and engineering. To me, the two other questions around the "what" (products, services, stories) are equally important: the "why" (arts and science) and the "how" (technology and engineering). I agree, one should be an expert in one, but be aware of and respect the others.
Larry Rosenthal "... i never got to make real spaceships as i did in my college thesis, since by the time i graduated in 85 the worlds money was now stopped from going to reality and all investment was in the virtual of the PC or movie. So i made lots of tv and video game spaceships instead from 85-95. since i had to eat.;)"
I missed this important comment... thank you!
I think I had more luck. I met the Tao Te Ching and started programming my first computer, a Commodore 116, around the same time, at age 12. I got a CS BSc in 1994 but only met the founding fathers like Engelbart and critics like Postman after I returned to academia at age 43 (CS MSc, half a PhD). I had the privilege to spend 25+ years working on (and with) what I dare to call AI (far from the popular "state of the art"). Of course not on the surface but behind the paying jobs, until I hit the glass ceiling, got fired, and started again elsewhere. The money was just enough to raise three sons, not more.
“Luck is what happens when preparation meets opportunity...” (a correction: not Seneca), as I started this lecture 10 years ago. It has aged well; for example, the picture at "I see the storm coming" is the Maidan Square riot, Ukraine, 2013. https://mondoaurora.org/TasteOfLuck.pdf
Civilization is another story. It is a most tricky form of the behavioral sink, where accumulated knowledge and tools free individuals from the constant struggle for survival. It moves the focus to communities (tribes, nations, ideologies; see also Dawkins and meme theory), up to the ultimate global level. Now the threat is the collapse of knowledge transfer under the power of the very technology invented to support it.
Can't quote JCR Licklider enough (1964) „... the "system" of man's development and use of knowledge is regenerative. If a strong effort is made to improve that system, then the early results will facilitate subsequent phases of the effort, and so on, progressively, in an exponential crescendo. On the other hand, if intellectual processes and their technological bases are neglected, then goals that could have been achieved will remain remote, and proponents of their achievement will find it difficult to disprove charges of irresponsibility and autism.”
Better to realize the importance of individual responsibility as part of education, not under the threat of death.