Regarding the article "The Problems with AI Go Way Beyond Sentience"
2022.08.09.
Dear Noah,
I read your article; on the surface it speaks from my heart, except for the optimistic conclusion about academia and community. In my experience, it does not work that way. For example:
Those who refer to the Turing test do not seem to care about its actual definition, even when the clues are highlighted on the very first page...
I also asked the OpenAI folks about sentience when they had an open forum back in 2016. And yes, I offered an objective definition with levels as follows:
At the OpenAI Gym forum:
May 14 08:31
I would ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms; I have been in the field for 20 years. I mean "intelligence" without the artificial part: "A" is the second question, after defining "I". At least to me :-)
May 14 21:47
@JKCooper2 @yankov The popcorn is a good idea; I tend to write too much, so I will try to stay short.
@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?
If it's about actions (like playing Go or passing the Turing test), intelligence is "motivated interaction" with a specific environment (plus an inspector who can understand that motivation!). In this sense even a safety valve is "intelligent", because it has a motivation and controls a system: it is "able to accomplish a goal". So is a brake control system in a vehicle, a workflow engine, or a rule-based expert system.
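To make the black-box reading concrete, a minimal sketch (the names and the threshold are of course made up for illustration):

    # A safety valve as a black-box "intelligent" agent: it has a goal
    # (keep the pressure below a limit) and acts on its environment to reach it.
    MAX_PRESSURE = 8.0  # bar; an arbitrary illustrative limit

    def safety_valve(pressure: float) -> str:
        """Return the action that serves the goal: keep the system safe."""
        if pressure > MAX_PRESSURE:
            return "open"    # release pressure: the "motivated" reaction
        return "closed"      # goal already satisfied, do nothing

An inspector who knows the goal calls this "able to accomplish a goal"; one who doesn't just sees a lid rattling.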
The white box approach - how it works - is more promising, however. At least it forces us to clean up foggy terms like "learn" and "quicker", and to decide how we should deal with "knowledge representation", especially if we want to extract or share it.
Along these lines, I have starter levels like:
- direct, programmed reactions to input by a fixed algorithm;
- validates inputs and its own states, so it may react differently to the same input (a minimal sketch of these two below).
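A toy contrast of these first two levels (all names invented for illustration):

    # Level 1: a fixed algorithm; the same input always yields the same output.
    def level1_react(command: str) -> str:
        return {"ping": "pong", "stop": "halted"}.get(command, "ignored")

    # Level 2: validates the input and its own state; the same input
    # may produce different reactions depending on that state.
    class Level2Component:
        def __init__(self):
            self.healthy = True

        def react(self, command: str) -> str:
            if command not in ("ping", "stop"):
                return "rejected"             # input validation
            if not self.healthy:
                return "refused: degraded"    # self-state check
            if command == "stop":
                self.healthy = False          # state changes future reactions
            return level1_react(command)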
So far this is fine with hand-typed code. But you need a tricky architecture to continue:
- adapts to the environment by changing the parameters of its own components;
- adapts by changing its configuration (initiating, reorganizing, removing worker components) - sketched below.
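A toy of these two adaptation levels (the numbers and names are arbitrary):

    import statistics

    # Level 3: adapts by changing the parameters of its own components.
    class AdaptiveFilter:
        def __init__(self, threshold: float = 1.0):
            self.threshold = threshold
            self.history = []

        def accept(self, value: float) -> bool:
            self.history.append(value)
            # the threshold parameter follows the environment
            if len(self.history) >= 10:
                self.threshold = 1.5 * statistics.mean(self.history[-10:])
            return value <= self.threshold

    # Level 4: adapts by changing its configuration - creating,
    # reorganizing or removing worker components at run time.
    class Pipeline:
        def __init__(self):
            self.workers = [AdaptiveFilter()]

        def rebalance(self, load: int) -> None:
            while len(self.workers) < load:        # initiate new workers
                self.workers.append(AdaptiveFilter())
            while len(self.workers) > max(load, 1):
                self.workers.pop()                 # remove idle workers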
So far so good; my framework can handle such things. The interesting parts, however, come here:
- monitors and evaluates its own operation (decisions, optimization);
- adapts by changing its operation (writes its own code);
- adapts by changing its goals (what does "goal" mean to a machine?) - see the toy sketch below.
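A toy of levels five and six (level seven is exactly the open question); everything here is invented for illustration:

    # Levels 5-6: monitors its own decisions and, when they score badly,
    # regenerates its decision function from a declarative rule.
    class SelfRewriter:
        def __init__(self):
            self.rule = "lambda x: x > 0"        # knowledge, stored as data
            self.decide = eval(self.rule)        # behavior generated from it
            self.mistakes = 0

        def run(self, x: float, expected: bool) -> bool:
            result = self.decide(x)
            if result != expected:               # level 5: self-evaluation
                self.mistakes += 1
            if self.mistakes > 3:                # level 6: rewrites its own code
                self.rule = "lambda x: x >= 0"
                self.decide = eval(self.rule)
                self.mistakes = 0
            return result

    # Level 7 remains the question: what would let this thing decide that
    # "x > 0" was the wrong goal altogether, not just the wrong rule?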
For me, at least, artificial intelligence is not about the code that a human writes, but about an architecture that can later change itself - and then about a way of "coding" that can change itself. I did not see anything related to this layer (perhaps I looked too shallowly); this is why I asked.
May 16 06:10
@gdb Okay, it seems that my short Q&A is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.
Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could no longer cooperate. With OpenHI ;-) this story may remind you of the myriad programming languages, libraries and tools - all for the same, relatively small set of tasks, and all with us for decades. (I have been designing systems and programming for decades, long enough to feel the pain of it - see Bret Victor for more.)
So my point here: artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults like Einstein, Gandhi, Neumann or Buddha. It only has to be better than what we are today: dreaming children playing with a gun. Hmm... lots of guns.) However...
When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge in order to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of its code. Because its ability is not the code but the architecture: something you can build but cannot keep in your brain, let alone use as fast and as perfectly as a machine.
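A toy version of the principle (nothing like the real thing, just the mechanics):

    # "Generates part of its own source from what it knows": the knowledge
    # store describes behavior; the system emits and loads the corresponding
    # Python source at startup. A bad entry really can kill it - the
    # generated module simply will not compile.
    knowledge = {"greet": "'Hello, ' + name", "shout": "name.upper() + '!'"}

    source = "\n".join(
        f"def {fname}(name):\n    return {body}"
        for fname, body in knowledge.items()
    )
    module = {}
    exec(compile(source, "<generated>", "exec"), module)

    print(module["greet"]("Noah"))   # Hello, Noah
    print(module["shout"]("Noah"))   # NOAH!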
By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put into it? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)
Sorry for being an ogre. Have fun!
Since then I have learned that with this mindset you can pass the exams of a CS PhD, but you can't publish an article, the head of your doctoral school "does not see the scientific value of this research", you get no response from other universities like Brown (ask Andy van Dam and Steve Reiss) or from research groups, etc.
So I do it alone, because I am an engineer with respect for real science, even though I have not found a single "real" scientist to talk with. Yet.
Best luck to you!
Lorand
2022.08.11.
2022.08.12.
Hello Noah,
Thanks for the response to the message in the bottle. Before going on, a bit of context.
I used to be a software engineer, back when this term still had some connection to its original definition by Margaret Hamilton. Today I am a "Solution Architect" at one of the last and largest "real" software companies. You know, the kind that gets its revenue from creating information systems, not from mass manipulation (aka marketing), ecosystem monopolies, etc. (Google, Apple, Facebook, Amazon, Microsoft, ... you name it).
When I started working on AI at a startup company, we wrote the algorithms (clustering, decision tree building and execution, neural nets, etc.) from the math papers, in C++, on computers that could not "run" a coffee machine today. The guy facing me wrote the 3D engine from Carmack's publications; in his spare time he wrote a Wolfenstein engine in C and in C++ to see how smart the C++ compiler was. I am still proud that he thought I was weird. Besides leading the team, I wrote the OLAP data cube manager for time series analysis, a true multithreaded job manager, and the underlying component manager infrastructure, the Basket; I later learned that it was an IoC container, the only meaningful element of "the cloud". I was 25.
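The Basket itself was C++; in today's terms, the principle fits in a few lines of Python (a toy illustration, not the original):

    import inspect

    # An IoC container in miniature: components declare what they need,
    # the container decides what they get - control is inverted.
    class Basket:
        def __init__(self):
            self.factories = {}

        def register(self, name, factory):
            self.factories[name] = factory

        def resolve(self, name):
            factory = self.factories[name]
            # inject every dependency the factory's signature asks for
            deps = {p: self.resolve(p)
                    for p in inspect.signature(factory).parameters}
            return factory(**deps)

    basket = Basket()
    basket.register("config", lambda: {"db": "jobs.sqlite"})
    basket.register("job_manager", lambda config: f"JobManager({config['db']})")
    print(basket.resolve("job_manager"))   # JobManager(jobs.sqlite)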
I saw the rise and fall of many programming languages and frameworks, while I had to do the same thing all the time, in every environment: knowledge representation and assisted interaction, because that is the definition of every information system, if you are able to see the abstraction under the surface. I followed the intellectual collapse of the IT population (and of human civilization, by the way) and fought against both as hard as I could. Lost. Went back to the university at 43 to check my intelligence in an objective environment. Got an MSc while being architect / lead developer at a startup company, then at another one working for the government. Stayed for a PhD because I thought: what else should a PhD thesis be, if not mine? I had 20 minutes one-on-one with truly the top Emeritus Professor of model-based software engineering, a virtual pat on the shoulder from Noam Chomsky (yes, that Chomsky), a hollow notion of interest from Andy van Dam, a kick in the butt from Ted Nelson (if you are serious about text management, you must learn his work), etc., etc., etc. In the meantime I looked for communities as well: published the actual research on Medium, chatted on forums like LinkedIn, ResearchGate, ... Epic fail; they think science is like TED lectures and Morgan Freeman in the movies... and oh yes, The Big Bang Theory. :D
Experience is what you get when you don't get what you wanted. (Randy Pausch, The Last Lecture) I learned that this is the nature of any fundamental research, and there is no reason to be angry with gravity. The Science of Being Wrong is not a formal proof of that, but, with the "founding fathers" it refers to, a solid explanation. Good enough for me. Side note: of course, you can't publish a scientific article that, among other things, states that the current "science industry" is the very thing information science was meant to prevent, before it destroys civilization. See also the life and death of Aaron Swartz. Yes, I mean it.
Back to the conversation.
If anyone carefully reads Turing's article instead of going "yeah, yeah, I know", they find the following statements (and only these!):
- We don't have a scientific definition of intelligence.
- We tend to call something intelligent because it behaves somewhat like us.
- Machines will eventually have enough performance to fulfil this role.
If you also happen to know the work and warnings of Joseph Weizenbaum (the builder of the ELIZA chatbot) and Neil Postman (the "human factor" expert), then you will not waste a single second of your life on NN-based chatbots, whatever fancy name they carry. I certainly do not, although I understand what a fantastic business and PR opportunity this is. For me this is science, not the Mythbusters show, where you break all the plates in the kitchen to "verify" gravity (and create an excellent sales opportunity for the dishware companies).
You also wrote: "Instead of talking in circles about how to use the word 'sentience' (which no one seems to be able to define)..."
I repeat: I have this definition, with multiple levels, quoted in the part you "skimmed". And I use these levels as target milestones while building running information systems in real-life environments. For the same reason, I have stopped trying to write about it, because nobody puts in the effort to read what I write (a general problem); I write the code instead. Code that I may one day see generate itself completely (partial self-generation, in multiple languages, for interacting multi-platform systems, is already done). You can find a partially obsolete intro here - GitHub, etc. are also available from there.
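By "partial self-generation in multiple languages" I mean this kind of thing, reduced to a toy (the real model is of course richer than a dict):

    # One knowledge item, many target languages: the record definition is
    # data; the source code in each language is generated from it.
    record = {"name": "User", "fields": [("id", "int"), ("email", "str")]}

    def to_python(rec):
        lines = [f"class {rec['name']}:", "    def __init__(self):"]
        lines += [f"        self.{f} = None  # {t}" for f, t in rec["fields"]]
        return "\n".join(lines)

    def to_typescript(rec):
        body = "\n".join(f"  {f}: {'number' if t == 'int' else 'string'};"
                         for f, t in rec["fields"])
        return f"interface {rec['name']} {{\n{body}\n}}"

    print(to_python(record))
    print(to_typescript(record))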
So, thank you for the support, but I am not frustrated with academia; I understood how it works: cows don't fly. The painful part was understanding that they never did; it's just self-marketing. I am kind of afraid of losing my job again right now, but that's part of the game as I play it.
Best,
Lorand
2022.08.13.
FYI, this is where "your kind" abandons the dialogue every time, letting it sink under the guano of 21st-century "communication". Been there, seen that, all the time; no problem. So just one closing note while I am still interested in typing it.
At least I hope you realize: a chatbot will never generate the previous message. I am not pretending intelligence by pseudo-randomly selecting some of the trillions of black-box rules collected by adapting to the average of the global mass. I am intelligent because I create my rules, test and improve them by using them, keep what works and learn from what does not. Another constructive definition and, if you think about it, the direct opposite of a chatbot or the whole "emerging" tech-marvel cargo cult.
We both know that "an infinite mass of monkeys, given infinite time, will surely type in the Hamlet". But please consider that this is not the way the first one was created, and that none of the monkeys will be able to tell the next Hamlet from the infinite garbage. Similarly, I may have a nonzero chance to create a conscious information system, but even if I do it as a public project on GitHub, it will die with me, because nobody will be able to see it. Btw, this is a valid conclusion of Turing's article (and the reason why Vannevar Bush wrote the As We May Think article and initiated the computer era).
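Just to put a number on the monkeys (assuming a 27-key typewriter, letters plus space, and uniformly random typing):

    # Probability that a monkey types a given N-character text in one try:
    # (1/27) ** N. Even one famous line is already beyond absurd.
    line = "to be or not to be that is the question"   # 39 characters
    p = (1.0 / 27) ** len(line)
    print(f"{p:.1e}")   # ~ 1.5e-56 per 39-keystroke attempt

And the recognition problem is worse: a filter that can tell Hamlet from garbage already contains the knowledge the monkeys lack.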
Namaste :-)