Showing posts with the label (artificial) intelligence.

Thursday, May 22, 2025

Stargate... Another few billions for Sammy's Toy Factory?



@lkedves
1 day ago
25+ years of working with AI (NOT the current chatbot bubble) and with the true pioneers of informatics tell me: it does not matter how far you go down a dead-end street when you did not even know there were maps showing where to go. The question is: when do you realise the mistake and turn back from it... (no problem if you delete this comment, just a minority report)


@rwlurk
1 day ago
there is no insatiable demand for AI, but there is insatiable demand from investors for more growth in tech stocks, which are propping up the USA's security markets


@Abouttime-p8u
4 hours ago
@lkedves it just might be too late by then... AI makes work so much easier though, 10 hours of work turn into 1


@Abouttime-p8u
4 hours ago
@rwlurk My reply: Have you seen how many billions of people are using AI? Just from the time ChatGPT started to a year later it saw tremendous growth. Sure, there is also a lot of demand from stock investors interested in companies related to AI.



@lkedves
27 minutes ago (edited)
@Abouttime-p8u Ironically, you are right, but maybe not for the reason you think. Social media is not a big fan of facts and real science, so I try to be as lightweight as possible here. Let's quote Alan Turing, highlighting the keys that nobody seems to read. 

--- 
1. The Imitation Game 
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. 
The new form of the problem can be described in terms of a game which we call the "imitation game". 
--- 

Understand? 

Turing stated that both the goals and the means of this current "mainstream AI" are absurd. The "imitation game" does not answer the question whether machines can think but replaces it. Turing wrote this article in Mind (a Quarterly Review of Psychology and Philosophy!) to explain why people should stop calling the Universal Turing Machine a "thinking machine". I don't think he could imagine that a whole industry would be based on misinterpreting the first page (and completely ignoring the next 21). Or rather, on just quoting Benedict Cumberbatch from the movie because he is so cute. Scientists... 

I have spent a good 25 years finding a definition of these terms on par with the UTM. I follow Douglas Engelbart, one of the mentioned pioneers, when I read AI as Augmenting Intellect, because his forgotten results hold the key. My daily job is to deliver solutions to "impossible missions" in real life, with real deadlines and restricted budgets, not hype riding. 
The ratio is not 10 to 1 but 1 to 0. 

On the other hand, social media defines "thinking" as skimming over the popular news following the latest hypes (including videos like this), conducting some excited discussions with relatable buddies in your idea bubble, and regurgitating some word salad as your well-founded opinion. And you are absolutely right. 
LLMs can do this much better than 10 to 1. 

But, there is a question. Is this second path really worth billions of dollars and terawatts of consumption? 

Back in the days before LinkedIn turned into a GeekBook, I picked up a quote that still holds today: 

I suppose one way for a machine to pass the Turing test is to wait until the quality of actual human conversation is so bad that a bot could be an improvement. This seems to be happening here. 

Peace... 


---- Interestingly, the first comment remained public and received a response ----


I've been thinking about this in addition to a "maximum wage" in the USA. Thinking of ceilings for progress. I'm becoming ever-more convinced that we should impose ceilings as a society, to slow the rate of progress for the sake of security and happiness. 

EACC folks (effective accelerationists) encourage us to barrel forward because AGI will solve all of our problems. I have extreme doubts. We will see how it plays out.


Quite the opposite. If you understand that our monetary system is based on exponential functions (the definition of the yearly interest on your bank deposit, or of GDP growth) and combine it with enough elementary mathematics or informatics to know what that means, you realise that there is no ceiling. The repeated hype rides and falls are simple consequences of this systemic ignorance, under various, sometimes bittersweet ideologies. 
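A minimal sketch (in Python, with an illustrative 3% rate and horizons of my own choosing, not figures from the thread) of why a fixed yearly growth rate has no ceiling:

    # Compound growth: any fixed positive yearly rate grows without bound.
    # The 3% rate and the horizons below are illustrative assumptions.
    def compound(principal: float, yearly_rate: float, years: int) -> float:
        return principal * (1.0 + yearly_rate) ** years

    for years in (10, 50, 100):
        print(years, round(compound(100.0, 0.03, years), 2))
    # The deposit roughly quadruples in 50 years and grows ~19x in 100 years;
    # the curve only gets steeper, it never levels off.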

I often watch The Big Short; you only have to replace a few words to get the AGI story - the two mortgage brokers are just like Sammy boy and whoever showed the site to Emily. I play a mix of Mark Baum and Michael Burry. Informatics offered a solution but the middlemen took over; check Joseph Weizenbaum, the creator of the first chatbot, ELIZA, 1966. For contrast, Softbank has invested in all the hypes so far: Theranos, FTX, WeWork... I would rather consider that a red flag. Let alone praise from Trump.

I wonder whether Bloomberg shadow bans this comment individually, or whether I am banned as a person. Testing, testing... 
😉

Saturday, May 10, 2025

What Is The Future Of Programming Languages?



The questions are excellent, the answers are "state of the art" - which is not a compliment in this case. Here is a different take on the graph part.

  1. You have two fundamentally different ways to transfer and curate knowledge, A: storytelling (very human, imprecise) or B: knowledge graph building (hard for a human, as precise as can be). 👉 JCR Licklider, Libraries of the Future (1965, book).
  2. STEM knowledge is always B: a graph. When you have a problem in physics, biology, math, medicine, ... it's NOT about how you sing it or what language you use, but about building a precise network of property pages filled with data and linked to each other. The very terms (labels of data and links) are also graphs (DSLs). Information systems are graphs, too. In the computer's memory, you have flowcharts of the algorithms and you use the memory to hold the content of those property sheets. 👉 Ivan Sutherland Sketchpad (YouTube)
  3. EVERY program is a combination of a DSL set (the "meta" layer of classes, members, functions and their parameters) and a bunch of stories (the code of the functions and procedures), quoting Bob Martin: the software is assignment statements, if statements and while loops 👉 Future of Programming (YouTube).
  4. The real problem is that text-based tools make us focus on the storytelling: we only see the big list of features or use cases, instead of the DSLs that allow us to describe and solve the atomic problems (modularity, KISS, DRY, SRP, ...). We have already solved every possible atomic problem literally millions of times, yet we keep repeating them (copy-paste or, in "modern" cases, LLMs🤦‍♂) in endless combinations, and that pile grows every day. 👉 Alan Kay's Power of Simplicity (YouTube)
  5. Introducing new programming languages that do the same has one effect: it erodes even the "burden" of accumulated human experience and the existing codebase, starting it all over again without solving the fundamental issue.

Problem: Information systems are graphs. Storytelling is not the right way to interact with graphs. The blamed imprecision is manageable "human error" in the case of graphs, but an inevitable, fatal blocker in text-based programming. Teaser 👉 Bret Victor, Future of Programming (YouTube); hard-core answer 👉 Douglas Engelbart: Augmenting Human Intellect (report)

Solution: STEM languages are graphs (DSLs). THE future programming language is the DSL of information systems. The same that we have in physics, mathematics, biology, ...
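To make points 2-4 above a bit more concrete, here is a minimal Python sketch; the Node structure, the "member" link and the tiny render_class generator are my own illustrative assumptions, not any system referenced above. It treats an information system as a graph of property pages with typed links, plus one "story" generated from that graph:

    # Illustrative only: an "information system" as a graph of property pages
    # (nodes holding key/value data) connected by typed links.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        props: dict = field(default_factory=dict)
        links: dict = field(default_factory=dict)   # link type -> list of Nodes

        def link(self, kind: str, target: "Node") -> None:
            self.links.setdefault(kind, []).append(target)

    # The "meta" layer (a tiny DSL): a class node with one member node.
    person = Node("Person", {"kind": "class"})
    name = Node("name", {"kind": "member", "type": "str"})
    person.link("member", name)

    # One possible "story": boilerplate code generated from the graph.
    def render_class(cls: Node) -> str:
        members = cls.links.get("member", [])
        lines = [f"class {cls.label}:"]
        lines += [f"    {m.label}: {m.props['type']}" for m in members] or ["    pass"]
        return "\n".join(lines)

    print(render_class(person))   # -> class Person: / name: str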

Question: has anyone read this far? 😉

---[ discussion under a comment ]---

CallousCoder
Your behaviour is like the old horse and cart people against the automobile. It's nonsensical, the technology is here to stay. So either adopt it or you'll go extinct. You know that good developers are terrible managers, right? ;) Also I don't get it, what the resistance is. Whether you ask a junior or medior to implement something or you ask an LLM. It's no different other than that the LLM just does it and doesn't nag especially after the 2nd or 3rd iteration ;)

CallousCoder
“bro” is 52 years old and didn’t take philosophy but EE and CS.

lkedves
Age is just a number (happens to be the same...). Check the Mother of All Demos - that was real technology behind the Apollo program, while this chatbot AI is just another stock market bubble. Side note: before the previous AI winter, we won Comdex '99 with a data mining / AI tool. Back then people could still read the first paragraph of Turing's article, the definition of "the test", instead of trying to implement cartoon dreams... (including a Nobel prize-winning psychologist) 

But you got the point with "Modern software is a disease!" LLMs learn from their sources, kids copy-paste the output into real software, the LLM learns from that again. Quantity goes up, quality goes down. LLM companies use human slaves to avoid stupid mistakes in everyday tasks after the first flops. Regardless of whether we accept this as a solution, who will censor the generated code? 

Dead end.

cyberfunk3793
AI is obviously going to fix data races and buffer overflows and every other type of bug you can think of. You don't understand what is coming if you think it's just hype. I don't know if it will be 5 years or 50, but at some point humans will only be describing (in human language) what they want the program to do and reviewing the code that is produced. Currently AI is already extremely helpful but still makes a lot of mistakes. These mistakes will get more and more rare and the ability of AI to program will far exceed humans, just like computers beat us at chess.

TCMx3
Chess engines did not need AI to curbstomp us at chess. Non-ML based engines with simple table-bases for endings were already some 700 points stronger than the best humans. Sounds like you don't actually know very much about chess engines lmao.

CallousCoder
btw playing chess with an LLM is a hilarious experience. If it loses it brings back pieces from the dead or just "portals" them into safety.

lkedves
[retry, I promise I leave if it disappears again]

You may have missed it, so I repeat. We won Comdex '99 with a data mining / AI tool (and there is nothing new in this field except the exponential growth of the hardware). Since then I have worked on refining knowledge graph management in information systems in every single project I touched, often delivering "impossible missions". I work together with the machine because I follow a different reading of AI: Augmenting Intellect (Douglas Engelbart), on systems that are of course smarter than me (and have generated part of their own code from these graphs for years). Right now I am at the national AI lab in a university applied research project that I will not try to explain here.
You find some of my conclusions with references to sources in my comment added to this video (11th May). 

I know the pioneers who predicted and warned about what we have today (recommended reading: Tools For Thought by Howard Rheingold, you can find the whole book online). One of them is Alan Turing, who asked people not to call the UTM a "thinking machine" and wrote an article in Mind: A Quarterly Review of Psychology and Philosophy about the dangers of making such claims without proper definitions. The poor man never thought that in a few decades "IT folks" would take this as an aim. Or Joseph Weizenbaum, the guy who wrote the first chatbot, ELIZA. 

I know why your dream will never happen because I know that informatics (in its original meaning, not the business model Gates invented) was against this fairy tale. LLMs just try to prove that old thought experiment: infinite monkeys in infinite time will surely type in the whole Hamlet. The problem is that we don't have infinite time and resources, and the goal is not repeating Hamlet but writing the next one. Those who initiated informatics made this clear. Start with Vannevar Bush: As We May Think, 1945. 

@TCMx3 , @CallousCoder - thanks for your answers... 🙏 Another excellent example - in chess, you have absolute rules.

In life, we know that all the laws we can invent are wrong (Incompleteness theorem), and thinking means improving the rules while solving problems and taking responsibility for all errors. The ultimate example is the Apollo program with Engelbart's NLS in the background; that's how THEY went to the Moon. We go to the plaza to watch the next Marvel story in 4D, now with the help of genAI. As for predicting the next 50 years, look up "Charly Gordon Algernon 1968" here on YouTube.

---[ This answer "disappeared" for the second time so I left the place ]---

Wednesday, February 19, 2025

Haiku

The anti social medium. The Matrix dessert.

The crack on Gödel's Incompleteness Theorem.

The ultimate form of non-transformable information.


Wednesday, February 5, 2025

The Experience of Being Wrong

How could I be THIS STUPID???

Now, please stop reading and remember when you asked this, not just lightly but with the strange mix of real anger and shame.


...


That was the last time you learned something really important, and this is the only way to it. It happened to me yesterday. At age 52 I take this as very positive feedback: I can (with great difficulty, but still) lower my ego and learn. Here is the story.

I have been repeating for years in every context that JCR Licklider separated transferrable and non-transferrable knowledge, and a root cause of today's mess in IT (and consequently, everywhere) is the fact that we forgot this. Banging my chest like a gorilla, like here...

But yesterday, as my young colleague was preparing a proper scientific publication, we started looking for the exact reference. To my greatest surprise, I did not find it. In desperation, I started reading Libraries of the Future again, and realized (thankfully and ironically, on page 2!) that...


Licklider never wrote that.

Here is the actual quote:

We delimited the scope of the study, almost at the outset, to functions, classes of information, and domains of knowledge in which the items of basic interest are not the print or paper, and not the words and sentences themselves —but the facts, concepts, principles, and ideas that lie behind the visible and tangible aspects of documents. The criterion question for the delimitation was: "Can it be rephrased without significant loss?" Thus we delimited the scope to include only "transformable information." Works of art are clearly beyond that scope, for they suffer even from reproduction. Works of literature are beyond it also, though not as far. Within the scope lie secondary parts of art and literature, most of history, medicine, and law, and almost all of science, technology, and the records of business and government.

He talks about "transformable information", not "transferrable knowledge".


What happened? Had I forgotten to read???

No, but I was not able to at that moment. This paragraph held a key to a question of computerized knowledge management I struggled with for decades, literally. When it hit me, my mind was blown immediately and started restructuring itself. I followed, remembered, and kept quoting my own revelation instead of the text that I thought I was reading. 


But why?

Knowledge in our minds is always a network: some attributes of and relations between "things". To store or transfer our knowledge of a topic, we "export" the related part of this network into a presentation, text, figures, pictures, videos. Other people will try to integrate this content with their existing knowledge. Here comes the trick: for those who can do this without changing anything in their minds, this was not "information", because information is only the part that you did not know and could not figure out from your existing knowledge.

The first time, I could not integrate Licklider's original message with my existing knowledge; it only triggered a change that took a long time. Now, when I revisited this paragraph, it was new again, but this time I could actually read it and integrate it with my current knowledge. Fun fact: the word "respect" does not mean "obey" or "accept" but re-specto: examine it again.


Real information is like good chilli: it burns twice.


So, how do I read the message now?

This is a simple way to tell the difference between transformable and non-transformable information. I quoted the rest correctly: informatics (the "libraries of the future") should work only with transformable information.

  • Transformable means you can say it in hundreds of ways and the meaning stays the same. You focus on the knowledge graph in your head and try to build exactly the same one in the audience: a physical phenomenon or a medical treatment. 
  • Non-transformable information focuses on the message itself and the feelings created by it (not less important, but totally different). With a different tone, wording, or face, the message and its effect change significantly.
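Before the less nerdy example below, here is a nerdy one: a minimal Python sketch of the "information is only what you did not already know" idea above. The simplification of knowledge as a set of (subject, relation, object) edges and the example edges are my own assumptions, not Licklider's.

    # Illustrative only: for a given receiver, "information" is the part of
    # an incoming message that is not already in their knowledge graph.
    def information(message: set, existing: set) -> set:
        """Edges of the message that the receiver did not already know."""
        return message - existing

    my_knowledge = {("Licklider", "wrote", "Libraries of the Future")}
    incoming = {
        ("Licklider", "wrote", "Libraries of the Future"),
        ("Licklider", "limited the scope to", "transformable information"),
    }
    print(information(incoming, my_knowledge))
    # Only the second edge is information for this receiver; the first can be
    # integrated "without changing anything" and therefore carries none.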


A less nerdy example

I think 99% of modern pop music is not even information: it repeats the same message about a boy, girl, love, hate, etc. that the audience is already familiar with. (hashtag metoo?)

But the Sound of Silence is a perfect example of non-transformable information: I already knew the original song, but this presentation by Disturbed delivered the message (which happens to be closely related to the topic of this post).


[This post can be read as a companion to my more formal article, The Science of Being Wrong (a possible definition of informatics), as here I defined "infonauts" as experts in being wrong...]

Tuesday, January 14, 2025

Responsibility in "Mainstream IT" vs "Golden age informatics"

Charles Hoffman

I have been saying for years and years that more business professionals and liberal arts majors should be paying attention to artificial intelligence. Let me ask a question. When, or if, the tools that these professional computer scientists create go terribly wrong, should the computer scientists be held accountable in any way? If someone relies on these tools based on something some marketing campaign for artificial intelligence proclaimed, who will be held responsible? Buyer beware?

Humans in the AI Loop. AI in the Human Loop. Humans in Control
There is one similarly dangerous aspect of this question: assuming that there are no "IT people" who are (with the necessary formal education, knowledge and experience, much more) worried about this situation, and that therefore outsiders should enforce discipline on us.

Listen to Bob Martin not only pointing out the core issues but giving an explanation and a possible cure for them as well. The problem is that this is too technical for the outsiders and absolutely not popular with the vast majority of self-proclaimed IT people who happened to get old and established without ever receiving a proper education. So they teach the next generation their cargo cults, blockchain or mainstream AI being the newest ones. [edit: ouch, I forgot about the IT cult leaders who first got insanely rich, then started "changing the world for the better", making fame along the way and attracting followers to continue their "heritage"... 🤦‍♂️ ]
https://youtu.be/ecIWPzGEbFc?t=3057

I see nothing special in IT. We live in the predicted global Idiocracy and IT is not immune to it.
Thanks for the video, Future of Programming. That is exactly the sort of thing I am talking about.


Lorand Kedves

Yes, you are "talking about" it, while I can list the goals and names of the true IT pioneers, the best minds of the planet. They knew that any technology is exactly as dangerous as it is beneficial. The difference is that other technologies change what you can DO in the real world; informatics changes what you SEE and THINK of it! Today we treat their warnings as a damned bucket list, and of course that is ignored as a CS PhD research topic.
https://bit.ly/montru_ScienceWrong

The roots were cut when IT became a for-profit venture funded by general business (M$) and rich daydreamers (Apple). I think you will like the ultimate arts person, Neil Postman, trying to educate Apple ("think different" 🤦‍♂️) folks in 1993...
https://youtu.be/QqxgCoHv_aE

IT is the "One Ring" (ref Bob Martin: you can't do anything without "us"). I wanted to use it in direct resource management (1) or education (2) but was ignored. Working in finance is like carrying that ring through Mordor but at least some people listen here and can even find "established" critics like Prof. Richard Murphy (3).
(1) https://bit.ly/montru_OpenSourceCivilization
(2) https://bit.ly/lkedves_Studiolo
(3) https://youtu.be/k5Yo3Y_SMow


Charles Hoffman

This is all interesting. One can compare this to things like the creation of "high fructose corn syrup" and its effect on the food industry and people's health, mining techniques that destroy the environment, the way healthcare is practiced in the United States, the way the pharmaceutical industry works to get people to pay for pills for the rest of their lives.
Only if you ignore the other side of the coin. Following your analogy, statistically speaking the goods you find in a pharmacy are either useless or outright dangerous, even lethal - yet we need pharmacies and have been cured by the drugs they sell.

How come?

Although a pharmacy looks like a shop, it MUST NOT give you whatever you ask for, only the drugs your doctor prescribed after a careful examination, regardless of the money you offer. Theoretically... 🙁 But today we try to operate the pharmacy just like a bakery or a candy store: get more profit by giving you whatever you ask for, even creating marketing campaigns, etc. (like the rest of the healthcare system, btw).

So, do you "rightfully" blame the pharmacy for poisoning and killing people?

Yes AND no. But the solution is not that "worried, responsible outsiders" flock into the pharmacy and try to regulate it by their personal experiences or the color of the boxes. Instead, they should support pharmacists in returning to their role and rebuilding the counter between them and the customers. And in the long run, realign the "healthcare system" with the meaning of that word...

Now, replace "health" with "knowledge" and you get informatics.

Does this sound interesting?
Sign me up! Informatics and cybernetics make complete sense to me. What I don't understand is why I don't "see" them in software development.
Since the 20th century, mankind has been a planetary species: science, communication, manufacturing, wars. Thinkers knew that civilization is not a thing but an often unpleasant process of making a peaceful, educated, cooperative homo "sapiens" from each "erectus" kid. The new power needs a "global brain", a transparent cooperation of "knowledge workers", to control it.

They did create an information system that organized 400,000 people around one impossible, objective goal - the Apollo program. An icon is Douglas Engelbart.
Introduction (1995): https://youtu.be/O77mweZ8-RQ
Eulogy (2013): https://youtu.be/yMjPqr1s-cg

However, the world population was (is) not ready. They prefer separating "them" from "us", hate the hardship of learning and choose the cheap illusion of knowledge by repeating hollow cliches. Add the dream of becoming rich and famous, let them use the infrastructure created above and you get the current Idiocracy. An icon is Elon Musk.
Prediction (1959): https://youtu.be/KZqsWGtdqiA?t=101

You don't "see" informatics as I talk about it because it has been lost since 1973.
https://youtu.be/8pTEmbeENF4?t=1741

Rebels may pay with their lives like Aaron Swartz.
https://youtu.be/9vz06QO3UkQ
Thanks for all this information. Now I have renewed motivation. I am now doing this for Doug. Building on his shoulders.

As a bridge person between accounting and IT, you can do more.
- BE AWARE that 1945-1972 was the golden age and Douglas Engelbart represents that "state of the art".
- DEMAND that anyone claiming to be an IT person demonstrate the same moral and professional attitude.
- DON'T ACCEPT less from "us".

"Building on his shoulders" is another thing.

Here is his analysis (1962) behind the Mother of All Demos. It has one key paragraph ignored even by his followers.
https://bit.ly/Engelbart_AI

It relates to Ted Nelson's Xanadu and ZigZag (document and graph DB vision). Combined with Chomsky's research, it shows a gap in the proof of Gödel's Incompleteness Theorem. That is the key to Turing's true challenge: define "machine" and "thinking". The Neumann architecture CAN handle that, while the Harvard architecture is a dead-end street. Conclusion: informatics is the necessary and sufficient doctrine of AGI as Augmenting Global Intellect; everything else is garbage.

This paragraph costs a lifetime and is worth it.

Meanwhile, our civilization is literally committing suicide and you are right: mainstream IT is part of the problem. About the necessary paradigm shift, here is another message from 1973:
https://youtu.be/WjR6nHhc6Rg

Thursday, November 9, 2023

Bletchley Declaration

Comment on LinkedIn

I see two problems here.
1: Our bus is racing downhill off the cliff due to mismanagement.
2: Billionaires motivate self-proclaimed experts with (or without) arts, management or business degrees, who motivate kids to rush to the controls to "save the world", always riding the hype, today AI. They never allow experts to handle the issues.

What expert, you ask?
Those who know the warnings about the anthropomorphic illusion of thinking machines (Turing, Weizenbaum), about not understanding informatics (Bush, Licklider, Engelbart, Nelson), about worshipping science (Neumann, Szilard), and about forgetting that civilization is not given but must be built within each and every member (Asimov, Postman).
And who know that even conscious action without the big picture is futile (Aaron Swartz, Edward Snowden).

When in real life you see an accident, you step back, yell for a doctor, call an ambulance, because you know that without relevant knowledge you can make it all worse. With global crises (climate, pollution, ... now AI), every spectator rushes in to "provide their opinion and help".
One day the hype will be over and Gene Kranz returns: "Let's work the problem, people. Let's not make things worse by guessing." I really hope there will still be something to save.

My 2 cents.

Wednesday, June 21, 2023

VR - education



Mariann Papp
Frightening possibilities... and add to that the movies young people watch; from Hollywood we slowly get nothing but fantasy films, the Marvel universe and the like, so a generation is growing up completely detached from real life. Maybe we can still hope in the power of education 🤔


Dear Mariann Papp, "hope in the power of education"?

Unfortunately, what makes this wish truly ironic is not the contrast with the Hungarian reality (see also "tanítanék") but the fact that education lost its battle against technology a generation earlier, at the level of Sesame Street. The illusion of knowledge conveyed by "science channels" and "educational portals", and the newer AI/VR hype built on top of it (I was there for the previous one), is only a consequence.

The only alternative to unfounded hope is getting to know the real experts, whose first (unpopular) act is to chase away the illusions. On education in the "new world", Neil Postman is like Newton in physics; in the Hungarian context, dr. Aczél Petra is a good starting point. On teaching in informatics, it is worth watching "uncle" Bob Martin's lecture: https://youtu.be/ecIWPzGEbFc?t=3057

Similarly, Marvel and company are at least openly childish; the greatest damage is done by the lies woven into films meant to enlighten and inspire, like Apollo 13, Good Will Hunting, Interstellar or Ex Machina. As for how "new" the problems mentioned here are, here is a film clip from 1968. Who laughs at these answers today? https://youtu.be/Nb6shvId_XI?t=215

Saturday, August 13, 2022

That's my secret, monkeys. I'm always angry...

Regarding this article, The Problems with AI Go Way Beyond Sentience


2022.08.09.

Dear Noah,


I read your article, which on the surface speaks from my heart, except for the optimistic conclusion related to academia and community. In my experience, it does not work that way. For example:

Those who refer to the Turing test do not seem to care about its definition, even when the clues are highlighted on the very first page...



I also asked the OpenAI folks about sentience when they had an open forum back in 2016. And yes, I offered an objective definition with levels as follows:

Knocking on Heaven's Door :-D

At OpenAI gym.

May 14 08:31

I would ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence" without the artificial part; "A" is the second question after defining "I". At least to me :-)

May 14 21:47

@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much, trying to stay short.

@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?

If it's about actions (like playing go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and an inspector who can understand this motivation!). In this way even a safety valve is "intelligent" because it has a motivation and controls a system: it is "able to accomplish a goal". Or a brake control system in a vehicle, a workflow engine or a rule-based expert system.

However, the white box approach - how it works - is more promising. At least it enforces cleaning up foggy terms like "learn" or "quicker", and how we should deal with "knowledge representation", especially if we want to extract or share it.

In this way, I have starter levels like:

  • direct programmed reactions to input by a fixed algorithm;
  • validates inputs and self states, may react differently to the same input.

So far this is fine with hand-typed code. But you need a tricky architecture to continue:

  • adapts to the environment by changing the parameters of its own components;
  • adapts by changing its configuration (initiating, reorganizing, removing worker components).

So far it's okay, my framework can handle such things. However, the interesting parts come here:

  • monitors and evaluates its own operation (decisions, optimization);
  • adapts by changing its operation (writes own code);
  • adapts by changing its goals (what does "goal" mean to a machine?)

At least for me, artificial intelligence is not about the code that a human writes, but about an architecture that can later change itself - and then a way of "coding" that can change itself. I did not see anything related to this layer (perhaps I looked too shallowly), which is why I asked.
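To make these levels slightly more tangible, here is a minimal Python sketch of the first four only; the class names and the adaptation rules are my own illustrative assumptions, not code from any system mentioned above.

    # Illustrative only: the lower levels of the list above.
    class Component:
        """Levels 1-2: a fixed algorithm that validates its input and state."""
        def __init__(self, gain: float = 1.0):
            self.gain = gain
            self.errors = 0

        def react(self, x: float) -> float:
            if x < 0:                  # validation: may react differently
                self.errors += 1
                return 0.0
            return self.gain * x       # direct programmed reaction

    class Controller:
        """Levels 3-4: adapts parameters and configuration of its components."""
        def __init__(self):
            self.workers = [Component()]

        def adapt(self) -> None:
            for w in self.workers:
                if w.errors > 3:       # level 3: change a component's parameters
                    w.gain *= 0.5
                    w.errors = 0
            if len(self.workers) < 4:  # level 4: reorganize the configuration
                self.workers.append(Component(gain=2.0))

    # Levels 5-7 (self-monitoring, rewriting its own code, changing its own
    # goals) are exactly the parts that do not fit into a sketch like this.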

May 16 06:10

@gdb Okay, it seems that my short QnA is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.

Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may resemble the myriad programming languages, libraries and tools - for the same, relatively small set of tasks, here for decades. (I have been designing systems and programming for decades to feel the pain of it - see Bret Victor for more.)

So my point here: artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults like Einstein, Gandhi, Neumann or Buddha. It is only better than what we are today: dreaming children playing with a gun. hmm... lots of guns.) However...

When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of code. Because its ability is not the code, but the architecture that you can build but can't keep in your brain and use as fast and as perfectly as a machine.

By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put in there? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)

Sorry for being an ogre. Have fun!


Since then I have learned that with this mindset you can pass the exams of a CS PhD, but you can't publish an article, the head of your doctoral school "does not see the scientific value of this research", you don't get a response from other universities like Brown (ask Andy van Dam and Steve Reiss) or research groups, etc.

So I do it alone, because I am an engineer with respect for real science, even though I have not found a single "real" scientist to talk with. Yet.

Best luck to you!

  Lorand


2022.08.11.


[Response from Noah - private]


2022.08.12.

Hello Noah,


Thanks for the response to the message in the bottle. Before going on, a bit of context.

I used to be a software engineer, as long as that term had any connection with its original definition from Margaret Hamilton. Today I am a "Solution Architect" at one of the last and largest "real" software companies. You know, one that gets its revenue from creating information systems, not mass manipulation (aka marketing), ecosystem monopoly, etc. (Google, Apple, Facebook, Amazon, Microsoft, ... you name it).

When I started working on AI at a startup company, we wrote the algorithms (clustering, decision tree building and execution, neural nets, etc.) from the math papers in C++, on computers that would not "run" a coffee machine today. The guy sitting across from me wrote the 3D engine from Carmack's publications; in his spare time he wrote a Wolfenstein engine in C and C++ to see how smart the C++ compiler was. I am still proud that he thought I was weird. Besides leading, I wrote the OLAP data cube manager for time series analysis, a true multithreaded job manager, and the underlying component manager infrastructure, the Basket; I later learned that it was an IoC container, the only meaningful element of "the cloud". I was 25.

I saw the rise and fall of many programming languages and frameworks, while I had to do the same thing all the time in every environment: knowledge representation and assisted interaction, because that is the definition of every information system if you are able to see the abstraction under the surface. I followed the intellectual collapse of the IT population (and of human civilization, by the way) and fought against both as hard as I could. Lost. Went back to the university at 43 to check my intelligence in an objective environment. Got an MSc while being architect / lead developer at a startup company, then another one working for the government. Stayed for a PhD because I thought: what else should a PhD thesis be, if not mine? I had 20 minutes one-on-one with really the top Emeritus Professor of model-based software engineering, a virtual pat on the shoulder from Noam Chomsky (yes, that Chomsky), a hollow notion of interest from Andy van Dam, a kick in the butt from Ted Nelson (if you are serious about text management, you must learn his work), etc., etc., etc. In the meantime, I looked for communities as well, publishing the actual research on Medium, chatting on forums like LinkedIn, RG, ... Epic fail; they think science is like TED lectures and Morgan Freeman in the movies... and oh yes, the Big Bang Theory. :D

Experience is what you get when you don't get what you wanted. (Randy Pausch, Last Lecture) I learned that this is the nature of any fundamental research and there is no reason to be angry with gravity. The Science of Being Wrong is not a formal proof of that, but, with the referenced "founding fathers", a solid explanation. Good enough for me. Side note: of course, you can't publish a scientific article that, among other things, states that the current "science industry" is the very thing information science was meant to avoid before it destroys civilization. See also the life and death of Aaron Swartz. Yes, I mean it.


Back to the conversation.

If anyone carefully reads the Turing article instead of going "yeah, yeah, I know", they find the following statements (and only these!): 

  1. We don't have a scientific definition of intelligence. 
  2. We tend to call something intelligent because it behaves somewhat like us. 
  3. Machines will eventually have enough performance to fulfil this role. 

If you also happen to know about the work and warnings of Joseph Weizenbaum (the builder of the ELIZA chatbot) and Neil Postman (the "human factor" expert), then you will not waste a single second of your life on NN-based chatbots, whatever fancy name they have. I certainly do not, although I understand what a fantastic business and PR opportunity this is. For me this is science, not the Mythbusters show where you break all the plates in the kitchen to "verify" gravity (and create an excellent sales opportunity for the dishware companies).


You also wrote that "Instead of talking in circles about how to use the word “sentience” (which no one seems to be able to define)"

I repeat: I have this definition with multiple levels, quoted in the part you "skimmed". And I use these levels as target milestones while building running information systems in real-life environments. For the same reason, I stopped trying to write about it - nobody puts in the effort to read what I write (a general problem) - and I write the code instead. Code that I expect one day to generate itself completely (partial self-generation in multiple languages for interacting multi-platform systems is done). You can find a partially obsolete intro here - GitHub, etc. are also available from there.

So, thank you for the support, but I am not frustrated with academia; I understood how it works, cows don't fly. The painful part is understanding that they never did, it's just self-marketing. I am kind of afraid of losing my job again right now, but that's part of the game as I play it.

Best,

  Lorand


2022.08.13

FYI, this is where "your kind" abandons the dialog every time and lets it sink under the guano of 21st century "communication". Been there, done that, no problem. So just one closing note while I am still interested in typing it in.

At least I hope you realize: a chatbot will never generate the previous message. I am not pretending intelligence by pseudo-randomly selecting some of the trillions of black-box rules collected by adapting to the average of the global mass. I am intelligent because I create my rules, test and improve them by using them, keep what works and learn from what does not. Another constructive definition and, if you think about it, the direct opposite of a chatbot or the whole "emerging" tech-marvel cargo cult.

We both know that "an infinite mass of monkeys in infinite time will surely type in the whole Hamlet". But please consider that this is not the way the first one was created, and none of the monkeys will be able to tell the next Hamlet from the infinite garbage. Similarly, I may have a nonzero chance of creating a conscious information system, but even if I do it as a public project on GitHub, it will die with me because nobody will be able to see it. Btw, this is a valid conclusion of Turing's article (and the reason why Vannevar Bush wrote the As We May Think article and initiated the computer era).

Namaste :-)

Wednesday, July 20, 2022

"you'll always be inferior"


It's hard to give a good answer to a bad question. Learning means you realise that you made a mistake. That you were wrong. That you missed the point. This is the meaning of the word. Real learning must feel bad. 

The thing that feels good is edu-tainment, the real danger identified by people who knew how this works (see https://neilpostman.org/ ). However, today you find edu-tainment everywhere because that has a business model - but no education that you would need to become a "knowledge worker". 

You don't feel that you are an inferior developer, but almost surely you don't even know what being a developer used to mean. Give this guy 5 minutes to explain, and listen carefully. https://youtu.be/ecIWPzGEbFc?t=3056 If you feel weird, that means you at least have a tiny chance to start learning someday.

"Why do my eyes hurt? You've never used them before." (The Matrix)

---

[Of course, deleted immediately - I don't know whether it was the YouTube AI or the author; you never know.]

Friday, December 31, 2021

Don't Look Up!



Lorand Kedves
2 days ago (edited)
I am a scientist/engineer who has been studying the root causes of this collapse of reasoning and communication for 15+ years. Someone like Mr. DiCaprio (or rather, Ms. Lawrence) plays the part more convincingly than I could ever present myself, because acting is their profession, not mine - I do things. I can only say that repeating the same message on the same level again and again is part of the problem and has nothing to do with the solution. Which could even have a chance if "famous alternative thinkers" did not waste all the resources on "delivering the message"... 

So, true. “We really did have everything, didn’t we? I mean, when you think about it.” Climate change, a comet, COVID, ... - the actual subject does not matter. I know why you don't look up. I know the science that predicted this, almost exactly by date. Unfortunately, none of the movies of this kind will prepare you to deal with it.

Dacialastun
2 days ago
So what are the causes of that collapse? You forgot to say that.........


Lorand Kedves
2 days ago (edited)
@Dacialastun Yup, because here comes the scientific yada-yada. This is the part that allows finding a comet, predicting its trajectory and, perhaps, planning some actions. I have written a ton about these things, but found nobody who would like to read it. You know, the "look up" part that I miss from this movie. 

If you happen to be my kind... I talk about informatics. From the neo-Whorfian hypothesis and its connection with Douglas Engelbart on the technical/infrastructural side, and Neil Postman with McLuhan and Licklider on the human effects (from the individual to the mankind level) - and many, many other exceptional thinkers. On the other side, its business-oriented collapse through Kay, Jobs, the Google bros, the Tesla guy, and the continuous moaning since "The Wall", etc. 

If not, well... it's all about opening up a global online communication network without being conscious of our ancient mental patterns and now obsolete world model. Mankind grew a global brain but is not ready for a headache of this magnitude. Like, you can do anything with a toy, but if you find yourself controlling a 30-ton excavator, you'd better forget not only about kidding around but also about the "trial and error" approach. And it's not about censorship, "AI Ethics", the "Social Dilemma" or "Humane technology", but about growing up, both as individuals and as a global human species. 

Is this somewhat clear, or do I sound like a moron as always (like the movie scientists in that TV show)? Anyway, read The Voice of the Dolphins from Leo Szilard (1961) for a funny intro.

benz_ask
2 days ago
I feel the don't look up supporters are take your vaccine and ask no questions..and the scientists that are fired because they reduce the fear and go against government agenda..this film is a double bluff trust me


Lorand Kedves
2 days ago (edited)
@benz_ask Exactly, and thank you for illustrating what I mean by the global headache. 

I took my vaccines together with my wife and sons and asked no questions. Not because of what I feel or whom I trust, but because I am a scientist. I know that I do not know enough to understand or judge the answer of a scientist from a different field. On the other hand, there were multiple hospital doctors in my family, and I worked at a pediatric clinic (as a nurse / first-line computer support after my BSc in CS, good old days...) for more than a year. My feelings and trust do not come from soap operas or reality shows, but from personally knowing people who make me proud to be a human being. I do what they say I should do, anytime. 

Take this one. You can be a master car mechanic, you still don't start arguing with a carpenter about how he should do his job. Because you are not a carpenter. And of course, you will trust a car mechanic in a carpenter shirt more than a real carpenter (especially when blaming them), because you understand what that fake carpenter says and don't understand the real one. 

Do I "trust the system"? Of course not, because I could explain why it does not work, and this includes fueling the rage of the masses through fake conflicts. Easy and used for centuries, my dear Watson, but with global social media and trained IT experts like myself, it's the perfect storm. Keep this one. You know, trained dogs don't bite the stick, they grab the hand. But the real danger is the one that goes for the neck. And how can you protect yourself? Beat a hundred "hero dogs" with a blue stick, another hundred with a red one, and enjoy the show. They will not even bite the stick anymore; they will attack each other while warding off the trained and dangerous ones. You are safe. :-) 

And please don't trust me. Read carefully, think, decide and then own your responsibility. You may make mistakes (just like me), but they should be yours, not mine.

Fiona Mulvey
2 days ago
Rita Levi Montalcini predicted all of this, long before 15 years ago.

Lorand Kedves
1 minute ago (edited)
@Fiona Mulvey Probably, but she had nothing to do with the actual process, and that 15+ years is only my own work, not the work of those I refer to. For example, see JCR Licklider: Libraries of the Future (1965), the result of two years of state-funded official research (please keep in mind, this was in the deep Cold War era when our current pop science was impossible...) ;-)

This book is an exact, scientific forecast of the core features of, and our interaction with, the internet, derived from the exponential growth of computing power and of the amount of "transferrable" (objective, scientific) knowledge. Informatics used to be true science, not the playground of billionaires or hype-lords "surprised" by the inevitable side effects of their own business... :-) 

„... the "system" of man's development and use of knowledge is regenerative. If a strong effort is made to improve that system, then the early results will facilitate subsequent phases of the effort, and so on, progressively, in an exponential crescendo. On the other hand, if intellectual processes and their technological bases are neglected, then goals that could have been achieved will remain remote, and proponents of their achievement will find it difficult to disprove charges of irresponsibility and autism.”

That means (translating complex statements into "Twitter-English"): blinded by the cheap marbles of global IT, we will not be able to look up. Here we are.

Fiona Mulvey
1 hour ago
@Lorand Kedves I am a scientist too, specialising in human perception and attention. Again, Nobel prizewinner in medicine and physiology Rita Levi Montalcini predicted all this decades ago, and tried to do something about it in the declaration of human duties. You should have a read, I can translate the neuroscience to twitter English, as you call it, if you need, too. I don't even have a twitter account and personally prefer primary source in general, but she wrote in Italian. Don't assume the people you are talking to are morons who speak only twitter English, there are thousands of scientists working on the same topic for years and you might learn something new.

Lorand Kedves
1 second ago
@Fiona Mulvey Don't take it personally, please. From one short sentence, how could I detect your level of education? I did check that your name exists on Google Scholar, but that is just a hope, not a unique identification... ;-) I also think that while chatting on YouTube, Twitter-English translation is important for the average audience (just see the responses to my texts here). 

However, I still hold my statement. I totally agree that many exceptional thinkers, including scientists of many fields, warned about the general collapse of communication, many more than I know about. However, the actual machinery that is now abused to the extreme against human sanity is informatics... although we say "Computer Science" (a ridiculous simplification) or "Information Technology" (yet another), because when Ted Nelson proposed that we use this name just like physics or mathematics, it was already taken by a company in the US, and that is what counts. A very funny and telling story... 

So, I have my heroes in other areas, like Neil Postman or Konrad Lorenz, and have no issue adding Professor Montalcini to the list. But I still say that the major issue today is that "informatics" has forgotten its own scientists and replaced them with cult leaders and businessmen. The number one problem is "information poisoning", and the creators of communication tools and infrastructure have a clear motivation to make the situation worse. On the other hand, without the fundamental concepts of individual reasoning and communication, we have no chance to solve any problem. This is a perfect storm, and while I respect the warnings of many wise people, I expect the solution from the experts of the field that causes the problem. 

If you are interested and give an address, I can send you a short article with more details. Lund, HCI, artificial vision? Cool stuff, we may have a few common topics. Unfortunately, I dropped RG, academia.edu and other accounts together with my PhD when I could not find a single person in the AI community to discuss the first page of the Turing Test article... Maybe you would be the first one? :-)

Sunday, November 28, 2021

The Computer Revolution Hasn't Happened Yet!




9 hours ago
14 years have passed, but I would still agree today: "the computer revolution hasn't happened yet". We have smartphones and tablets in the hands of millions of children, but that does not seem to help them with learning; we see a pandemic in action, but our response is far from optimal. Do I see it wrong, or is there an explanation at the Viewpoints Research Institute for the lack of progress, perhaps a long-term strategy? Thank you.



Yoshiki Ohshima
8 hours ago
I think Alan gave talks with the same title (but quite different contents), and another one I am aware of is done in 1997 (https://tinlizzie.org/IA/index.php/Talks_by_Alan_Kay) so it is more like 24 years minimum (but the year 1980 has been brought up a few times so you could say over 40 years). The reason is multiple factors.. but this is an explanation by Alan himself: https://www.quora.com/At-OOPSLA-1997-Alan-Kay-gave-a-talk-titled-The-computer-revolution-hasnt-happened-yet-What-parts-have-materialized-thus-far-and-if-not-why-not Also note that once Alan said: "'The computer revolution has not happened yet' is a line we should keep saying even after it has happened.".



@Yoshiki Ohshima Ah, that caused my confusion! I remembered a much younger Alan talking about the same elements, even with the same demo. So let's date it further back; it only makes the problem worse and the request for an explanation and a strategy more important.

I see a strong similarity to the AI arena: a long time, huge efforts, popularity - but only sci-fi movies, chatbots and lamentation about ethics. Back in the days when OpenAI had an open forum, I dared to ask the demigods whether they had a clear definition of "intelligence" before creating its artificial form. Because without one, the whole research is a Texas Sharpshooter exercise. 
I don't see any reason to revoke this statement. To ask better questions, we must be more precise. If we talk about the Turing Test, read and understand the definition. If we praise chatbots, read and understand the warnings of Joseph Weizenbaum, the creator of the first one, ELIZA. If we refer to the laws of robotics, know that there are four and understand the very message Asimov tried to deliver with the 0th. And so on...
Based on such grounds, there is a scientific/engineering-level definition of intelligence and a roadmap to create it. Without it... it's like waiting for the Moon landing "to happen". It never just happens. It is created by the organized effort of a huge number of highly trained and selected professionals.
That is the very reason informatics was defined by Vannevar Bush. And you know that.

As I see it, we have had huge progress in the "tooling and penetration" of IT. Like, smartphones are way beyond the OLPC. OK, not all children have them, but look at those who do: TikTok "happened", but I don't think that's Alan's revolution... Should I list the direct consequences, from the Marvel Universe-level "visions and heroes" to the NSA and Cambridge Analytica, and the other global problems as side effects? 

Do you accept my statement that 1: the result is not the intended one, 2: the direction is not promising and 3: waiting for a magical upturn after 14/24/40 years is not a strategy?
You are the Viewpoints Research Institute, you are the professionals, I am just a random outsider on YouTube with some background and inconvenient questions. What is your point of view today about the root causes of this stall? What is your strategy?

Thank you.



Yoshiki deleted the second comment - I deleted the first one.

Wednesday, October 20, 2021

The Myth of Artificial Intelligence - YouTube



My comment on YouTube
@scientious "I was not aware that the publicly available information was this far behind until I saw his book."

I also see a huge gap in not only the common but also the academic understanding of the goals and even the fundamental concepts of AI. (Or maybe I still mourn my trashed 2.5 years of PhD research on this topic? :-D ) My favorite is the complete misunderstanding of the Turing Test. Not what we think we know about it, but exactly how Turing defined it in the first paragraph of his article, COMPUTING MACHINERY AND INTELLIGENCE, 1950. 

To clarify for the public: the Turing test does not answer the question whether machines can think, or in other terms, whether they can ever become intelligent. It replaces that question, and the article explains why and how machines will inevitably gain the capacity to fool a human over time. Which is true, especially now that we humans have lowered the bar into the mud... Just for example, recall that political messages must be "relatable" at the level of a 12-year-old (see also Idiocracy), or check the "smart chaos of social media" (see also Cambridge Analytica). EPIC :-) :-( 

For me, any meaningful research towards AI must start with the question Turing replaced. Give a scientific definition of the terms "machine" and "think". (Disclaimer: that's what I do ;-) ) Can you share some links to your research, btw?

The author removed my response from the conversation twice, even though it was not on the front page but under this discussion... Epic again.

Friday, October 21, 2016

Engineering and Scientific Creativity

5 Things Everyone Should Know About Machine Learning And AI

I wanted to talk about 2 things everyone must understand about creativity before that.


Noa Zamstein
I guess to sharpen the issue, is the ability to express humor only a matter of how much data you have been exposed to over the years, or - if to be a bit philosophical - is there this extra "something" that for some reason cannot be just a mere extrapolation of learned lessons? Why are some people astute or funny and can blurt out this brilliant concoction of ideas from all walks of life whose sum is a witty conclusion that we all nod to and can understand but would never think about saying at the right moment in the right context?


Noa Zamstein Why do you want to teach computers something that human beings seem to have lost while working with them? Human and computer intelligence are just like flesh and bones: not to be mixed or replaced with each other; it's not a race but can be a symbiosis. Otherwise, we die. Not because AI would kill us - but because we are not wise enough to live with our power. Just look around. We don't have time for things like "analyzing humor"... right now we are a lethal tumor. ;-) (to sharpen the issue)


Nikola Ivanov, PMP
Lorand Kedves I think you make a good point, but I would dispute that humanity has lost creativity because they are working with computers. The act of building computers themselves and associated software and applications is a creative process. Computers are another medium for expressing ideas and feelings. Just look at all digital art and entertainment, blogs, etc.


Nikola Ivanov, I also know the marketing stuff, but coding and creativity are different animals (and I have spent the last 20+ years solving "impossible" design and coding tasks).

"In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it." (Richard Hamming (2005) The Art of Doing Science and Engineering)

To me, creativity is science, but IT became business, and business loves engineering, not science. Do you really know that all the "new" inventions like the internet, OOP, the tablet, ... came from ARPA (which then became DARPA) or PARC around 1970? The past decades were fantastic in engineering(!): reducing size and consumption and increasing capacity and speed, which made those old inventions physically possible.
Sure, Jobs, Gates, Zuckerberg, Musk, etc. should be considered "creative" - sorry, that place is occupied by Lovelace, Neumann, Turing, Shannon, Hamilton or Charles Simonyi - but it takes a lot of time and effort to understand what they did (and today we are too busy chasing profit and fame, with no time left to learn and understand them)...
I did watch the growth of digital art, etc., as an IT expert and thinker - but all I see is quantity and marketing beating quality.

Sorry to be this short and rigid. I have written hundreds of pages about this, and I see no point in trying to dispute it, I am hopelessly bad at that. I would rather recommend reading Technopoly (Neil Postman, 1992!), Civilization of the Spectacle (Mario Vargas Llosa), or watching this brilliant lecture from Bret Victor: http://worrydream.com/dbx/ to get a glimpse of what I am trying to talk about.


Nikola Ivanov, PMP
Lorand Kedves I get your point and will check out Technopoly. It appears that you are dividing creativity into two separate categories by tying it either to science or to "marketing" and "profit." I can see how the two types of creativity can be different, but to argue that creativity just does not exist anymore is false. Sure, there is a lot of junk out there that people pay money for, but there are also a lot of innovative, creative, clever, elegant, and useful ideas and products.


Nikola Ivanov, I think we have no argument here: I wrote "human beings seem to have lost". I mean: not enough to keep our civilization alive: a 1m jump to cross a 2m gap equals zero considering the result... ;-)

My original point was that we try to push fundamentally human values onto machines (from humor to ethics), while we measure humans in more quantitative and less qualitative ways. Naturally, because quantitative improvements are easier to plan, so this method is dominant in a business-oriented environment. Qualitative, unplanned, wild changes (that is "creativity" in my world) "should be done by someone else". (The "20% free time at creative companies" is actually mind farming, not much more.)

See also: “Never invest in a business you cannot understand.” (Warren Buffett) - sorry, then who will pay ANYONE trying to bring up new ideas (and naturally failing constantly for years)? Or the candle problem: https://en.wikipedia.org/wiki/Candle_problem Or a favorite joke of mine (sorry for the weak translation):

The king spotted a shabby guy in his palace, and asked his counselor
- Who is this beggar?
- Your astronomer, sire.
- What does he do?
- He calculates the routes of your ships, the timing in farming, etc. Your empire depends on him.
- But why does he look like that?
- We pay him 5 pounds.
- No way! Give him 100!
- Sire, this is the only way to have a REAL astronomer... If we gave the royal astronomer 100 pounds, he would soon be replaced by that stupid son-in-law of your treasurer...
:-)


Nikola Ivanov, to make the division clearer.

There is creativity in finding the best possible answer to a complex, yet unanswered question. That is engineering, and it has great importance. This is what we should thank for our technological improvements of the past few decades.

And there is the creativity in finding a truly important question among the myriads of possible questions. That is science, and it requires a fundamentally different approach along the whole process.

We know a lot about how to support engineering creativity - but scientific creativity seems to be out of sight, thanks to its direct opposition to what we call "economy" or "society" today.

An example: engineering creativity looks for an answer to how to handle the garbage continents in the oceans. Scientific creativity asks: "why the f**k do we CREATE garbage???" And stops, because this is a much better and more fundamental question; the job is done, the rest is engineering. Business guys say "this question cannot be asked", tap their heads and kick science out of the way of making profit. Science leaves by saying "Have a nice funeral, guys. Don't forget the fireworks..."

Good morning Vietnam! :-D

Monday, May 16, 2016

Knocking on Heaven's Door :-D

At OpenAI gym.

May 14 08:31
I would like to ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence" without the artificial part; the "A" is the second question, after defining the "I". At least to me :-)

May 14 21:47
@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much even when trying to stay short.

@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?

If it's about actions (like playing Go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and: an inspector who can understand this motivation!). In this way even a safety valve is "intelligent" because it has a motivation and controls a system: it is "able to accomplish a goal". Or a brake control system in a vehicle, a workflow engine or a rule-based expert system.

However, the white-box approach (how it works) is more promising. At least it forces us to clean up foggy terms like "learn" or "quicker", and to decide how we should deal with "knowledge representation", especially if we want to extract or share it.

In this way, I have starter levels like:
  • direct programmed reactions to input by a fixed algorithm;
  • validates inputs and self states, may react differently to the same input.
So far this is fine with hand-typed code. But you need a tricky architecture to continue:
  • adapts to the environment by changing the parameters of its own components;
  • adapts by changing its configuration (initiating, reorganizing, removing worker components).
So far it's okay, my framework can handle such things. However, the interesting parts come here:
  • monitors and evaluates its own operation (decisions, optimization);
  • adapts by changing its operation (writes own code);
  • adapts by changing its goals (what does "goal" mean to a machine?)
For me, at least, artificial intelligence is not about the code that a human writes, but an architecture that can later change itself - and then a way of "coding" that can change itself. I did not see anything related to this layer (perhaps I looked too shallowly), which is why I asked. A rough sketch of the lower levels follows below.
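A minimal sketch of the first few levels in plain Java (every name here is made up for illustration, not taken from any real framework):

  // Level 1: direct programmed reaction to input by a fixed algorithm.
  interface Reactor { double react(double input); }

  class FixedReactor implements Reactor {
    public double react(double input) { return input * 2.0; } // hard-wired rule
  }

  // Level 2: validates input and its own state, may react differently to the same input.
  class StatefulReactor implements Reactor {
    private double wear = 0.0;
    public double react(double input) {
      if (Double.isNaN(input)) return 0.0;  // input validation
      wear += 0.01;                         // self state changes with use
      return input * (2.0 - wear);          // same input, different answer over time
    }
  }

  // Level 3: adapts to the environment by changing its own parameters.
  class AdaptiveReactor implements Reactor {
    private double gain = 2.0;
    public double react(double input) { return input * gain; }
    public void feedback(double error) { gain -= 0.1 * error; } // the environment tunes the parameter
  }

The higher levels (reconfiguring, self-monitoring, code-writing components) are exactly where plain class declarations like these stop being enough, and an architecture that treats components and their wiring as data has to take over.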

May 16 06:10
@gdb Okay, it seems that my short QnA is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.

Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may remind you of the myriads of programming languages, libraries and tools that have been here for decades - for the same, relatively small set of tasks. (I have been designing systems and programming for decades, long enough to feel the pain of it - see Bret Victor for more.)

So my point here: artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults, like Einstein, Gandhi, Neumann or Buddha. It only has to be better than what we are today: dreaming children playing with a gun. Hmm... lots of guns.) However...

When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge in order to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of code. Because its ability is not the code, but the architecture that you can build but can't keep in your brain and use as fast and perfectly as a machine.

By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put in it? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you actually START working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)

Sorry for being an ogre. Have fun!

Sunday, March 13, 2016

AI Go Live :-)

LinkedIn - Dave Aron: Does a Computer Beating The World Go Champion Matter?

The short answer is: yes, very much. Firstly, it is a kind of a benchmark as to how far artificial intelligence is along. Go is a very difficult game, and a game of perfect information – there is no luck involved. Secondly, DeepMind have pushed some of the boundaries and techniques in intelligent decision making and optimization.

But third, and most intriguing to me, is that I believe in the future, we may solve tough real world problems by encoding them as game positions.

Do you know about The Treachery of Images?

This is not a pipe.


This is exactly the same: there is a fundamental difference between Go and Life.

In Go you know all the rules, and play in an isolated environment.

In Life, you don't: none of the above preconditions apply!

This is what the whole history of science is about: realizing that we were wrong, finding new rules (a slightly deeper understanding of the fundamental laws of nature), and stepping forward as a civilization by using them. And then realizing where we were wrong again, and doing it all over.

So showing that we now have enough performance to run a massive algorithm that finds, evaluates and optimizes billions of actions within known rules better than a human player is, well, kinda nice. And yes, it is important, because now you can calculate the trajectory of a rocket or satellite with more precision than legions of human computers with pencil and paper.



But to solve (and create...) problems, it is still and always us. Human beings.

However, to solve the problems that WE create, we must return to the level we named our kind after: Homo "sapiens". Wise, not intelligent. And here a game-playing toaster does not help much, it only distracts us from our own tasks and responsibility.

An important tool, but definitely not the key to the future.

Wednesday, November 18, 2015

Dust state

https://www.linkedin.com/pulse/critical-new-role-system-simplification-specialist-roger-sessions

Roger, I could not agree more, with one note: the definition of "mathematically". I do not have the best feeling about that term, which tends to end in complex expressions with weird characters. It is quite easy to frighten away programmers like me with that, even though I know that, handled properly, it is the best way to express something. And even though at one of my workplaces they always told me "this is not university, it has to work!" But they gave me time to make it work, and used it. Good old days.

I think the situation is similar to where we were before John von Neumann.

At that time, reacting machines were built as custom logic circuits, and they had to fight for every second with the slow and thirsty hardware. Then he came along with his brilliant essay saying "don't solve the problem as a whole, but separate the components that are required for a reliable IT solution, make hardware for those, and instruct them to solve the problem."

That must have looked stupid to all engineers: make it even slower and more complicated? But their actual designs simply could NOT be verified or optimized, not to mention built into chips; they had to build everything with their hardware tools, and their knowledge was buried in the metal. Solving real problems by direct building was so complex in itself that higher-level constructs were simply over the horizon. Simply put: the whole approach was not industrial.

Replace the "hardware" with "big software components, toolkits, platforms", and you have the IT world today.

Everyone hacks together big "systems" from big, custom components; there is nothing you can rely on, because everything is done individually by someone, built upon (and locked to) constantly moving tools, chasing constantly moving targets. There is no reusable terminology and no actually working toolchain that you could use to express your needs on a higher level, above being locked to your tooling.

That part should encapsulate the complexity of the atomic segments of problem solving, and offer the solution in a global, uniform, usable and efficient way. Nonsense. Unless you spend 20 years learning to see the structures instead of the solutions and, for example, to separate the needed code from configurations and repetitions. Like me. Unfortunately but naturally, this separates me from "the rest of the world".

I can show you a prototype of a system that contains its own definition, and much of my knowledge about programming. It can generate its own Java sources and projects, but it is "theoretically" independent of the language itself. I had to create it because its design is so complex that the only way to make it work is to let it actually execute its own configuration.
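Not the prototype itself, only a toy Java illustration of the bootstrapping idea (every name here is invented): a model that can describe its own structure with the same elements it uses for everything else, and emit source code from that description.

  import java.util.List;

  // A toy "meta" model: a type is a name plus attributes.
  record Attribute(String name, String javaType) {}

  record MetaType(String name, List<Attribute> attributes) {

    // Emit a plain Java record declaration for this type.
    String toJavaSource() {
      StringBuilder sb = new StringBuilder("record " + name + "(");
      for (int i = 0; i < attributes.size(); i++) {
        if (i > 0) sb.append(", ");
        sb.append(attributes.get(i).javaType()).append(' ').append(attributes.get(i).name());
      }
      return sb.append(") {}").toString();
    }

    // The model describes its own structure, too - the "contains its own definition" part.
    static MetaType describeItself() {
      return new MetaType("MetaType", List.of(
          new Attribute("name", "String"),
          new Attribute("attributes", "List<Attribute>")));
    }
  }

  class Bootstrap {
    public static void main(String[] args) {
      // prints: record MetaType(String name, List<Attribute> attributes) {}
      System.out.println(MetaType.describeItself().toJavaSource());
    }
  }

The interesting part is the circularity: the description and the thing described are the same structure.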

Of course, it is no different from the Wright brothers' "thing". It is just a bunch of hacks compared to the super-sophisticated trains that we use to "solve problems" today. Okay, okay, it can fly, but who cares? Who would waste time trying to understand its weirdness? How does it carry anything? This must only be a toy, huh?

The first "real" question is: how do you sell it? Well, if you understand what it means, all big names would die to have it and lock it in a box, because it simply changes the IT business. The next is: does this ever solve a "real" problem? Come on, your "real problems" are BORING, and exist mostly because of short vision, unforced design errors and mindless cult-coding... ("... you are so rude, Sherlock" ;-) )

So, regardless of the trust I received recently, it's on the waiting list. For now I write a frontend for a webshop for the monthly wages and watch "transit gloria mundi".

Thursday, June 18, 2015

Abstract thinking

Abstract thinking
As an Architect, abstract thinking is inevitable. That being the case, what principles should be used during the phase of abstract thinking in a project, what techniques help to grasp and identify the abstractness, how to develop concepts, and how to test whether the quality of abstract thinking is good?





My 2 cents...

You are not born with abstract thinking; you learn it as you learn your language, then logical thinking, finding similarities, separating relevant attributes from irrelevant ones.
You can apply this technique to programming. First you see tasks, but as you solve many of them, perhaps in various environments, languages and requirements, similarities emerge. You start "feeling", then seeing a model behind the processes. It takes time, which can be shortened by analyzing high-quality code and reading good books (like Design Patterns), but you learn the most from refactoring your own code. Later, you do most of the refactoring in your head, before typing any code. You may also like this: http://hajnalvilag.blogspot.hu/2014/10/coping-with-infinity-digitization.html

The picture that always helps me is a gang of dwarves (that is, your classes/components). They are happy when they have only one task to do (single responsibility) and very few commands to deal with (narrow API). They don't want others to tell them how to work, nor to do several different things. Split your task among this gang: the happier they are, the better your design is (see the sketch below). At first, your split is very likely to be weak, but it will improve with each task you do. Too much talk means you should move the boundaries in the responsibility map; too complex code means you may split it among multiple dwarves.
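A tiny Java illustration of the gang (all names invented for this example): each dwarf has one task and a one-method API, and the coordinator only wires them together.

  import java.util.List;

  // Each "dwarf" has a single task and a narrow API.
  interface Reader    { List<String> read(); }
  interface Formatter { String format(List<String> lines); }
  interface Writer    { void write(String text); }

  // The coordinator does not know (or care) how any dwarf does its job.
  class ReportJob {
    private final Reader reader;
    private final Formatter formatter;
    private final Writer writer;

    ReportJob(Reader reader, Formatter formatter, Writer writer) {
      this.reader = reader;
      this.formatter = formatter;
      this.writer = writer;
    }

    void run() {
      writer.write(formatter.format(reader.read()));
    }
  }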

Avoid duplication, and always consider how much these dwarves must "know" about each other. For example, instead of setting some parameters directly from outside in a service, consider offering predefined "work modes" from which the user component can choose, as sketched below. It is an extra cost in the beginning, but a great gain when you have to extend or refactor the service.
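For example, a sketch of such "work modes" in Java (hypothetical names): the caller states an intention, and the mapping from mode to individual parameters stays inside the service, free to be refactored later.

  // Instead of exposing setTimeoutMs(), setRetries(), setBatchSize() to every caller...
  class TransferService {

    enum WorkMode { FAST, RELIABLE, BULK }

    private int timeoutMs;
    private int retries;
    private int batchSize;

    // ...the caller only picks a mode; the parameter details stay hidden here.
    void configure(WorkMode mode) {
      switch (mode) {
        case FAST     -> { timeoutMs = 500;   retries = 0; batchSize = 10;   }
        case RELIABLE -> { timeoutMs = 5000;  retries = 5; batchSize = 10;   }
        case BULK     -> { timeoutMs = 30000; retries = 2; batchSize = 1000; }
      }
    }
  }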

A good design makes you ask good questions before coding, and it contains segments that are not required right now but would be nice improvements, which also prepares you for many of the late change requests.





Andrea Baruzzo
...
I think that the term abstraction can have a wider meaning, that of an abstract data type (in the computer science sense) or that of the concepts of a specific problem domain.

We need to clarify the accepted meaning of the term before starting the discussion.



Krishnan Ramanujam
Agree with Andrea. Thanks..




Nice try, Krishnan, but you are the asker, so you should decide on which level you want to talk. The answer is 42, but what was the question? :-)

By the way, this is again a general/special difference: the term "abstract" is, in general, a way we handle similarities and differences; a specialization of it is how we translate this concept into programming terms, and finally into an actual programming construct in a language. If I translate the previous sentence into system design, the process is the same: "ideas" -> "abstract programming models" (like UML, charts, etc.) -> "language constructs" (classes, interfaces, etc.).
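The same chain in a minimal Java example (invented here for illustration), going from the idea, through the abstract model, to a concrete language construct:

  // Idea: "something that can hold other things and give them back".
  // Abstract programming model: a Container of T with put/get operations.
  interface Container<T> {
    void put(T item);
    T get(int index);
  }

  // Language construct: one concrete specialization of the model.
  class ListContainer<T> implements Container<T> {
    private final java.util.List<T> items = new java.util.ArrayList<>();
    public void put(T item) { items.add(item); }
    public T get(int index) { return items.get(index); }
  }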



Krishnan Ramanujam
Say in a domain, we envision a possibility that can solve a problem in that domain. If we are given a blank slate, what are the questions to ask that will help us develop relevant concepts, what to consider so that a concept is developed rather than a specialized solution is developed even in the first pass of developing something concrete. Thanks..




Well, that is related to the "abstract" approach, I guess. And if there were a silver bullet, everyone would use it. Sorry, there is none (yet... on the other screen I am watching my own MetaEditor, trying to find out the exact, working definition of meta components like Type, Attribute, etc., and that's VERY frustrating, so it makes me procrastinate by chatting here :-) )

So for now, the answer is not clear. You "should" focus on creating narrow APIs and independent components - but you will surely fail at creating them by mere guessing.

If you have a blank slate and low courage, then
  • 0: write a document that describes your system WITHOUT considering existing tools and solutions: why do you make it at all, what are the services you provide, what are the problems with providing them (speed, performance, etc), how do you plan to cope with them,
  • 1: check what others do on the field, and steal anything that looks usable,
  • 2: check against your concept, what is missing, what is too much in what you have stolen,
  • 3: hack together an ugly but fast first implementation,
  • 4: check it again and steal anything that looks usable,
  • 5: create a better implementation (cleaner structure, more details),
  • 6: goto 4 or 1, repeat.
In this way, you will either go bankrupt, because you have no time and budget to finish the system, or create an acceptable solution for your stakeholders, with which none of the parties are fully satisfied. Sorry, no silver bullet; by the way, almost all development projects end in this state, so don't be ashamed.

But you learn, and the process gets faster with each iteration. Watch the older guys playing the same game, learn from their approach, successes and mistakes, and let them fail until nobody else stands in front of you and you have to put your own bet on the table.

Good luck! :-)