
Saturday, May 10, 2025

What Is The Future Of Programming Languages?



The questions are excellent; the answers are "state of the art", which is not a compliment in this case. Here is a different take on the graph part.

  1. You have two fundamentally different ways to transfer and curate knowledge, A: storytelling (very human, imprecise) or B: knowledge graph building (hard for a human, as precise as can be). 👉 JCR Licklider, Libraries of the Future (1965, book).
  2. STEM knowledge is always B: a graph. When you have a problem in physics, biology, math, medicine, ... it's NOT about how you sing it or what language you use, but about building a precise network of property pages filled with data and linked to each other. The very terms (labels of data and links) are also graphs (DSLs). Information systems are graphs, too. In the computer's memory, you have flowcharts of the algorithms, and you use the memory to hold the content of those property sheets. 👉 Ivan Sutherland Sketchpad (YouTube)
  3. EVERY program is a combination of a DSL set (the "meta" layer of classes, members, functions and their parameters) and a bunch of stories (the code of the functions and procedures). Quoting Bob Martin: software is assignment statements, if statements and while loops 👉 Future of Programming (YouTube). (See the sketch after this list.)
  4. The real problem is that text-based tools make us focus on the storytelling: we only see the big list of features or use cases, instead of the DSLs that allow us to describe and solve the atomic problems (modularity, KISS, DRY, SRP, ...). We have already solved every possible atomic problem literally millions of times, yet we keep repeating them (copy-paste or, in "modern" cases, LLMs🤦‍♂) in the endless possible combinations, and that pile grows every day. 👉 Alan Kay's Power of Simplicity (YouTube)
  5. Introducing new programming languages that do the same has one effect: it erodes even the "burden" of accumulated human experience and the existing codebase, starting it all over again without solving the fundamental issue.
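To make point 3 concrete, here is a minimal, hypothetical Java sketch (the names and the sensor example are mine, not from any of the referenced sources): the enum plays the role of the DSL (the "meta" layer), the property map is one of the data pages, and the "story" is nothing but assignments, an if statement and a loop.

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a program as a DSL (meta layer) plus a "story" (procedural code).
public class DslAndStory {

    // DSL part: the named terms (labels of data) of the domain.
    enum SensorProperty { ID, TEMPERATURE, ALARM_THRESHOLD }

    // A "property page": data attached to the terms of the DSL.
    static Map<SensorProperty, Object> sensor(String id, double temperature, double threshold) {
        Map<SensorProperty, Object> page = new EnumMap<>(SensorProperty.class);
        page.put(SensorProperty.ID, id);
        page.put(SensorProperty.TEMPERATURE, temperature);
        page.put(SensorProperty.ALARM_THRESHOLD, threshold);
        return page;
    }

    // Story part: nothing but assignments, an if statement and a loop.
    static List<String> alarms(List<Map<SensorProperty, Object>> sensors) {
        List<String> result = new ArrayList<>();
        for (Map<SensorProperty, Object> s : sensors) {                    // loop
            double t = (Double) s.get(SensorProperty.TEMPERATURE);         // assignment
            double limit = (Double) s.get(SensorProperty.ALARM_THRESHOLD); // assignment
            if (t > limit) {                                               // if statement
                result.add((String) s.get(SensorProperty.ID));
            }
        }
        return result;
    }
}
```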

Problem: Information systems are graphs. Storytelling is not the right way to interact with graphs. The blamed imprecision is a manageable "human error" in the case of graphs, but an inevitable, fatal blocker in text-based programming. Teaser 👉 Bret Victor, Future of Programming (YouTube); hard-core answer 👉 Douglas Engelbart: Augmenting Human Intellect (report)

Solution: STEM languages are graphs (DSLs). THE future programming language is the DSL of information systems. The same that we have in physics, mathematics, biology, ...

Question: has anyone read this far? 😉

---[ discussion under a comment ]---

CallousCoder
Your behaviour is like the old horse and cart people against the automobile. It's nonsensical, the technology is here to stay. So either adopt it or you'll go extinct. You know that good developers are terrible managers, right? ;) Also I don't get it, what the resistance is. Whether you ask a junior or medior to implement something or you ask an LLM. It's no different other than that the LLM just does it and doesn't nag especially after the 2nd or 3rd iteration ;)

CallousCoder
“bro” is 52 years old and didn’t take philosophy but EE and CS.

lkedves
Age is just a number (happens to be the same...). Check the Mother of All Demos, that was real technology behind the Apollo program, while this chatbot AI is just another stock market bubble. Side note: before the previous AI winter, we won Comdex '99 with a data mining / AI tool. Back then people could read the first paragraph of Turing's article, the definition of "the test", instead of trying to implement cartoon dreams... (including a Nobel prize-winning psychologist)

But you got the point with "Modern software is a disease!" LLMs learn from their sources, kids copy-paste them into real software, and LLMs learn from them again. Quantity goes up, quality goes down. After the first flops, LLM companies use human slaves to avoid stupid mistakes in everyday tasks. Regardless of whether we accept this as a solution, who will censor the generated code?

Dead end.

cyberfunk3793
AI is obviously going to fix data races and buffer overflows and every other type of bug you can think of. You don't understand what is coming if you think it's just hype. I don't know if it will be 5 years or 50, but at some point humans will only be describing (in human language) what they want the program to do and reviewing the code that is produced. Currently AI is already extremely helpful but still makes a lot of mistakes. These mistakes will get more and more rare and the ability of AI to program will far exceed humans just like computers beat us at chess.

TCMx3
Chess engines did not need AI to curbstomp us at chess. Non-ML based engines with simple table-bases for endings were already some 700 points stronger than the best humans. Sounds like you don't actually know very much about chess engines lmao.

CallousCoder
btw playing chess with an LLM is a hilarious experience. If it loses it brings back pieces from the dead or just “portals” them into safety.

lkedves
[retry; I promise I will leave if this disappears again]

You may have missed it, so I repeat: we won Comdex '99 with a data mining / AI tool (and there is nothing new in this field except the exponential growth of the hardware). Since then I have worked on refining knowledge graph management in information systems on every single project I touched, often delivering "impossible missions". I work together with the machine because I follow a different resolution of the AI acronym: Augmenting Intellect (Douglas Engelbart), on systems that are of course smarter than me (and have generated part of their own code from these graphs for years). Right now I am at the national AI lab, in a university applied research project that I will not try to explain here.
You will find some of my conclusions, with references to sources, in my comment added to this video (11th May). 

I know the pioneers who predicted and warned about what we have today (recommended reading: Tools For Thought by Howard Rheingold, you find the whole book online). One of them is Alan Turing, who asked people not to call the UTM a "thinking machine" and wrote an article in Mind: A Quarterly Review of Psychology and Philosophy about the dangers of making such claims without proper definitions. Poor man never thought that in a few decades "IT folks" would think this was an aim. Or Joseph Weizenbaum, the guy who wrote the first chatbot, ELIZA. 

I know why your dream will never happen: informatics (in its original meaning, not the business model Gates invented) was against this fairy tale. LLMs just try to prove that old story from statistical mechanics, that infinite monkeys in infinite time will surely type in the whole of Hamlet. The problem is that we don't have infinite time and resources, and the goal is not repeating Hamlet but writing the next one. Those who initiated informatics made this clear. Start with Vannevar Bush: As We May Think, 1945. 

@TCMx3 , @CallousCoder - thanks for your answers... 🙏 Another excellent example - in chess, you have absolute rules.

In life, we know that all the laws we can invent are wrong (Incompleteness theorem), and thinking means improving the rules while solving problems and taking responsibility for all errors. The ultimate example is the Apollo program with Engelbart's NLS in the background; that's how THEY went to the moon. We go to the plaza to watch the next Marvel story in 4D, now with the help of genAI. As for predicting the next 50 years, look up "Charly Gordon Algernon 1968" here on YouTube.

---[ This answer "disappeared" for the second time so I left the place ]---

Thursday, March 20, 2025

Immersive Technologies Policy Primer - reaction

On LinkedIn



This looks like a solid overview of the current "state of the art". What I don't see is the background, 𝐭𝐡𝐞 𝐫𝐞𝐚𝐥 𝐠𝐢𝐚𝐧𝐭𝐬 𝐨𝐧 𝐰𝐡𝐨𝐬𝐞 𝐬𝐡𝐨𝐮𝐥𝐝𝐞𝐫𝐬 𝐰𝐞 𝐚𝐥𝐥 𝐚𝐫𝐞 𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 (and were afraid to look down so now we fall like a stone, just as predicted -> https://youtu.be/KZqsWGtdqiA?t=102 ).

When discussing the social effects of communication technologies, where is the reference to Neil Postman? -> neilpostman.org
https://youtu.be/QqxgCoHv_aE

When talking about education, where is PLATO?
https://youtu.be/THoxsBw-UmM

When discussing AR/VR, where is Ivan Sutherland?
https://youtu.be/AFqXGxKsM3w
Or at least, Alan Kay (among others, the real inventor of the tablet)...
https://youtu.be/pUoBSC3uoeo?t=5061

When thinking about informatics in general, where is Douglas Engelbart?
https://youtu.be/O77mweZ8-RQ?t=22
Or Ted Nelson?
https://youtu.be/KdnGPQaICjk

...

---

"Primer"... 🧐

“Those who cannot remember the past are condemned to repeat it.”
George Santayana

---
... and I guess this date should be 1994:
[3] Paul Milgram and Fumio Kishino (1884)

Wednesday, February 5, 2025

The Experience of Being Wrong

How could I be THIS STUPID???

Now, please stop reading and remember when you asked this, not just lightly but with the strange mix of real anger and shame.


...


That was the last time you learned something really important, and this is the only way to do it. It happened to me yesterday. At age 52 I take this as very positive feedback: I can (with great difficulty, but still) lower my ego and learn. Here is the story.

I have been repeating for years in every context that JCR Licklider separated transferrable and non-transferrable knowledge, and a root cause of today's mess in IT (and consequently, everywhere) is the fact that we forgot this. Banging my chest like a gorilla, like here...

But yesterday, as my young colleague was preparing a proper scientific publication, we started looking for the exact reference. To my greatest surprise, I did not find it. In desperation, I started reading Libraries of the Future again, and realized (thankfully and ironically on page 2!) that...


Licklider never wrote that.

Here is the actual quote:

We delimited the scope of the study, almost at the outset, to functions, classes of information, and domains of knowledge in which the items of basic interest are not the print or paper, and not the words and sentences themselves —but the facts, concepts, principles, and ideas that lie behind the visible and tangible aspects of documents. The criterion question for the delimitation was: "Can it be rephrased without significant loss?" Thus we delimited the scope to include only "transformable information." Works of art are clearly beyond that scope, for they suffer even from reproduction. Works of literature are beyond it also, though not as far. Within the scope lie secondary parts of art and literature, most of history, medicine, and law, and almost all of science, technology, and the records of business and government.

He talks about "transformable information", not "transferrable knowledge".


What happened? Had I forgotten to read???

No, but I was not able to read it at that moment. This paragraph held a key to a question of computerized knowledge management I had struggled with for decades, literally. When it hit me, my mind was blown immediately and started restructuring itself. I followed, remembered, and kept quoting my own revelation instead of the text that I thought I was reading. 


But why?

Knowledge in our minds is always a network: some attributes of and relations between "things". To store or transfer our knowledge of a topic, we "export" the related part of this network in a presentation, text, figures, pictures, videos. Other people will try to integrate this content with their existing knowledge. Here comes the trick: for those who can do this without changing anything in their minds, this was not "information" because information is only the part that you did not know and could not figure out from your existing knowledge.

The first time, I could not integrate Licklider's original message with my existing knowledge; it only triggered a change that took a long time. Now, when I revisited this paragraph, it was new again, but this time I could actually read it and integrate it with my current knowledge. Fun fact: the word "respect" does not mean "obey" or "accept" but re-specto: examine it again.


Real information is like good chilli: it burns twice.


So, how do I read the message now?

This is a simple way to tell the difference between transformable and non-transformable information. I quoted the rest correctly: informatics (the "libraries of the future") should work only with transformable information.

  • Transformable means you can say it in hundreds of ways and the meaning will be the same. You focus on the knowledge graph in your head and try to build exactly the same one in the audience: a physical phenomenon or a medical treatment. 
  • Non-transformable information focuses on the message itself and the feelings created by it (not less important but totally different). With a different tone, wording, or face, the message and its effect change significantly.


A less nerdy example

I think 99% of modern pop music is not even information: it repeats the same message about a boy, girl, love, hate, etc. that the audience is already familiar with. (hashtag metoo?)

But the Sound of Silence is a perfect example of non-transformable information: I already knew the original song, but this presentation by Disturbed delivered the message (which happens to be closely related to the topic of this post).


[This post can be a pair of my more formal article, The Science of Being Wrong (a possible definition of informatics) as here I defined "infonauts" as experts in being wrong...]

Tuesday, January 14, 2025

Responsibility in "Mainstream IT" vs "Golden age informatics"

Charles Hoffman

I have been saying for years and years that more business professionals and liberal arts majors should be paying attention to artificial intelligence. Let me ask a question. When, or if, the tools that these professional computer scientists create go terribly wrong, should the computer scientists be held accountable in any way? If someone relies on these tools based on something some marketing campaign for artificial intelligence proclaimed, who will be held responsible? Buyer beware?

Humans in the AI Loop. AI in the Human Loop. Humans in Control
There is one similarly dangerous aspect of this question: the assumption that there are no "IT people" who are (with the necessary formal education, knowledge and experience) much more worried about this situation, and that therefore outsiders should enforce discipline on us.

Listen to Bob Martin not only pointing out the core issues but giving an explanation and a possible cure for them as well. The problem is that this is too technical for the outsiders and absolutely not popular with the vast majority of self-proclaimed IT people who happened to get old and established without ever receiving proper education. So, they teach the next generation their cargo cults, blockchain or mainstream AI being the newest ones. [edit: ouch, forgot about the IT cult leaders who first got insanely rich, then started "changing the world for the better", gaining fame along the way and attracting followers to continue their "heritage"... 🤦‍♂️ ]
https://youtu.be/ecIWPzGEbFc?t=3057

I see nothing special in IT. We live in the predicted global Idiocracy and IT is not immune to it.
Thanks for the video, Future of Programming. That is exactly the sort of thing I am talking about.


Lorand Kedves

Yes, you are "talking about" it, while I can list the goals and names of the true IT pioneers, the best minds of the planet. They knew that any technology is exactly as dangerous as it is beneficial. The difference is that other technologies change what you can DO in the real world, while informatics changes what you SEE and THINK of it! Today we treat their warnings as a damned bucket list and of course, that is ignored as a CS PhD research topic.
https://bit.ly/montru_ScienceWrong

The roots were cut when IT became a for-profit venture funded by general business (M$) and rich daydreamers (Apple). I think you will like the ultimate arts person, Neil Postman, trying to educate Apple ("think different" 🤦‍♂️) folks in 1993...
https://youtu.be/QqxgCoHv_aE

IT is the "One Ring" (ref Bob Martin: you can't do anything without "us"). I wanted to use it in direct resource management (1) or education (2) but was ignored. Working in finance is like carrying that ring through Mordor, but at least some people listen here, and I can even find "established" critics like Prof. Richard Murphy (3).
(1) https://bit.ly/montru_OpenSourceCivilization
(2) https://bit.ly/lkedves_Studiolo
(3) https://youtu.be/k5Yo3Y_SMow


Charles Hoffman

This is all interesting. One can compare this to things like the creation of "high fructose corn syrup" and its effect on the food industry and people's health, mining techniques that destroy the environment, the way healthcare is practiced in the United States, the way the pharmaceuticals industry works to get people to pay for pills for the rest of their lives.
Only if you ignore the other side of the coin. Following your analogy, statistically speaking the goods you find in a pharmacy are either useless or outright dangerous, even lethal - yet, we need pharmacies and have been cured by the drugs they sell.

How come?

Although a pharmacy looks like a shop, it MUST NOT give you whatever you ask for, only the drugs your doctor prescribed after a careful examination, regardless of the money you offer. Theoretically... 🙁 But today we try to operate the pharmacy just like a bakery or a candy store: it wants more profit by giving you whatever you ask for, even creating marketing campaigns, etc. (like the rest of the healthcare system, btw.)

So, do you "rightfully" blame the pharmacy for poisoning and killing people?

Yes AND no. But the solution is not that "worried, responsible outsiders" flock into the pharmacy and try to regulate it by their personal experiences or the color of the boxes. Instead, they should support pharmacists in returning to their role and rebuilding the counter between them and the customers. And in the long run, realign the "healthcare system" with the meaning of this word...

Now, replace "health" with "knowledge" and you get informatics.

Does this sound interesting?
Sign me up! Informatics and cybernetics make complete sense to me. What I don't understand is why I don't "see" them in software development.
Since the 20th century, mankind has been a planetary species: science, communication, manufacturing, wars. Thinkers knew that civilization is not a thing but an often unpleasant process of making a peaceful, educated, cooperative homo "sapiens" from each "erectus" kid. The new power needs a "global brain", a transparent cooperation of "knowledge workers", to control it.

They did create an information system that organized 400,000 people around one impossible, objective goal - the Apollo program. An icon is Douglas Engelbart.
Introduction (1995): https://youtu.be/O77mweZ8-RQ
Eulogy (2013): https://youtu.be/yMjPqr1s-cg

However, the world population was (is) not ready. They prefer separating "them" from "us", hate the hardship of learning and choose the cheap illusion of knowledge by repeating hollow cliches. Add the dream of becoming rich and famous, let them use the infrastructure created above and you get the current Idiocracy. An icon is Elon Musk.
Prediction (1959): https://youtu.be/KZqsWGtdqiA?t=101

You don't "see" informatics as I talk about it because it has been lost since 1973.
https://youtu.be/8pTEmbeENF4?t=1741

Rebels may pay with their lives like Aaron Swartz.
https://youtu.be/9vz06QO3UkQ
Thanks for all this information. Now I have renewed motivation. I am now doing this for Doug. Building on his shoulders.

As a bridge person between accounting and IT, you can do more.
- BE AWARE that 1945-1972 was the golden age and Douglas Engelbart represents that "state of the art".
- DEMAND anyone claiming to be an IT person to demonstrate the same moral and professional attitude.
- DON'T ACCEPT less from "us".

"Building on his shoulders" is another thing.

Here is his analysis (1962) behind the Mother of All Demos. It has one key paragraph ignored even by his followers.
https://bit.ly/Engelbart_AI

It relates to Ted Nelson's Xanadu and ZigZag (document and graph DB vision). Combined with Chomsky's research, it shows a gap in the proof of Gödel's Incompleteness Theorem. That is the key to Turing's true challenge: define "machine" and "thinking". The von Neumann architecture CAN handle that, while the Harvard architecture is a dead-end street. Conclusion: informatics is the necessary and sufficient doctrine of AGI as Augmenting Global Intellect; everything else is garbage.

This paragraph costs a lifetime and is worth it.

Meanwhile, our civilization is literally committing suicide and you are right: mainstream IT is part of the problem. About the necessary paradigm shift, here is another message from 1973:
https://youtu.be/WjR6nHhc6Rg

Saturday, February 3, 2024

LinkedIn - Apple Vision Pro



The Apple Vision Pro is mind blowing in many ways and signals an important inflection point in the industry. But there is also a lack of clarity in how this all comes together in devices that we'll want to take out into the world and use on a daily basis. I call it the "messy middle." Camera/screen based MR/AR devices are great ways to preview the future, to test and learn, and take us toward the future devices that will be a part of our daily lives in a big way. My plea to the industry-- let's not lose sight of the ultimate goal: devices that can connect us to the real-world and people around us and make our experience as human beings out in the world richer and better. I wrote a bit about how we view the future of AR here: https://lnkd.in/gquivyQn.


---

Lorand Kedves

I see a philosophical difference manifested in AR hardware. Can you see through the device, or does the middle-man block your eyes with its screens and transfer the view from its cameras?
I think I understand why Apple joined the heavyweight class, as eventually it will win there with its experience, momentum and capacity. But is that the right way? Should "augment" really mean separate, remix and project?
I don't think so. A proper 3D augmentation over a directly visible environment is the future I would vote for. I don't want anyone to "immerse" themselves in an artificial world, deal with motion sickness, or bump into objects on a software glitch. Rather, let them see the real world but spice it with a modest bubble of additional knowledge.


Bill Wallace

I don't think Apple is saying "This" is how AR should be done. They compiled a tech stack that was fit for purpose for a strategy and added in pass through AR because they could. It's like saying GM shouldn't make passenger vehicles because we need pick up trucks. Different devices for different jobs.
In terms of the AVP, it is a reasonable solution using the critical mass of the tech available today. When see-through optical has enough tech in place to build enough useful features for a class of usage, then maybe they will play there also.



Lorand Kedves


So many topics, so small space…

AVP/Apple: They desperately want a “new iPhone moment”. This is not a weird connector, a bad keyboard or a fanless machine that cooks itself. They will not let this go easily.

AVP itself: this device has no “job”, it is a general consumer device. We remember how mobile phones moved from socially rejected awkward slabs to a critical part of our individual and social life, unconsciously redefining “presence”.

AVP-like job: the head mounted display in fighter planes. The key reference frame is the plane and its sensors, the HMD must know its position relative to the plane. Not a random street.

AR in general: Damocles (1968) appeared right after Sketchpad (1963). Visual computing and AR have been here from day one of informatics, but with no "real job". I happen to have one: my system manages all data in a dynamic semantic graph (now testing on the full SEC EDGAR export on my laptop 🙂 ).

Another use case: the AR glass is a dumb, see-through screen and motion capture dots. People go into a conference room with lots of cameras. The 3D interactive hologram in the middle is projected for all participants. Cheap, safe, can be done today. Only the profit margin is low.

Where am I wrong?


Bill Wallace

I actually had a bit of a hard time following all of your thoughts, but context can be hard in this minimal channel. I enjoy exploring new perspectives but I can't comment on much.
'They desperately want a “new iPhone moment”'
I agree with that. I don't think this is it. I love that they are driving the market but I don't think they have a leading solution yet. They may make a market but nothing like the iPhone.
The iPhone sold 1m plus in its first year and 10m plus in year 2.
The AVP projection is 600k year 1. They sold out 40k in a day, now preordered out to about 80k total. I think everything after 200k is going to be a slog. Only time will tell. It won't be a flop but won't be a killer device either. Or this post might embarrass me in the future.



Lorand Kedves

Bill Wallace Yeah. So many "communication platforms", but they are all only good for venting and cheering.
https://www.youtube.com/watch?v=RW-kAqAjMNc
And no place for a meaningful dialog, that feels so weird against the sounds of silence...
https://www.youtube.com/watch?v=u9Dg-g7t2l4

Anyways. The listed aspects are those we should talk about to evaluate AR as a technology, medium or social phenomenon. But Apple, with the AVP, changes the topic to market penetration and profit, and with a gigantic effort that only they can invest, may push it through and move this product from awkward to desirable for the public. The AVP does not have to be a success as a product for this to become the next iPhone moment.

That turned mobile communication, a technology that could be available for $100 and maybe even without a charger (microcontrollers, solar panel, eInk display), into an entertainment market of billions of fragile but beautiful glass slabs every year that already replace / overflow our eyes and memory, each for $1000 (fake figures, just the magnitude). And "green", if you exclude Ghana and the like.
https://en.wikipedia.org/wiki/Agbogbloshie

The AVP makes the Hitchhiker's Peril Sensitive Sunglasses or the "blind leading the blind" phrase so real. (sorry for venting)


Larry Rosenthal Reading your comments I thought you should refer to Postman - and there you are! I see the beauty of this lecture to the Apple developers in 1993. Quote: "Television should be the last technology we will allow to have been invented and promoted mindlessly"
https://www.youtube.com/watch?v=QqxgCoHv_aE&t=5285s

I started talking to computers (coding) at 12; now I am 51, with a whole life spent doing that. I see a crucial moment when this ("my"!) industry wants to strap a screen on people's faces, completely isolating them from reality (yet acknowledging that we live in a world in which they have reasons to prefer that).

But I also admire the Apollo program and the lesson they learned when, in go fever, they burned the Apollo 1 crew during a test - a lesson worded perfectly by Gene Kranz.
https://www.youtube.com/watch?v=9zjAteaK9lM

Building a technological civilization in general, and altering the human perception of the world on individual and community level in particular, is also "terribly unforgiving of carelessness, incapacity and neglect". I know that my words have no weight, but for whatever this counts, here they are:

Dammit, stop.


Mitch Turnbull

Thank you for this discussion and John Hanke for initiating it. But how to put the genie back in the bottle?


Lorand Kedves

Mitch Turnbull My 2 cents: we don't.

Informatics is more like the old story of Pandora's box. We were not careful, and now all the misery is out in the world, but we have to open it again to find the hope. I went back to the university after twenty-some years in the industry, and via my research I finally met the "founding fathers of informatics" who saw all this coming. You find a short summary here from 2018 (now trying to create some videos in my spare time, but that's not my comfort zone for sure).
https://mondoaurora.org/TheScienceOfBeingWrong_KAIS.pdf

For motivation, look at Douglas Engelbart, his goals, achievements and modesty (and the date!). I did my research: his results are massive, and today's informatics only scratches the surface hunting for profit. Imagine if we started listening to people like him instead of the current "icons".
https://www.youtube.com/watch?v=PjWhQiwJzKg


Larry Rosenthal

Lorand Kedves the good news is sometimes, eventually, we do. Today the smog in the air, the smoke in restaurants, are all mostly gone in western cities. Smoking was as common as driving leaded fuel cars that got 8 miles to the gallon. Sometimes actions in society change. Sometimes it takes a civil war to change an action as well.


Lorand Kedves

Larry Rosenthal Does this mean I wasted too much time taking seriously those "existential threat orgs" like Cambridge University or MIT or the UN? 🙂 Or thinking that actions without understanding, like those of Edward Snowden (From Russia with Love) or Aaron Swartz (no joke here, RIP), may not help?
https://www.youtube.com/watch?v=9vz06QO3UkQ

I think you are right on MIPS. But I have been paid for clear thinking in rough situations and am still here (with some more or less managed psychosomatic issues). My conclusion is that mankind needs a paradigm shift. The definition of that state is that there is no other option. (... and it is not a screen strapped on our faces showing the Brave New World - another Postman ref... 🙂 )
https://neilpostman.org/



Larry Rosenthal


Lorand Kedves ironically i didnt know of postman much in 93... i knew mcluhan much more.. as for his quote from 93.. maybe he got it from me.;) “ I’ve seen the future of the Metaverse and it looks like 1980's TV “ ,,, this was all part of tHUNK! the digital network which we began in 92;) published as early MAC diskettes.;) BUT Postman, McLuhan and Chayefsky should all be mandatory learning today. but its probably too late. sigh.. i also lamented back then that i never got to make real spaceships as i did in my college thesis, since by the time i graduated in 85 the worlds money was now stopped from going to reality and all investment was in the virtual of the PC or movie. So i made lots of tv and video game spaceships instead from 85-95. since i had to eat.;)


Larry Rosenthal


Lorand Kedves we do need a paradigm shift, but certainly the stanford/ mit/ eff folks were not the people who we should have allowed to make the previous one.;) aaron died for their sins.


Lorand Kedves


Larry Rosenthal I think I found a more constructive approach.

Institutions are by definition bound to the system and the current paradigm, and thus work against any real shift. That can only come from "insane" individuals, as logical thinking based on a new paradigm is nonsense when viewed from the old one. In older words, "though this be madness, yet there is method in it". In areas like physics you are lucky because you can use equipment to show that your theory works.
https://en.wikipedia.org/wiki/Leo_Szilard#Columbia_University

But in informatics, you work against human nature, quoting Postman:
As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions”.
https://neilpostman.org/

Aaron lived a meaningful life chasing values beyond those of a "consumer society". He did not know enough about the past and substituted faith in people and communities for this gap. He did not lose against the sins of individuals but against the roles they played.

On the other hand, I did my research; I know that I don't say much that is new, but it still goes against the current understanding. Here is where it starts.
https://youtu.be/u-TFazXf_RU


Larry Rosenthal


Lorand Kedves the adults in the room failed him. Simple as that. They have failed most children who came after them. Many of those children have finally awakened. Most are not self blaming, they are getting very angry. Machines may hold man at bey longer than man alone, but soon they to age and fail.


Lorand Kedves


Larry Rosenthal Maybe. I prefer clear heads. We'll see.


Lorand Kedves


Larry Rosenthal ... and adequately TRAINED (by the same institutions you blame because there is no better alternative).

In IT, you can't quote Bob Martin enough:
"... if we are doubling every five years, then we always have half of the programmers less than five years of experience, which leaves our industry in a state of perpetual inexperience... the new people coming in must repeat the mistakes made by everyone else over and over and over and over, and there seems to be no cure for this..."
https://youtu.be/ecIWPzGEbFc?t=3092s

Before saying "so what, IT is a changing field", ask yourself if you think a commercial pilot or a brain surgeon is "experienced" after 5 years. With 30+ years behind my back here, I can safely say: hard-core IT is beyond reading the marketing materials of the latest tools and languages.

Informatics is literally brain surgery at civilization level. No wonder it fails with the current bazaar attitude.



Larry Rosenthal


i'll stick with the arts and sciences vs technology and engineering being written on the colleges buildings entranceways.


Lorand Kedves


Larry Rosenthal That's why I respect Postman so much: an arts expert who could precisely analyze and predict the human consequences of technology and engineering. To me, the two other questions around "what" (products, services, stories) are equally important: "why" (arts and science) and "how" (technology and engineering). I agree, one should be an expert in one - but be aware of and respect the other.


Lorand Kedves


Larry Rosenthal "... i never got to make real spaceships as i did in my college thesis, since by the time i graduated in 85 the worlds money was now stopped from going to reality and all investment was in the virtual of the PC or movie. So i made lots of tv and video game spaceships instead from 85-95. since i had to eat.;)"

I missed this important comment... thank you!

I think I had more luck. I met the Tao Te Ching and started programming my first computer, a Commodore 116, around the same time, at age 12. I got a CS BSc in 1994 but only met founding fathers like Engelbart and critics like Postman after I returned to academia at age 43 (CS MSc, half a PhD). I had the privilege to spend 25+ years working on (and with) what I dare to call AI (far from the popular "state of the art"). Of course not on the surface, but behind my paying jobs, until I hit the glass ceiling, got fired, started again elsewhere. The money was just enough to raise three sons, not more.

“Luck is what happens when preparation meets opportunity...” (correction: not Seneca) is how I started this lecture 10 years ago. It aged well; for example, the picture at "I see the storm coming" is from the Maidan Square riot, Ukraine, 2013.
https://mondoaurora.org/TasteOfLuck.pdf


Larry Rosenthal


Lorand Kedves life is luck and timing. All the way back to a few amino acids in the goo.



Lorand Kedves


Larry Rosenthal Agree, life is that.

Civilization is another story. It is a most tricky form of the behavioral sink, when accumulated knowledge and tools free individuals from the constant struggle for survival. It moves the focus to communities (tribes, nations, ideologies; see also Dawkins and the meme theory) and ultimately to the global level. Now the threat is the collapse of knowledge transfer under the power of the very technology invented to support it.

Can't quote JCR Licklider enough (1964)
„... the "system" of man's development and use of knowledge is regenerative. If a strong effort is made to improve that system, then the early results will facilitate subsequent phases of the effort, and so on, progressively, in an exponential crescendo. On the other hand, if intellectual processes and their technological bases are neglected, then goals that could have been achieved will remain remote, and proponents of their achievement will find it difficult to disprove charges of irresponsibility and autism.”

Better to realize the importance of individual responsibility as part of the education, not under the threat of death

... or missing the message even then...

Monday, September 4, 2023

"Technical Dimensions of Programming Systems"

 lkedves

Hi Jonathan,

Not having too much free time, I only skimmed over your article and peeked into the site - nice one! I see a lot of similarity in the background, but I had an issue that you probably don't hear too often: I think your abstraction is not deep enough. Although a generation younger, I have been building information systems, from requirement negotiation to deployment and maintenance, for the past 25+ years with a different core vision.

We cooperate with and via information systems that are themselves cooperating networks of various modules. This cooperation means learning and changing the state of the system through its modules, and that ranges from copying files and deploying existing modules to changing their configuration or behavior – that is what we call "programming". Thus, the "programming system" is just another module responsible for interacting with the internals of a module, including itself of course. In this context, the ultimate module is the runtime that allows all the other modules to work and cooperate in a particular environment: a programming language and ecosystem over an abstract runtime or an operating system.
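A minimal sketch of this module view in Java, with made-up names (this is an illustration, not the actual framework): every participant implements the same Module interface, "programming" is one module changing the internals of another, and the runtime is itself just a module that hosts the rest.

```java
import java.util.HashMap;
import java.util.Map;

// Every participant is a module; "programming" is one module changing the
// internals of another; the runtime is itself a module hosting all the others.
interface Module {
    Object get(String key);             // read part of the module's internal state
    void set(String key, Object value); // change configuration or behavior ("programming")
}

class ModuleRuntime implements Module {
    private final Map<String, Module> modules = new HashMap<>();
    private final Map<String, Object> state = new HashMap<>();

    // Deploying an existing module is just another state change of the runtime.
    public void deploy(String name, Module m) { modules.put(name, m); }

    @Override public Object get(String key) {
        Module m = modules.get(key);
        return (m != null) ? m : state.get(key);
    }

    @Override public void set(String key, Object value) {
        if (value instanceof Module) modules.put(key, (Module) value);
        else state.put(key, value);
    }
}

// The "programming system" is not special: just another module that edits
// the internals of modules, including the runtime and itself.
class ProgrammingSystem implements Module {
    private final ModuleRuntime runtime;
    ProgrammingSystem(ModuleRuntime runtime) { this.runtime = runtime; }

    @Override public Object get(String key) { return runtime.get(key); }
    @Override public void set(String key, Object value) { runtime.set(key, value); }
}
```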

This approach allowed me to create a framework (the runtime and what I think you call a “programming system”) that I needed to implement other target systems (sometimes in hybrid environments). Testing it with your questionnaires:
Self-sustainability:
  1. Add items at runtime – yes.
  2. Programs generate and execute programs – partial (as much as I needed it).
  3. Persistence – yes.
  4. Reprogram low level – partial (metadata: yes; runtime: code generation and build yes, "save algorithm as code" no).
  5. Change GUI – yes.
Notational diversity:
  1. Multiple syntaxes – yes (if you mean multiple programming languages and persistence methods).
  2. GUI over text – yes.
  3. View as tree – NO! It's a free graph without such limitations.
  4. Freeform – yes.

This was back in 2018; since then I have learned that 1: the hard questions come after you have a working system on this level, and 2: a working prototype is an excellent reason to get rejected. This is too heavy for a pet project, so I abandoned the main research and use bits and pieces in paying jobs (at this moment, creating a hand-coded knowledge base over the European XBRL filings for academic research). However, your research overlaps my playground, so here is another message in the bottle… 🙂

Wednesday, August 30, 2023

X - style answers about computing


Some answers/notes that are still Twitter (X) compatible in size (280 chars), but not with the audience...
  1. 0:15 Fast search engines: Now try to imagine the power consumption of those hundreds of thousands of servers and ask yourself what is 'fast enough to find cats', or whether the constant growth is 'sustainable'. 
  2. 1:10 AI replacing computing jobs: If you do something whose result and evaluation are transparent to a search engine and that is done by hundreds of people (that is, translate a request into search terms and copy/paste the answer into another box), a search engine can do a better job than you. Bonus hint: use the newest languages to solve the same tasks, that will keep you ahead of the curve...
  3. 2:34 How chips work: The answer depends on how much you know already (but the more you know, the less likely you are to ask this question). See Feynman's answer to "how magnets work?" and 4. 
  4. 4:00 Coding vs Computer Science at the university: If anyone can cut with a knife, why do surgeons learn for decades? To ensure that the person on the other end of the knife gets better and not worse. See also, 2: to get better than a search engine based language model. See also, 3. 
  5. 5:00 How zeros and ones turn into the internet? See 3. 
  6. 6:19 Why binary? If you learned IT history, you know that binary was not decided or voted for or "supposed to be faster"; it turned out to be the best. See, 4. 
  7. 7:22 Why restart always works: Beyond the correct answer, it does not fit systems that must "always work", from your car to power plants or medical equipment. You should pray that the systems your life depends on were written by properly trained experts. See also, 4. 
  8. 8:00 What is the best OS? Bob Barton once called systems programmers "High priests of a low cult" and pointed out that "computing should be in the School of Religion" (ca 1966). Note from Alan Kay, see also, 6. 
  9. 9:17 Computers not getting cheaper??? "I hold in my hand the economic output of the world in 1950 and we produce them in lots of hundreds of millions..." Bob Martin, The Future of Programming lecture, 2016. See also, 6. 
  10. 10:05 Cloud computing: Some big companies need huge server farms to fulfil requirements in peak periods (like Amazon at Christmas) and found a way to profit from them during idle (~90%) time. It's win-win, as long as you are OK with network latency, security questions and the occasional lost connection. 
  11. 10:33 How does computer memory work? See 3. 
  12. 11:47 How do you explain web3? As ownership of computing performance (speed, memory, bandwidth) got dirt cheap, more and more people got involved in "content making". Is this good? Not really; rather, we are Amusing Ourselves to Death. See Neil Postman and 4. 
  13. 13:34 Difference between firmware and software? You have hardware components to execute a specific function in your ecosystem by providing their services over an interface to you, other software, or hardware. The firmware is the software that makes the hardware work according to that interface.

Thursday, September 1, 2022

Answer to "Why no-code is uninteresting"

Jonathan Edwards - Why no-code is uninteresting

Why no-code is uninteresting in 1 tweet. No game-changing inventions. Just design tradeoffs between generality & simplicity looking for market fit. The long tail of software makes that hard. Monetization entails self-defeating customer lock-in. Move along, nothing to see here.



Lorand Kedves
AUGUST 31, 2022 AT 9:35 PM

Hello Jonathan,

maybe it's just me, but I can replace "no-code" with "code" in this statement and it works the same way. You complain about the general state of the software industry, not about a specific method.

But how about this?
Code is uninteresting because its core is data access and control structures: sequence, iteration, selection – which are, also not interestingly, the way to process an array, set and map, respectively. The rest is an overcomplicated struggle with modularization by those who failed to understand the difference between state machines and the Turing machine and can’t see them in a real information system. Or, more likely, don’t even understand the previous sentence.
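A toy Java sketch of one possible reading of that first sentence (my illustration, not part of the original exchange): sequence over an array, iteration over a set, selection over a map.

```java
import java.util.Map;
import java.util.Set;

// Sequence, iteration and selection as the whole of "interesting" code,
// applied to an array, a set and a map respectively. Names are illustrative only.
public class CoreOfCode {

    // Sequence: process the elements of an array in their fixed order.
    static double[] doubled(double[] values) {
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = values[i] * 2.0;
        }
        return out;
    }

    // Iteration: visit every element of a set, order does not matter.
    static int countLongWords(Set<String> words) {
        int count = 0;
        for (String w : words) {
            if (w.length() > 5) {
                count++;
            }
        }
        return count;
    }

    // Selection: pick one entry of a map by its key.
    static String lookup(Map<String, String> glossary, String term) {
        return glossary.getOrDefault(term, "(unknown)");
    }
}
```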

If you are interested in my take on programming, take a look here. It was 10 years ago, but no regret.
https://github.com/MondoAurora/DustFramework/wiki/What-is-wrong-with-programming

Saturday, August 13, 2022

That's my secret, monkeys. I'm always angry...

Regarding this article, The Problems with AI Go Way Beyond Sentience


2022.08.09.

Dear Noah,


I read your article, which on the surface speaks from my heart, except for the optimistic conclusion related to academia and community. In my experience, it does not work that way. For example,

Those who refer to the Turing test do not seem to care about its definition, even when the clues are highlighted on the very first page...



I also asked the OpenAI folks about sentience when they had an open forum back in 2016. And yes, I offered an objective definition with levels as follows:

Knocking on Heaven's Door :-D

At OpenAI gym.

May 14 08:31

I would ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence" without the artificial part; "A" is the second question after defining "I". At least to me :-)

May 14 21:47

@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much, trying to stay short.

@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?

If it's about actions (like playing go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and: an inspector who can understand this motivation!). In this way even a safety valve is "intelligent" because it has a motivation and controls a system: it is "able to accomplish a goal". Or a brake control system in a vehicle, a workflow engine or a rule based expert system.

However, the white box approach - how it works - is more promising. At least it enforces cleaning up foggy terms like "learn" or "quicker", and deciding how we should deal with "knowledge representation", especially if we want to extract or share it.

In this way, I have starter levels like:

  • direct programmed reactions to input by a fixed algorithm;
  • validates inputs and self states, may react differently to the same input.

So far it's fine with typing code. But you need tricky architecture to continue:

  • adapts to the environment by changing the parameters of its own components;
  • adapts by changing its configuration (initiating, reorganizing, removing worker components).

So far it's okay, my framework can handle such things. However, the interesting parts come here:

  • monitors and evaluates its own operation (decisions, optimization);
  • adapts by changing its operation (writes own code);
  • adapts by changing its goals (what does "goal" mean to a machine?)

At least, for me artificial intelligence is not about the code that a human writes, but an architecture that can later change itself - and then a way of "coding" that can change itself. I did not see things related to this layer (perhaps I was too shallow), which is why I asked.
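To illustrate the middle levels, here is a minimal, hypothetical Java sketch (names and thresholds are invented): a component that adapts by changing its own parameter and by reorganizing its own worker components.

```java
import java.util.ArrayList;
import java.util.List;

// A component that adapts to its environment by changing its own parameter
// ("adapts by changing parameters") and by adding or removing worker components
// ("adapts by changing configuration"). Purely illustrative.
public class AdaptiveController {

    interface Worker { double process(double input); }

    private double gain = 1.0;                         // a parameter the component may change
    private final List<Worker> workers = new ArrayList<>();

    public AdaptiveController() {
        workers.add(x -> x * gain);                    // initial configuration: one worker
    }

    public double step(double input, double observedError) {
        // Adapt by changing its own parameter, based on feedback from the environment.
        if (Math.abs(observedError) > 0.1) {
            gain *= (observedError > 0) ? 0.9 : 1.1;
        }
        // Adapt by changing its configuration: initiate or remove worker components.
        if (workers.size() < 3 && Math.abs(observedError) > 1.0) {
            workers.add(x -> x + observedError * 0.5);
        } else if (workers.size() > 1 && Math.abs(observedError) < 0.01) {
            workers.remove(workers.size() - 1);
        }
        // Run the current configuration on the input.
        double out = input;
        for (Worker w : workers) {
            out = w.process(out);
        }
        return out;
    }
}
```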

May 16 06:10

@gdb Okay, it seems that my short QnA is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.

Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may resemble the myriad programming languages, libraries and tools - for the same, relatively small set of tasks, here for decades. (I have been designing systems and programming for decades to feel the pain of it - see Bret Victor for more.)

So my point here: Artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults like Einstein, Gandhi, Neumann or Buddha. It is only better than us today: dreaming children playing with a gun. hmm... lots of guns.) However...

When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of code. Because its ability is not the code, but the architecture you can build but can't keep in your brain and use as fast and as perfectly as a machine.

By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put in there? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)

Sorry for being an ogre. Have fun!


Since then I have learned that with this mindset you can pass the exams of a CS PhD, but you can't publish an article, the head of your doctoral school "does not see the scientific value of this research", you don't get a response from other universities like Brown (ask Andy van Dam and Steve Reiss) or research groups, etc.

So, I do it alone, because I am an engineer with respect for real science, even though I have not found a single "real" scientist to talk with. Yet.

Best luck to you!

  Lorand


2022.08.11.


[Response from Noah - private]


2022.08.12.

Hello Noah,


Thanks for the response to the message in the bottle. Before going on, a bit of context.

I used to be a software engineer, as long as this term had any connection with its original definition from Margaret Hamilton. Today I am a "Solution Architect" at one of the last and largest "real" software companies. You know, one that gets its revenue from creating information systems, not from mass manipulation (aka marketing), ecosystem monopoly, etc. (Google, Apple, Facebook, Amazon, Microsoft, ... you name it).

When I started working on AI at a startup company, we wrote the algorithms (clustering, decision tree building and execution, neural nets, etc.) from the math papers in C++, on computers that would not "run" a coffee machine today. The guy facing me wrote the 3D engine from Carmack's publications; in his spare time he wrote a Wolfenstein engine in C and C++ to see how smart the C++ compiler is. I am still proud that he thought I was weird. Besides leading, I wrote the OLAP data cube manager for time series analysis, a true multithreaded job manager, and the underlying component manager infrastructure, the Basket; I later learned that it was an IoC container, the only meaningful element of the "cloud". I was 25.

I saw the rise and fall of many programming languages and frameworks, while I had to do the same thing all the time in every environment: knowledge representation and assisted interaction, because that is the definition of every information system if you are able to see the abstraction under the surface. I followed the intellectual collapse of the IT population (and of human civilization, by the way), and fought against both as hard as I could. Lost. Went back to the university at 43 to check my intelligence in an objective environment. Got an MSc while being architect / lead developer at a startup company, then at another one working for the government. Stayed for a PhD because I thought: what else should be a PhD thesis if not mine? I had 20 minutes one-on-one with the top Emeritus Professor of model-based software engineering, a virtual pat on the shoulder from Noam Chomsky (yes, that Chomsky), a hollow notion of interest from Andy van Dam, a kick in the butt from Ted Nelson (if you are serious about text management, you must learn his work), etc., etc., etc. In the meantime, I looked for communities as well, publishing the actual research on Medium, chatting on forums like LinkedIn, RG, ... Epic fail; they think science is like TED lectures and Morgan Freeman in the movies... and oh yes, the Big Bang Theory. :D

Experience is what you get when you don't get what you wanted. (Randy Pausch, Last Lecture) I learned that this is the nature of any fundamental research and there is no reason to be angry with gravity. The Science of Being Wrong is not a formal proof of that, but with the referenced "founding fathers", a solid explanation. Good enough for me. Side note: of course, you can't publish a scientific article that, among other things, states that the current "science industry" is the very thing information science was meant to avoid before it destroys civilization. See also the life and death of Aaron Swartz. Yes, I mean it.


Back to the conversation.

If anyone carefully reads the Turing article instead of saying "yeah, yeah, I know", they find the following statements (and only these!):

  1. We don't have a scientific definition of intelligence. 
  2. We tend to define as intelligent something that behaves somewhat like us. 
  3. The machines will eventually have enough performance to fulfil this role. 

If you also happen to know about the work and warnings of Joseph Weizenbaum (the builder of the ELIZA chatbot) and Neil Postman (the "human factor" expert), then you will not waste a single second of your life on NN-based chatbots, whatever fancy name they have. I certainly do not, although I understand what a fantastic business and PR opportunity this is. For me this is science, not the Mythbusters show where you break all the plates in the kitchen to "verify" gravity (and create an excellent sales opportunity for the dishware companies).


You also wrote that "Instead of talking in circles about how to use the word “sentience” (which no one seems to be able to define)"

I repeat: I have this definition, with multiple levels, quoted in the part you "skimmed". And I use these levels as target milestones while building running information systems in real-life environments. For the same reason, I stopped trying to write about it because nobody puts in the effort to read what I write (a general problem); I write the code instead. Code that I can see one day generating itself completely (partial self-generation in multiple languages for interacting multi-platform systems is done). You will find a partially obsolete intro here - GitHub, etc. also available from there.

So, thank you for the support, but I am not frustrated with academia; I understood how it works, cows don't fly. The painful part is understanding that they never did, it's just self-marketing. I am kind of afraid of losing my job again right now, but that's part of the game as I play it.

Best,

  Lorand


2022.08.13

FYI, this is where "your kind" abandons the dialog all the time, lets it sink under the guano of 21st century "communication". Been there, done that all the time, no problem. So just one closing note while I am interested in typing it in.

At least I hope you realize: a chatbot will never generate the previous message. I am not pretending intelligence by pseudo-randomly selecting some of the trillions of black box rules collected by adapting to the average of the global mass. I am intelligent because I create my rules, test and improve them by using them, keep what works and learn from what does not. Another constructive definition and, if you think about it, the direct opposite of a chatbot or the whole "emerging" tech-marvel-cargo-cult.

We both know that "an infinite mass of monkeys in infinite time will surely type in the whole of Hamlet". But please consider that this is not the way the first one was created, and none of the monkeys will be able to tell the next Hamlet from the infinite garbage. Similarly, I may have a nonzero chance to create a conscious information system, but even if I do it as a public project on GitHub, it will die with me because nobody will be able to see it. Btw, this is a valid conclusion of Turing's article (and the reason why Vannevar Bush wrote the As We May Think article and initiated the computer era).

Namaste :-)

Wednesday, July 20, 2022

"you'll always be inferior"


It's hard to give a good answer to a bad question. Learning means you realise that you made a mistake. That you were wrong. That you missed the point. This is the meaning of the word. Real learning must feel bad. 

The thing that feels good is edu-tainment, the real danger identified by people who knew how this works (see https://neilpostman.org/ ). However, today you find edu-tainment everywhere because that has a business model - but no education that you would need to become a "knowledge worker". 

You don't feel that you are an inferior developer, but almost surely you don't even know what that used to mean. Give this guy 5 minutes to explain, and listen carefully. https://youtu.be/ecIWPzGEbFc?t=3056 If you feel weird, that means you at least have a tiny chance to start learning someday.

"Why do my eyes hurt? You've never used them before." (The Matrix)

---

[Of course, deleted immediately - I don't know if YouTube AI or the author, you never know that.]

Wednesday, December 8, 2021

"Things I Wish I Knew When I Started Programming"


In short: you are right.

To add a few more words. 
I have been paid for coding, designing architectures and analysing problems for 25+ years. After 20+, I went back to the university to check whether I am wrong or everyone else is - call it "an inverse impostor syndrome". I got an MSc, went on with a PhD, and during that time I finally met the fundamentals that explained why my mantras had worked over the years. 
Like "structure eats code" means that we actually write information systems to understand a situation, learn to write a capable interactive model - or in academic terms, transfer our current knowledge to a Turing Machine only to extract a network of clean state machines, the process is called "refactor". 
Or: "I want to make my mistakes, not other people's mistakes" means the fundamental statement of information science: information is the noise. The things we did not know about the system. And we are there to make and understand different noise because from that knew understanding emerges. 
Etc, etc. Information science is an absolute gem, but it takes a lot of experience to understand... 
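
To make that first mantra a bit more tangible, here is a toy sketch in Java (invented names, not from any real project of mine): the behavior that imperative code would scatter across ifs and flags, extracted into an explicit state machine that can be inspected, validated or even generated.

  // Hypothetical example: the knowledge that would otherwise be buried in
  // control flow is extracted into an explicit, inspectable state machine.
  import java.util.Map;

  public class DoorDemo {

    enum State { CLOSED, OPEN, LOCKED }
    enum Event { OPEN, CLOSE, LOCK, UNLOCK }

    // The transition table IS the model: a small graph of states and events
    // that a tool (or a human) can read, check and regenerate.
    static final Map<State, Map<Event, State>> TRANSITIONS = Map.of(
        State.CLOSED, Map.of(Event.OPEN, State.OPEN, Event.LOCK, State.LOCKED),
        State.OPEN, Map.of(Event.CLOSE, State.CLOSED),
        State.LOCKED, Map.of(Event.UNLOCK, State.CLOSED));

    static State step(State current, Event event) {
      State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
      if (next == null) {
        throw new IllegalStateException(event + " is not allowed in " + current);
      }
      return next;
    }

    public static void main(String[] args) {
      State s = State.CLOSED;
      s = step(s, Event.OPEN);
      s = step(s, Event.CLOSE);
      s = step(s, Event.LOCK);
      System.out.println(s); // LOCKED
    }
  }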

To be more specific and add Google compatible keywords. 
Understand the design patterns, they will work in any environment. Take the original Gang of Four materials. 
Check "Tracer Bullet Development" and other core ideas from The Pragmatic Programmer. That works. 
Listen to the old guys; they did the same thing before the current "big names" appeared. Recommended: Bob Martin, Alan Kay.
And if you think you know enough, you can start learning about the heavy stuff. Start with Bret Victor's Future of Programming lecture: https://youtu.be/8pTEmbeENF4 Then learn about those he mentions: Sutherland, Engelbart, Licklider, Bush. They are the real deal.


The comment magically disappeared.

2019. október 22., kedd

Informatics - dream and reality


Wow, that pointing finger at the end hit home... I started programming at 12, and the Tao Te Ching became my favorite reading at the same time; I am 46 now and have spent my whole life in the world of IT. My only advice to Gyuri is to learn.

Think about how many kids just like him are working on the "happier half" of the Earth right now, with exactly the same goal and enthusiasm. Among them, it is luck that will pick the one who wins and becomes the role model of the rest, not knowledge, because that is still rather thin. The garage-startup myth is a huge deception; it is no accident that the greatest marketers (like Steve Jobs) created it. They are exactly this kind of winner, and they have skillfully hidden the real researchers and explorers whose work they, as lucky winners, turned into a global business.

I could show Gyuri quite a few figures who, even with thirty years of experience behind me (I was and am a lead developer and designer in artificial intelligence research, on nationwide public administration systems, in telecom environments, ...), knocked me flat on my back. Of course, I would not start with them... And getting there took a BSc in 1994, twenty years of work, then going back to the university in 2013, an MSc and then a PhD (which I have just had to give up).

Informatics is the most important, most misunderstood and therefore most dangerous experiment of human civilization, and it is currently being run in the environment of a global bazaar, somewhat like this:


And whoever wants to help in this enthusiastically, as a "wonder kid", dies. Most of them, of course, "only" in spirit (see "Szász Marci, the country's wonder kid" or Dániel Rátai with Leonar3Do), but some literally, like the co-founder of Reddit and teenage co-designer of the RSS standard, etc.: Aaron Swartz, driven to suicide at twenty-six.


Based on my knowledge and experience, he has to decide: either he builds a business or he becomes an IT professional. Both demand the whole person and impose opposing expectations. To become truly valuable as an IT professional, his age and enthusiasm are right, but he has already wasted a lot of time on how to sell his budding talent while it has barely pushed through the soil.

I am not a marketer, and he should know that few laurels grow on the IT career path. My favorite joke on the subject:

The king is walking through the palace with his steward when he notices a ragged figure.
 - How did a tramp get into my castle? Throw him out at once!
 - Sire, he is your astronomer.
 - What the hell does an astronomer do? Daydream? Cast horoscopes for the ladies?
 - Well, from the positions of the stars he draws the maps for the merchants. He prepares calendars for the farmers. And he does many similar things; in essence, the operation of Your Majesty's realm depends on this man. We should not throw him out.
 - But then how can he look like this? Is he deranged? How much do we pay him?
 - One gold coin a year, and he spends that on his child. He gets his clothes from the rubbish, he doesn't care; he draws horoscopes in exchange for food.
 - This is a disgrace! The lowliest servant in my court earns more in a week! Give him a thousand gold coins, proper clothes, ...
 - Forgive me, Sire, that cannot be done.
 - Why not?
 - Because if we paid him what he deserves, someone would poison him by tomorrow morning, and the cook's son would become the royal astronomer. He can draw horoscopes too, nobody in the palace would notice any change, but the realm would slowly fall apart...

Unfortunately, this has already happened, and Gyuri has to choose. It took me twenty years to find the "works of the old astronomers" while flailing about and surviving in the bazaar. If he accepts a compass from me, he can find it here: http://bit.ly/montru_ScienceWrong


I wrote this text as a reply under the YouTube video that prompted it, but it was "moderated away", I guess it was not uplifting enough. Never mind, I have retyped it here; there is room for it.

2016. december 3., szombat

"You can't be a computer scientist if you can't code."

On LinkedIn

True.

But thinking that you are a computer scientist just because you can code is the real danger!

Because then nothing stops you from writing working but useless spaghetti and adding your share to (just for example) the 2 billion lines of code that make Google "the greatest software company"... (or what? ah, yes, the patent portfolio covered by that guano... and sorry, Alphabet of course) :-D

Sarcasm aside...

If the language does not matter anymore, and you feel you can build anything, on any platform, that the other party can explain and that is realistic in that environment, because you have done it several times...
... and if you are able to step back if there are better alternatives than you.
... and if you also understand the theoretical and practical background, that weird stuff from Turing, Neumann, Chomsky, Tanenbaum, Simonyi, the PARC, etc. Not on the "passed the exam level", but behind each and every decision you make and line of code you type.
... and if you also know the environment (the global human civilization), and realize that you work on the very backbone of it.

Then you may think you are a computer scientist, because THAT is the "scientist" part of the statement. Otherwise, you are a coder - though perhaps a brilliant one.

My 2 cents.

2016. október 21., péntek

Engineering and Scientific Creativity

5 Things Everyone Should Know About Machine Learning And AI

I wanted to talk about 2 things everyone must understand about creativity before that.


Noa Zamstein
I guess to sharpen the issue, is the ability to express humor only a matter of how much data you have been exposed to over the years, or - if to be a bit philosophical - is there this extra "something" that for some reason cannot be just a mere extrapolation of learned lessons? Why are some people astute or funny and can blurt out this brilliant concoction of ideas from all walks of life whose sum is a witty conclusion that we all nod to and can understand but would never think about saying at the right moment in the right context?


Noa Zamstein Why do you want to teach computers something that human beings seem to have lost while working with them? Human and computer intelligence are like flesh and bones: not to be mixed or replaced with each other; it's not a race but it can be a symbiosis. Otherwise, we die. Not because AI would kill us - but because we are not wise enough to live with our power. Just look around. We don't have time for things like "analyzing humor"... right now we are a lethal tumor. ;-) (to sharpen the issue)


Nikola Ivanov, PMP
Lorand Kedves I think you make a good point, but I would dispute that humanity has lost creativity because they are working with computers. The act of building computers themselves and associated software and applications is a creative process. Computers are another medium for expressing ideas and feelings. Just look at all digital art and entertainment, blogs, etc.


Nikola Ivanov, I also know the marketing stuff, but coding and creativity are different animals (and I have spent the last 20+ years solving "impossible" design and coding tasks).

"In science if you know what you are doing you should not be doing it. In engineering if you do not know what you are doing you should not be doing it." (Richard Hamming (2005) The Art of Doing Science and Engineering)

To me, creativity is science, but IT became business, and business loves engineering, not science. Do you know that all the "new" inventions like the internet, OOP, the tablet, ... came from ARPA (which later became DARPA) or PARC around 1970? The past decades were fantastic in engineering(!): reducing size and consumption while increasing capacity and speed is what made those old inventions physically possible.
Sure, Jobs, Gates, Zuckerberg, Musk, etc. are supposed to be considered "creative" - sorry, that place is occupied by Lovelace, Neumann, Turing, Shannon, Hamilton or Charles Simonyi - but it takes a lot of time and effort to understand what they did (and today we are too busy chasing profit and fame, with no time left to learn from and understand them)...
I did watch the growth of digital art, etc., as an IT expert and thinker - but all I see is quantity and marketing beating quality.

Sorry to be this short and rigid. I have written hundreds of pages about this, and I see no point in trying to argue, I am hopelessly bad at it. I would rather recommend reading Technopoly (Neil Postman, 1992!) or Civilization of the Spectacle (Mario Vargas Llosa), or watching this brilliant lecture from Bret Victor: http://worrydream.com/dbx/ to get a glimpse of what I am trying to talk about.


Nikola Ivanov, PMP
Lorand Kedves I get your point and will check out Technopoly. It appears that you are dividing creativity into two separate categories by tying it either to science or to "marketing" and "profit." I can see how the two types of creativity can be different, but to argue that creativity just does not exist anymore is false. Sure, there is a lot of junk out there that people pay money for, but there is also a lot of innovative, creative, clever, elegant, and useful ideas and products.


Nikola Ivanov, I think we have no argument here: I wrote that human beings "seem to have lost" it. I mean: not enough of it remains to keep our civilization alive; a 1 m jump to cross a 2 m gap equals zero, considering the result... ;-)

My original point was that we try to push fundamentally human values onto machines (from humor to ethics), while we measure humans in ever more quantitative and less qualitative ways. Naturally so, because quantitative improvements are easier to plan, which makes this method dominant in a business-oriented environment. Qualitative, unplanned, wild changes (that is "creativity" in my world) "should be done by someone else". (The "20% free time at creative companies" is actually mind farming, not much more.)

See also: “Never invest in a business you cannot understand.” (Warren Buffett) - sorry, but then who will pay ANYONE trying to bring up new ideas (and naturally failing constantly for years)? Or the candle problem: https://en.wikipedia.org/wiki/Candle_problem Or a favorite joke of mine (sorry for the weak translation):

The king spotted a shabby guy in his palace, and asked his counselor
- Who is this beggar?
- Your astronomer, sire.
- What does he do?
- He calculates the routes of your ships, the timing in farming, etc. Your empire depends on him.
- But why does he look like that?
 - We pay him 5 pounds.
- No way! Give him 100!
- Sire, this is the only way to have a REAL astronomer... If we gave the royal astronomer 100 pounds, he would soon be replaced with that stupid son-in-law of your treasurer...
:-)


Nikola Ivanov, to make the division clearer.

There is creativity in finding the best possible answer to a complex, not yet answered question. That is engineering, and it has great importance. It is what we have to thank for our technological improvements of the past few decades.

And there is creativity in finding a truly important question among the myriads of possible questions. That is science, and it requires a fundamentally different approach along the whole process.

We know a lot about how to support engineering creativity - but scientific creativity seems to be out of sight, thanks to its direct opposition to what we call "economy" or "society" today.

An example: engineering creativity looks for an answer to how to handle the garbage continents in the oceans. Scientific creativity asks "why the f**k do we CREATE garbage???" and stops, because that is a much better and more fundamental question; the job is done, the rest is engineering. Business guys say "this question cannot be asked", tap their heads and kick science out of the way of making profit. Science leaves by saying "Have a nice funeral, guys. Don't forget the fireworks..."

Good morning Vietnam! :-D

2016. május 16., hétfő

Knocking on Heaven's Door :-D

At OpenAI gym.

May 14 08:31
Let me ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence" without the artificial part; "A" is the second question after defining "I". At least to me :-)

May 14 21:47
@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much, trying to stay short.

@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?

If it's about actions (like playing Go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and an inspector who can understand this motivation!). In this way even a safety valve is "intelligent", because it has a motivation and controls a system: it is "able to accomplish a goal". So is a brake control system in a vehicle, a workflow engine or a rule-based expert system.

However, the white box approach - how it works - is more promising. At least it forces us to clean up foggy terms like "learn" or "quicker", and to decide how we should deal with "knowledge representation", especially if we want to extract or share it.

In this way, I have starter levels like:
  • direct programmed reactions to input by a fixed algorithm;
  • validates inputs and self states, may react differently to the same input.
So far, this can be done by typing code. But you need a tricky architecture to continue:
  • adapts to the environment by changing the parameters of its own components;
  • adapts by changing its configuration (initiating, reorganizing, removing worker components).
So far so good, my framework can handle such things. However, the interesting parts come next:
  • monitors and evaluates its own operation (decisions, optimization);
  • adapts by changing its operation (writes own code);
  • adapts by changing its goals (what does "goal" mean to a machine?)
At least for me, artificial intelligence is not about the code that a human writes, but about an architecture that can later change itself - and then a way of "coding" that can change itself. I did not see anything related to this layer here (perhaps I looked too shallowly), which is why I asked.
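
To make the jump in the middle of that list concrete, here is a toy sketch in Java (invented names, nothing from my actual framework) of a component that adapts to its environment by changing the parameter of its own logic instead of having it fixed in the code.

  // Toy "level 3" component: the threshold is not typed into the code,
  // the component tunes it from the feedback it receives.
  public class AdaptiveFilter {

    private double threshold = 0.5;          // an initial guess, not a constant
    private final double learningRate = 0.1;

    // Levels 1-2: react to the input, possibly depending on self state.
    public boolean accept(double signal) {
      return signal >= threshold;
    }

    // Level 3: change its own parameter when the environment says it was wrong.
    public void feedback(double signal, boolean shouldHaveAccepted) {
      if (accept(signal) != shouldHaveAccepted) {
        threshold += shouldHaveAccepted ? -learningRate : learningRate;
      }
    }

    public static void main(String[] args) {
      AdaptiveFilter filter = new AdaptiveFilter();
      filter.feedback(0.4, true);             // rejected, but should have accepted
      System.out.println(filter.accept(0.4)); // true: the parameter has moved
    }
  }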

May 16 06:10
@gdb Okay, it seems that my short Q&A is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.

Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may remind you of the myriads of programming languages, libraries and tools - for the same, relatively small set of tasks, around for decades. (I have been designing systems and programming for decades to feel the pain of it - see Bret Victor for more.)

So my point here: artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults like Einstein, Gandhi, Neumann or Buddha. It is only better than what we are today: dreaming children playing with a gun. Hmm... lots of guns.) However...

When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge in order to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of its code. Because its ability is not the code, but the architecture that you can build yet can't keep in your brain and use as fast and as perfectly as a machine can.

By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put in there? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)

Sorry for being an ogre. Have fun!

2016. március 13., vasárnap

AI Go Live :-)

LinkedIn - Dave Aron: Does a Computer Beating The World Go Champion Matter?

The short answer is: yes, very much. Firstly, it is a kind of a benchmark as to how far artificial intelligence is along. Go is a very difficult game, and a game of perfect information – there is no luck involved. Secondly, DeepMind have pushed some of the boundaries and techniques in intelligent decision making and optimization.

But third, and most intriguing to me, is that I believe in the future, we may solve tough real world problems by encoding them as game positions.

Do you know about The Treachery of Images?

This is not a pipe.


This is exactly the same: there is a fundamental difference between Go and Life.

In Go you know all the rules, and play in an isolated environment.

In Life, you don't: none of the above preconditions apply!

This is what the whole history of science is about: realizing that we were wrong, finding new rules (a slightly deeper understanding of the fundamental laws of nature), and stepping forward as a civilization by using them. And then realizing where we were wrong again, and doing it all over.

So showing that now we have enough performance to run a massive algorithm that finds, evaluates and optimizes billions of actions among known rules better than a human player is, well, kinda nice, and yes, it is important because now you can calculate the trajectory of a rocket or satellite with more precision than legions of human computers with pencil and paper.



But solving (and creating...) problems is still and always up to us. Human beings.

However, to solve the problems that WE create, we must return to the level we named our kind after: Homo "sapiens". Wise, not merely intelligent. And here a game-playing toaster does not help much; it only distracts us from our own tasks and responsibility.

An important tool, but definitely not the key to the future.

2016. február 15., hétfő

Writing Source Code is Evil

The aim of our industry is to produce software. That is: listen to our clients' requests and create a system that does what they wanted, on their hardware (be it a global company's IT infrastructure, a smart watch or a thermostat).

This sentence is a typical mission statement:
sounds nice, easy to repeat, and totally useless.

What they “wanted” is mostly unclear; we only know what they have told us. No, we only know what we have understood from what they have told us. “Does it or not?” is a yes/no question; in practice, we can only have a measurement that gives a hopefully objective ratio of “how much” our system meets the requirements. We don't just want to create working software, but good software. We want to compare different approaches and solutions objectively.

So we need to measure the goodness, or fitness, of our software.
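
As a rough sketch of what such a ratio could look like in its most primitive form (the requirement names, weights and satisfaction values below are invented for illustration):

  // Primitive "goodness" ratio: each requirement has a weight and a measured
  // satisfaction level between 0 and 1; fitness is the weighted average.
  import java.util.List;

  public class FitnessDemo {

    record Requirement(String name, double weight, double satisfaction) {}

    static double fitness(List<Requirement> requirements) {
      double total = 0, achieved = 0;
      for (Requirement r : requirements) {
        total += r.weight();
        achieved += r.weight() * r.satisfaction();
      }
      return total == 0 ? 0 : achieved / total;
    }

    public static void main(String[] args) {
      List<Requirement> requirements = List.of(
          new Requirement("import legacy data", 3, 1.0),
          new Requirement("respond under 200 ms", 2, 0.7),
          new Requirement("audit every change", 1, 0.0));
      System.out.printf("fitness: %.2f%n", fitness(requirements)); // 0.73
    }
  }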

Measurement theory

Fortunately, we do have a sound scientific methodology that can reliably guide us towards a better understanding and judgment, can estimate the “goodness” of our current understanding, and can help us refine it further. To explain the title, we will only need the fundamental definitions.

System

In very rough terms, a system can be any part of the world that 1: we have separated from the rest, 2: interacts with its environment, and 3: changes during this interaction.

Naturally, any software is a system by this definition. However, the actual form of our software is thousands, sometimes millions of lines of source code, configuration, database content and a collection of external libraries: a gigantic amount of “content” in an unclear but existing relation to goodness. We need to simplify it.

Modeling

We never know “the truth”, but we don't even have to.

Drinking a glass of water is only possible if the molecules, atoms and subatomic particles of my body, the glass and the water in it follow a proper arrangement for the action; and “the truth” may lie even below those levels. However, we have a good enough model of our body, the glass and the water, and we are able to control the process adequately.

Good models give us reliable control – bad models increase uncertainty.

If we know how good our models are, we can estimate the precision of our actions, or add safety mechanisms to our process and achieve reliable operation in a less reliable environment. Standing is different in a room, in a moving bus or after some drinks, but we can manage such scenarios over quite a wide range by adaptation.

Black box models

These models describe the system from the external perspective only: we can see the environment, the input given to the system and its response. With enough tests, we can say that we know what the system does under certain conditions, and we can build reliable operation on that. However, the same system can surprise us under untested conditions. The cooling system of our car works normally, but it can also cripple the whole engine if the coolant freezes. Our black box model can only tell us that it works now; it is safer to look into the box.

White box models

White or glass box models are those that we can see through: we know all the internal components and their interactions, so beyond knowing what the system does, we know exactly how and why it does so. Of course, the “absolute white box” is the complete truth itself, which we don't know. The elements inside our white box models are also models, but we assume we know enough about them to calculate their behavior. We do know about the constant subatomic buzz, but it will statistically never affect the behavior of our car engine - unlike the fluid levels, which we must check regularly.

Analysis

In practice, our models are not completely black or white, but this short introduction to modeling gives a clear statement: the “darker” our models, the less we should rely on their findings.

With a complete black box we can only have a checklist; we can grow it and repeat the tests many times, but without peeking into the box we can't even know what critical checks we have forgotten about. With a complete white box, we only have to check that the required elements, their parameters and their connections are there, because if they are, the system should work as expected. If it does not, the model gives us clues about which parts can cause the problem and what checks are needed to locate it. It can also point at the critical elements that should be checked regularly to make sure the model stays adequate.

The consequences are clear: it is definitely worth creating whiter models; and conversely, the constant struggle of otherwise motivated and adequately trained participants is likely caused by black box models somewhere along the way.
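
A toy contrast of the two kinds of check, in Java with invented names: a black box can only be sampled into a growing checklist, while a white box lets us verify the declared structure itself before anything runs.

  // Black box: the system is just a function, all we can do is sample it.
  // White box: the system is a declared component graph, so we can check
  // that the required elements and connections are actually there.
  import java.util.Map;
  import java.util.Set;
  import java.util.function.Function;

  public class ModelChecks {

    static boolean blackBoxCheck(Function<Integer, Integer> system) {
      // Two observed input/output pairs: it "works"... under these conditions.
      return system.apply(0) == 0 && system.apply(2) == 4;
    }

    static boolean whiteBoxCheck(Map<String, Set<String>> componentGraph,
                                 Map<String, Set<String>> requiredConnections) {
      for (var required : requiredConnections.entrySet()) {
        Set<String> actual = componentGraph.getOrDefault(required.getKey(), Set.of());
        if (!actual.containsAll(required.getValue())) {
          return false;
        }
      }
      return true;
    }

    public static void main(String[] args) {
      System.out.println(blackBoxCheck(x -> x * x)); // true, but proves little
      System.out.println(whiteBoxCheck(
          Map.of("ui", Set.of("service"), "service", Set.of("database")),
          Map.of("ui", Set.of("service"))));         // true: structure verified
    }
  }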

The development process

Negotiation

To simplify the scenario, our clients ask a question (“can you do what I want?”), and we give them an answer.

In general, we are in competition with others and need the money to keep operating, so we want to offer more and/or ask for less than the others, while the clients want a working system in the long run but also want to pay less now. Furthermore, adequately refining the question and giving a reliable, honest answer takes a lot of effort and carries serious risk, especially if the competitors' offers are more “optimistic”.

Perhaps it would be too harsh to say that within a natural business environment our answer will always be the words the clients want to hear, connected smartly enough to give us backdoors to extend the deadline and the price in case we really have to do the job (in short: a lie, or a bit longer: professional sales material). But it would also be too optimistic to assume a thorough requirement and architecture analysis before making the deal.

Design

Of course, there are programmers and companies who win a contract because they are far ahead of all the others. They start coding using agile methods, their solution will evolve to perfection naturally and needs no more philosophy around it. May the Force be with them; but the rest of the world is not so sure of that, so they need an objective measurement of goodness and, consequently, some modeling.

Initially, we receive a requirement specification. That is, by definition, a black box model: we don't know the internal structures, and we don't even know what we don't know about the clients' problems or how to solve them. To get a better view, we need to make our models whiter.

But what should we model: the problem or the solution? We have to deal with both, and each of the two is important on its own. Especially if we want to prepare for later changes from the clients, it is important to see clearly the interaction between the new requirements and our solution. By creating separate problem and solution models, the connection between them must have a clean interface, so the effects of changes will also be more transparent. Fortunately, we have separate roles for these tasks.

The system analyst deals with the problem model, speaks the language of the clients and is an expert in the problem domain. Together with the clients, they refine the requirements, make a white box model of the business domain entities, their responsibilities and their interactions, and finally check whether the specified data model can behave as required. The software architect works with the solution model: software components, tools, databases, the deployment environment. Together with the clients' IT experts, they design the actual system components, data tables and connections to the external systems. This is an iterative process: the actual environment may impose needs or limitations on the original requirements, which can be improved and adapted to them.

Finally, we have two "white enough" boxes that show how we solve the task, can be objectively evaluated against the requirements, and can be adapted to later changes.
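
A minimal sketch of this separation in Java (the business rule and all names are invented): the problem model is a narrow interface in the clients' vocabulary, the solution model is one replaceable implementation behind it, so a change on either side stays local and visible.

  // Problem model: business vocabulary only, knows nothing about the solution.
  // Solution model: one possible implementation, replaceable without touching
  // the problem model or anything evaluated against it.
  public class SeparatedModels {

    interface InvoiceApproval {
      boolean mayApprove(String role, double amount);
    }

    static class ThresholdApproval implements InvoiceApproval {
      private final double managerLimit;

      ThresholdApproval(double managerLimit) {
        this.managerLimit = managerLimit;
      }

      @Override
      public boolean mayApprove(String role, double amount) {
        return role.equals("director")
            || (role.equals("manager") && amount <= managerLimit);
      }
    }

    public static void main(String[] args) {
      InvoiceApproval approval = new ThresholdApproval(10_000);
      System.out.println(approval.mayApprove("manager", 2_500));  // true
      System.out.println(approval.mayApprove("manager", 50_000)); // false
    }
  }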

Implementation

And now...

we take our white box models and wrap them into black boxes again!

We write source code in various languages, and create configurations for components that were, again, written in various languages. The source code has no defined connection to the structures we have designed (coding standards and design patterns don't help much), and there is no straightforward way to validate the result.

Each measurement is only as good as its worst segment. Returning to a black box model means that the complete, so far controlled development process becomes almost as bad as doing it all with no planning. In practice, the expensive design phase then looks like a waste of money and resources on meaningless discussions and brainstorming. Some lucky startups may overtake the highly organized companies even in the quality of their products (perhaps because they do the job at their own desks that the big ones outsourced to cheaper but less motivated guys).

Houston, we've had a problem...

Solution

If this problem is so obvious, why don't we have a solution already? In fact, we do, on many levels.

Sedatives

Of course, we have tons of static and dynamic code analyzers, unit testing, integration testing, continuous deployment tools, etc. These do not help with the fundamental problem: we put our solution into a black box, and all quality measurements are just external checklists without an understanding of the internal structure.

The tools that peek into the program structure by extracting its graphs show useless complexity; gaining useful information from such a report takes roughly the same effort as asking a professional to write the system again.

Those that try to enforce a connection between design and implementation, like UML tools, are very rarely used, because they limit the freedom of both the designers and the programmers without providing a clean process model for the transformation.

Considering all this, the current failure / delay rate should not be a surprise. Even if we gracefully forget about the 99% of startups that die without notice.

The Good Old Days

Decades ago, when there were far fewer programmers and much weaker hardware, we could not afford to waste so many resources on writing code, so, for example, user interfaces were created with resource editors. There was much less hype around user experience, but those systems were quite usable.

Today

We have a renaissance of declarative user interfaces (Apple never left them; Microsoft, Google, Mozilla and many other vendors create their own user interface configuration tools). Application structure is also moving out into configuration: cloud-based solutions, Inversion of Control containers, etc.

There are also many systems that allow their users or admins to configure their operation. Security systems are configurable in many environments; we have workflow engines, task and project management tools, etc. Everything they do could also be done by typing some lines of source code, but that is unsafe; it is better to have a visual editor to click and drag the process together in a guarded and validated environment.

What is common in these tools? They all bring the white box model back to the implementation level!
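
A toy version of what "bringing the white box back" means, with invented step names: the workflow is data, not code, so its structure can be listed, validated or edited by a tool before anything runs; the tiny engine below is the only hand-written part.

  // The workflow is configuration: a visible, checkable list of steps.
  import java.util.List;
  import java.util.Map;
  import java.util.function.Consumer;

  public class DeclarativeWorkflow {

    static final List<String> WORKFLOW = List.of("validate", "persist", "notify");

    static final Map<String, Consumer<String>> STEPS = Map.of(
        "validate", order -> System.out.println("validating " + order),
        "persist", order -> System.out.println("saving " + order),
        "notify", order -> System.out.println("notifying about " + order));

    // Structural check: every configured step must exist before we run anything.
    static void validateConfiguration() {
      for (String step : WORKFLOW) {
        if (!STEPS.containsKey(step)) {
          throw new IllegalStateException("unknown step: " + step);
        }
      }
    }

    public static void main(String[] args) {
      validateConfiguration();
      for (String step : WORKFLOW) {
        STEPS.get(step).accept("order-42");
      }
    }
  }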

Is it a problem to solve at all?

Our economy is driven by needs.

The education system is happy that we “need” legions of programmers who type source code for the ever-changing, ever newest platforms, “solving” the same problems again and again. This looks good in marketing campaigns and political slogans, and keeps the business running. Solving this issue would change the IT world, and it is not compatible with any of our current business models.

Even if we could create the technology, we are not ready for its consequences.

Summary

  1. Software has become a fundamental element of human civilization; its quality is critical;
  2. We can check and improve quality only if we can measure it;
  3. Measurement depends on the models that we create;
  4. Reliable checking, efficient problem finding and estimating the side effects of changes are only available with white/glass box models, while black box models only allow guessing by experience;
  5. The requirements are black box models that we can whiten by expert problem and solution modeling, but traditional implementation puts everything back into a black box again;
  6. Advanced computing technologies have one thing in common: they bring white box modeling into the implementation.

So...

If you want to build a reliable system, write less code!

And use libraries that also follow this approach, because the quality and flexibility of your system are tied to the same attributes of your weakest component.