Monday, September 4, 2023

"Technical Dimensions of Programming Systems"

 lkedves

Hi Jonathan,

Not having much free time, I only skimmed your article and peeked at the site. Nice one! I see a lot of similarity in our backgrounds, but I had an issue that you probably don’t hear too often: I think your abstraction is not deep enough. Although a generation younger, I have been building information systems, from requirements negotiation to deployment and maintenance, for the past 25+ years, with a different core vision.

We cooperate with and via information systems that are themselves cooperating networks of various modules. This cooperation means learning and changing the state of the system through its modules, and that ranges from copying files and deploying existing modules to changing their configuration or behavior: that is what we call “programming”. Thus, the “programming system” is just another module, responsible for interacting with the internals of a module, including itself of course. In this context, the ultimate module is the runtime that allows all the other modules to work and cooperate in a particular environment: a programming language and ecosystem over an abstract runtime or an operating system.

This approach allowed me to create a framework (the runtime and what I think you call a “programming system”) that I needed to implement other target systems (sometimes in hybrid environments). Testing it with your questionnaires:

Self-sustainability: 1. Add items at runtime – yes. 2. Programs generate and execute programs – partial (as much as I needed). 3. Persistence – yes. 4. Reprogram low level – partial (metadata: yes; runtime code generation and build: yes; “save algorithm as code”: no). 5. Change GUI – yes.
Notational diversity: 1. Multiple syntaxes – yes (if you mean multiple programming languages and persistence methods). 2. GUI over text – yes. 3. View as tree – NO! It’s a free graph without such limitations. 4. Freeform – yes.

That was back in 2018. Since then I have learned that 1: the hard questions come after you have a working system at this level, and 2: a working prototype is an excellent reason to get rejected. This is too heavy for a pet project, so I abandoned the main research and use bits and pieces in paying jobs (at the moment, creating a hand-coded knowledge base over the European XBRL filings for academic research). However, your research overlaps my playground, so here is another message in the bottle… 🙂

Wednesday, August 30, 2023

X-style answers about computing


Some answers/notes that are still Twitter (X) compatible in size (280 characters), but not with its audience...
  1. 0:15 Fast search engines: Now try to imagine the power consumption of those hundreds of thousands of servers, and ask yourself what 'fast enough to find cats' means, or whether the constant growth is 'sustainable'.
  2. 1:10 AI replacing computing jobs: If you do something whose result and evaluation are transparent to a search engine and that is done by hundreds of people (that is, translating a request into search terms and copy/pasting the answer into another box), a search engine can do a better job than you. Bonus hint: use the newest languages to solve the same tasks; that will keep you ahead of the curve...
  3. 2:34 How chips work: The answer depends on how much you already know (but the more you know, the less likely you are to ask this question). See Feynman's answer to "how do magnets work?" and 4.
  4. 4:00 Coding vs Computer Science at the university: If anyone can cut with a knife, why do surgeons study for decades? To ensure that the person on the other end of the knife gets better, not worse. See also, 2: to be better than a search-engine-based language model. See also, 3.
  5. 5:00 How do zeros and ones turn into the internet? See 3.
  6. 6:19 Why binary? If you have learned IT history, you know that binary was not decided, voted for, or "supposed to be faster"; it simply turned out to be the best. See, 4.
  7. 7:22 Why restart always works: Beyond the correct answer, it does not apply to systems that must "always work", from your car to power plants or medical equipment. You should pray that the systems your life depends on were written by properly trained experts. See also, 4.
  8. 8:00 What is the best OS? Bob Barton once called systems programmers "high priests of a low cult" and pointed out that "computing should be in the School of Religion" (ca. 1966). A note from Alan Kay; see also, 6.
  9. 9:17 Computers not getting cheaper??? "I hold in my hand the economic output of the world in 1950, and we produce them in lots of hundreds of millions..." Bob Martin, The Future of Programming lecture, 2016. See also, 6.
  10. 10:05 Cloud computing: Some big companies need huge server farms to fulfil requirements in peak periods (like Amazon at Christmas) and found a way to profit from them in their idle (~90%) time. It's a win-win, as long as you are OK with network latency, security questions, and the occasional lost connection.
  11. 10:33 How does computer memory work? See 3. 
  12. 11:47 How do you explain web3? As ownership of computing performance (speed, memory, bandwidth) became dirt cheap, more and more people got involved in "content making". Is this good? Not really; we are rather Amusing Ourselves to Death. See Neil Postman and 4.
  13. 13:34 Difference between firmware and software? You have hardware components that execute a specific function in your ecosystem by providing their services over an interface to you, to other software, or to other hardware. Firmware is the software that makes the hardware work according to that interface.

Wednesday, June 21, 2023

VR - education



Mariann Papp
Frightening possibilities… and add to that the films young people watch: from Hollywood we slowly get nothing but fantasy, the Marvel universe and the like, so a generation is growing up completely detached from real life. Perhaps we can still hope in the power of education 🤔


Dear Mariann Papp, "hope in the power of education"?

Unfortunately, what makes this wish truly ironic is not its contrast with Hungarian reality (see also "tanítanék"), but the fact that education lost its battle against technology a generation earlier, at the level of Sesame Street. The illusion of knowledge conveyed by "science channels" and "educational portals", and the newer AI/VR hype built on top of it (I was there for the previous one), is only a consequence.

The only alternative to unfounded hope is getting to know real experts, whose first (unpopular) act is to chase away the illusions. On education in the "new world", Neil Postman is like Newton in physics; in a Hungarian context, Dr. Petra Aczél is a good starting point. On teaching in IT, it is worth watching "uncle" Bob Martin's lecture: https://youtu.be/ecIWPzGEbFc?t=3057

Similarly, Marvel and company are at least openly childish; the greatest damage is done by the lies woven into films meant to enlighten and inspire, such as Apollo 13, Good Will Hunting, Interstellar, or Ex Machina. On how "new" the problems mentioned here are, here is a film clip from 1968. Who laughs at these answers today? https://youtu.be/Nb6shvId_XI?t=215

Thursday, September 1, 2022

Answer to "Why no-code is uninteresting"

Jonathan Edwards - Why no-code is uninteresting

Why no-code is uninteresting in 1 tweet. No game-changing inventions. Just design tradeoffs between generality & simplicity looking for market fit. The long tail of software makes that hard. Monetization entails self-defeating customer lock-in. Move along, nothing to see here.



Lorand Kedves
AUGUST 31, 2022 AT 9:35 PM

Hello Jonathan,

maybe it’s just me, but I can replace “no-code” with “code” in this statement and it works the same way. You complain about the general state of the software industry, not about a specific method.

But how about this?
Code is uninteresting because its core is data access and control structures: sequence, iteration, selection – which are, also not interestingly, the ways to process an array, a set, and a map, respectively. The rest is an overcomplicated struggle with modularization by those who failed to understand the difference between state machines and the Turing machine and can’t see them in a real information system. Or, more likely, don’t even understand the previous sentence.
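
As an illustration of the claim above, here is a minimal sketch (in Python, chosen only for brevity; all function names are mine) of the three control structures each processing its matching collection type:

```python
# Sequence: process an array as an ordered series of steps.
def double_all(items):
    return [x * 2 for x in items]

# Iteration: visit every member of a set; order does not matter.
def total(items):
    result = 0
    for x in items:
        result += x
    return result

# Selection: branch on a key, i.e. the if/elif (or switch) over a map.
def dispatch(command, handlers):
    return handlers.get(command, "unknown")

print(double_all([1, 2, 3]))                       # [2, 4, 6]
print(total({1, 2, 3}))                            # 6
print(dispatch("start", {"start": "engine on"}))   # engine on
```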

If you are interested in my take on programming, take a look here. It was written 10 years ago, but no regrets.
https://github.com/MondoAurora/DustFramework/wiki/What-is-wrong-with-programming

Tuesday, August 30, 2022

Key to Space




Hi Scott! As always, a fantastic start to my day, but let me add two comments.

First, I missed a reference to Robert Zubrin's totally serious Mars Direct concept from 1990/1996, which was based on then-existing hardware and contains the tethered spinning vehicle idea. As far as I know, they even built a prototype for creating fuel on the Martian surface. I think you could make a great video about this.
"During the trip, artificial gravity would be generated by tethering the Habitat Unit to the spent upper stage of the booster, and setting them rotating about a common axis. This rotation would produce a comfortable 1 g working environment for the astronauts, freeing them of the debilitating effects of long-term exposure to weightlessness."

Second (my obsession): We will never stay in space with this mindset. The 0th law of space exploration is that we need a working human mind, in time sync, at extreme distances. Not a human body: in space, it is dead weight (or more precisely, a continuously dying burden, regardless of the ridiculous amount of effort and cost). If you like, our body is an excellent Earth-suit for the mind. Without a magical synchronous communication device, we must start with extracted brains, kept alive, in an immediate communication loop with the local equipment and in delayed contact with Mission Control. Is there any serious research in this direction?


daviga1, 18 hours ago (edited)

Regarding the second: of course there's some progress in that sort of direction; we have therapies like ECMO and dialysis. However, a brain-in-a-jar means replacing all of the life support services that the body provides with artificial surrogates, including a prosthetic immune system, blood cells or an advanced blood surrogate, various hormones, detoxification, and so on. It's a very large set of problems, and the solutions need to weigh less than a human body, be at least equally resilient and fault-tolerant, cost-effective, non-traumatic (!), and as versatile as a living crew (the ability to do the space equivalent of 'get out and change the tires' when something goes wrong is very valuable).

As it happens, working towards extreme transhumanist goals invariably makes all healthcare better, so I'm 100% for it.


Lorand Kedves, 12 hours ago (edited)

@daviga1 You don't seem to get the point. Have you ever seen a rocket and wondered how inefficient a machine it is? See Artemis 1: MASS AT LIFTOFF — 5,750,000 pounds / PAYLOAD TO THE MOON — 59,000 pounds (copied from NASA). 99% of the mass is there only to lift itself and the 1% useful part at the top.
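
The 1% follows directly from the quoted numbers; a quick back-of-the-envelope check (a throwaway Python snippet, figures as copied from NASA above):

```python
# Artemis 1 figures as quoted above (pounds).
mass_at_liftoff = 5_750_000
payload_to_moon = 59_000

fraction = payload_to_moon / mass_at_liftoff
print(f"useful payload: {fraction:.1%} of liftoff mass")  # roughly 1%
```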

From the perspective of space travel, the human body is the "mass at liftoff" and the brain is the "payload". You need bones and muscles to move and get food, digestion, an immune system, healing capacities, ... only to survive alone (self-sustaining) on Earth, in this ecosystem. In space, you only need to keep the brain that "operates you" in homeostasis.
Some Wikipedia facts (I am not an expert in this field). Within the meninges, under the dura mater, cerebrospinal fluid completely surrounds the central nervous system: your brain and the spinal cord. You have the blood-brain barrier that filters most of the stuff out of your blood, allowing the transport of only the absolutely necessary components. Your brain is already in a biological jar.
You should learn to replace this jar with external, mechanical systems. Such systems have already shown that they can go around the solar system and even out of it, without "changing tires". Humans? Made it to the Moon 50 years ago, a few times. Because it only takes days, and a little luck to not meet a solar storm. (Or issues with wiring, which killed the crew of Apollo 1 during a ground test and blew up Apollo 13 in flight; most missions had their close calls...)
I absolutely do not think that keeping a brain alive in a jar is simple, but at least it is more manageable than doing the same with a whole, continuously dying and totally useless human body. Compare the weight and complexity of the "life" support systems: for a brain, you need a cubic meter filled with water, and you get sufficient radiation protection as a bonus. For a crew? Huge volumes "just to move around", pressurized with air or O2, with ventilation to avoid CO2 pockets, food, water circulation, ... serious, error-prone systems just to keep the bodies alive longer (and they are damaged regardless). You need humans to operate the systems that keep them alive? Does not sound too efficient...

A hint: the key to space is energy management. Not hopping over to another gravity well, but going out of this one and staying outside, indefinitely. You even knew how to do it, but it is not compatible with Star Wars and Marvel stories, so you forgot about it. Please don't enlighten me; I also know this will not happen, because people don't really want to leave Earth, they just want to live in their dreams (or more precisely, most of them just admire the few who do, and forget how to live IRL). I just forgot to delete the previous comment.

I am not a "transhumanist"; I only have properly trained and tested logical thinking. I simply don't care much about "healthcare" as long as you spend a thousand times more on killing each other; about "education" as long as you do everything to enslave each other; about "communication" as long as you only want to remote-control each other. A very inhuman and absolutely unpopular attitude. But you do your best to prove that a technological civilization does not work without it.
Sorry, I am not as good as Douglas Adams at sugarcoating, and this is TMI anyway. Good luck. You will all need it...


daviga1, 4 hours ago

@Lorand Kedves “Nothing can stop the man with the right mental attitude from achieving his goal; nothing on earth can help the man with the wrong mental attitude.” - Thomas Jefferson


Lorand Kedves, 3 hours ago

@daviga1 To the "nothing can stop" part... 

"The Internet's Own Boy depicts the life of American computer programmer, writer, political organizer and Internet activist Aaron Swartz. It features interviews with his family and friends as well as the internet luminaries who worked with him. The film tells his story up to his eventual suicide after a legal battle, and explores the questions of access to information and civil liberties that drove his work." 

You can find the film here on YouTube. The momentum of the "long tail" is a pretty strong adversary today, and mental attitude is just part of the game. I would add a wink if Aaron had survived; instead, Skol, brother!

Saturday, August 13, 2022

That's my secret, monkeys. I'm always angry...

Regarding this article, The Problems with AI Go Way Beyond Sentience


2022.08.09.

Dear Noah,


I read your article, which on the surface speaks from my heart, except for the optimistic conclusion about academia and community. In my experience, it does not work that way. For example:

Those who refer to the Turing test do not seem to care about its definition, even though the clues are highlighted on the very first page...



I also asked the OpenAI folks about sentience when they had an open forum back in 2016. And yes, I offered an objective definition with levels, as follows:

Knocking on Heaven's Door :-D

At OpenAI gym.

May 14 08:31

I would ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence" without the artificial part; "A" is the second question after defining "I". At least to me :-)

May 14 21:47

@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much, trying to stay short.

@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?

If it's about actions (like playing Go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and an inspector who can understand this motivation!). In this sense even a safety valve is "intelligent", because it has a motivation and controls a system: it is "able to accomplish a goal". So is a brake control system in a vehicle, a workflow engine, or a rule-based expert system.

However, the white box approach, how it works, is more promising. At least it enforces cleaning up foggy terms like "learn" and "quicker", and clarifying how we should deal with "knowledge representation", especially if we want to extract or share it.

In this way, I have starter levels like:

  • direct programmed reactions to input by a fixed algorithm;
  • validates inputs and its own state, and may react differently to the same input.

So far, this is fine with hand-typed code. But you need a tricky architecture to continue:

  • adapts to the environment by changing the parameters of its own components;
  • adapts by changing its configuration (initiating, reorganizing, removing worker components).

So far it's okay; my framework can handle such things. However, the interesting parts come here:

  • monitors and evaluates its own operation (decisions, optimization);
  • adapts by changing its operation (writes own code);
  • adapts by changing its goals (what does "goal" mean to a machine?)

At least for me, artificial intelligence is not about the code that a human writes, but an architecture that can later change itself, and then a way of "coding" that can change itself. I did not see anything related to this layer (perhaps I looked too shallowly), which is why I asked.
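
The first levels above are easy to sketch in code; the later ones (self-monitoring, rewriting its own code, changing its goals) are exactly the ones that resist it. A minimal illustration in Python; all class names are hypothetical, not taken from any real framework:

```python
# Level 1: direct programmed reaction to input by a fixed algorithm.
class FixedAgent:
    def react(self, x):
        return x * 2                      # always the same rule

# Level 2: tracks its own state; the same input may produce
# a different reaction depending on that state.
class StatefulAgent(FixedAgent):
    def __init__(self):
        self.calls = 0
    def react(self, x):
        self.calls += 1
        if self.calls % 2 == 1:
            return super().react(x)
        return x + 1                      # state-dependent behavior

# Level 3: adapts to the environment by changing the parameters
# of its own components (not its code).
class AdaptiveAgent(StatefulAgent):
    def __init__(self):
        super().__init__()
        self.factor = 2
    def tune(self, feedback):
        self.factor += feedback           # parameter change driven by feedback
    def react(self, x):
        self.calls += 1
        return x * self.factor
```

The remaining levels would require the program to inspect and regenerate parts of itself, which is an architectural question rather than a coding one.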

May 16 06:10

@gdb Okay, it seems that my short Q&A is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.

Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may resemble the myriad programming languages, libraries, and tools for the same, relatively small set of tasks, which have been here for decades. (I have been designing systems and programming for decades to feel the pain of it; see Bret Victor for more.)

So my point here: Artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults like Einstein, Gandhi, Neumann, or Buddha. It only has to be better than what we are today: dreaming children playing with a gun. Hmm... lots of guns.) However...

When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge in order to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of code. Because its ability is not the code but the architecture: something you can build, but can't keep in your brain and use as fast and as perfectly as a machine.

By the way, you create a mind map to organize your own mind! How about a mind map that does what you put into it? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think that is when you have STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)

Sorry for being an ogre. Have fun!


Since then I have learned that with this mindset you can pass the exams of a CS PhD, but you can't publish an article, the head of your doctoral school "does not see the scientific value of this research", you get no response from other universities like Brown (ask Andy van Dam and Steve Reiss) or from research groups, etc.

So I do it alone, because I am an engineer with respect for real science, even though I have not found a single "real" scientist to talk with. Yet.

Best luck to you!

  Lorand


2022.08.11.


[Response from Noah - private]


2022.08.12.

Hello Noah,


Thanks for the response to the message in the bottle. Before going on, a bit of context.

I used to be a software engineer, as long as that term had any connection with its original definition from Margaret Hamilton. Today I am a "Solution Architect" at one of the last and largest "real" software companies. You know, one that gets its revenue from creating information systems, not from mass manipulation (aka marketing), ecosystem monopoly, etc. (Google, Apple, Facebook, Amazon, Microsoft, ... you name it).

When I started working on AI at a startup company, we wrote the algorithms (clustering, decision tree building and execution, neural nets, etc.) from the math papers, in C++, on computers that today would not "run" a coffee machine. The guy facing me wrote the 3D engine from Carmack's publications; in his spare time he wrote a Wolfenstein engine in C and C++ to see how smart the C++ compiler is. I am still proud that he thought I was weird. Besides leading, I wrote the OLAP data cube manager for time series analysis, a true multithreaded job manager, and the underlying component manager infrastructure, the Basket; I later learned it was an IoC container, the only meaningful element of the "cloud". I was 25.

I saw the rise and fall of many programming languages and frameworks, while I had to do the same thing all the time in every environment: knowledge representation and assisted interaction, because that is the definition of every information system if you are able to see the abstraction under the surface. I followed the intellectual collapse of the IT population (and of human civilization, by the way), and fought against both as hard as I could. Lost. Went back to the university at 43 to check my intelligence in an objective environment. Got an MSc while being architect / lead developer at a startup company, then at another working for the government. Stayed for a PhD because I thought: what else should a PhD thesis be, if not mine? I had 20 minutes one-on-one with truly the top Emeritus Professor of model-based software engineering, a virtual pat on the shoulder from Noam Chomsky (yes, that Chomsky), a hollow notion of interest from Andy van Dam, a kick in the butt from Ted Nelson (if you are serious about text management, you must learn his work), etc., etc., etc. In the meantime, I looked for communities as well: published the actual research on Medium, chatted on forums like LinkedIn, RG, ... Epic fail; they think science is like TED lectures and Morgan Freeman in the movies... and oh yes, The Big Bang Theory. :D

Experience is what you get when you don't get what you wanted. (Randy Pausch, Last Lecture) I learned that this is the nature of any fundamental research, and there is no reason to be angry with gravity. The Science of Being Wrong is not a formal proof of that, but with the referenced "founding fathers", a solid explanation. Good enough for me. Side note: of course, you can't publish a scientific article that, among other things, states that the current "science industry" is the very thing information science was meant to prevent before it destroys the civilization. See also the life and death of Aaron Swartz. Yes, I mean it.


Back to the conversation.

If anyone carefully reads the Turing article instead of saying "yeah, yeah, I know", they find the following statements (and only these!):

  1. We don't have a scientific definition of intelligence. 
  2. We tend to define intelligence as something we think is intelligent because it behaves somewhat like us.
  3. The machines will eventually have enough performance to fulfil this role.

If you also happen to know the work and warnings of Joseph Weizenbaum (builder of the ELIZA chatbot) and Neil Postman (the "human factor" expert), then you will not waste a single second of your life on NN-based chatbots, whatever fancy name they have. I certainly do not, although I understand what a fantastic business and PR opportunity this is. For me this is science, not the Mythbusters show, where you break all the plates in the kitchen to "verify" gravity (and create an excellent sales opportunity for the dishware companies).


You also wrote that "Instead of talking in circles about how to use the word “sentience” (which no one seems to be able to define)"

I repeat: I have this definition, with multiple levels, quoted in the part you "skimmed". And I use these levels as target milestones while building running information systems in real-life environments. For the same reason, I stopped trying to write about it, because nobody puts in the effort to read what I write (a general problem); I write the code instead. Code that I expect one day to generate itself completely (partial self-generation in multiple languages for interacting multi-platform systems is done). You can find a partially obsolete intro here; GitHub, etc. are also available from there.

So, thank you for the support, but I am not frustrated with academia; I understood how it works, cows don't fly. The painful part is understanding that they never did; it's just self-marketing. I am kind of afraid of losing my job again right now, but that's part of the game as I play it.

Best,

  Lorand


2022.08.13

FYI, this is where "your kind" abandons the dialogue every time, letting it sink under the guano of 21st-century "communication". Been there, seen that all the time, no problem. So just one closing note, while I am still interested in typing it.

At least I hope you realize: a chatbot will never generate the previous message. I am not pretending intelligence by pseudo-randomly selecting some of the trillions of black-box rules collected by adapting to the average of the global mass. I am intelligent because I create my rules, test and improve them by using them, keep what works, and learn from what does not. Another constructive definition and, if you think about it, the direct opposite of a chatbot or the whole "emerging" tech-marvel cargo cult.

We both know that "an infinite mass of monkeys in infinite time will surely type in Hamlet". But please consider that this is not how the first one was created, and that none of the monkeys will be able to tell the next Hamlet from the infinite garbage. Similarly, I may have a nonzero chance to create a conscious information system, but even if I do it as a public project on GitHub, it will die with me because nobody will be able to see it. Btw, this is a valid conclusion of Turing's article (and the reason why Vannevar Bush wrote the As We May Think article and initiated the computer era).

Namaste :-)

Wednesday, July 20, 2022

"you'll always be inferior"


It's hard to give a good answer to a bad question. Learning means you realise that you made a mistake. That you were wrong. That you missed the point. This is the meaning of the word. Real learning must feel bad. 

The thing that feels good is edu-tainment, the real danger identified by people who knew how this works (see https://neilpostman.org/ ). However, today you find edu-tainment everywhere, because that has a business model; but not the education you would need to become a "knowledge worker".

You don't feel that you are an inferior developer, but almost surely you don't even know what that used to mean. Give this guy 5 minutes to explain, and listen carefully: https://youtu.be/ecIWPzGEbFc?t=3056 If it feels weird, that means you at least have a tiny chance to start learning someday.

"Why do my eyes hurt? You've never used them before." (The Matrix)

---

[Of course, deleted immediately - I don't know if YouTube AI or the author, you never know that.]