lkedves
If you have come far enough, the solution is behind your back... :-)
Respektu Tempon. Respect Time.
International readers, please use the english tag to get a first impression, thank you.
Wednesday, August 30, 2023
X - style answers about computing
- 0:15 Fast search engines: Now try to imagine the power consumption of those hundreds of thousands of servers, and ask yourself whether 'fast enough to find cats' is a meaningful goal and whether the constant growth is 'sustainable'.
- 1:10 AI replacing computing jobs: If you do something whose result and evaluation are transparent to a search engine and that is done by hundreds of people (that is, translating a request into search terms and copy/pasting the answer into another box), a search engine can do a better job than you. Bonus hint: use the newest languages to solve the same tasks, that will keep you ahead of the curve...
- 2:34 How chips work: The answer depends on how much you already know (though the more you know, the less likely you are to ask this question). See Feynman's answer to "how do magnets work?" and 4.
- 4:00 Coding vs Computer Science at the university: If anyone can cut with a knife, why do surgeons train for decades? To ensure that the person on the other end of the knife gets better and not worse. See also, 2: to be better than a search-engine-based language model. See also, 3.
- 5:00 How do zeros and ones turn into the internet? See 3.
- 6:19 Why binary? If you have learned IT history, you know that binary was not decided on, voted for, or "supposed to be faster"; it simply turned out to be the best. See, 4.
- 7:22 Why restart always works: Beyond the correct answer, it does not apply to systems that must "always work", from your car to power plants or medical equipment. You should pray that the systems your life depends on were written by properly trained experts. See also, 4.
- 8:00 What is the best OS? Bob Barton once called systems programmers "High priests of a low cult" and pointed out that "computing should be in the School of Religion" (ca 1966). Note from Alan Kay, see also, 6.
- 9:17 Computers not getting cheaper??? "I hold in my hand the economic output of the world in 1950 and we produce them in lots of hundreds of millions..." Bob Martin, The Future of Programming lecture, 2016. See also, 6.
- 10:05 Cloud computing: Some big companies need huge server farms to fulfil requirements in peak periods (like Amazon at Christmas) and found a way to profit from them during idle time (~90%). It's win-win, as long as you are OK with the network latency, the security questions and the occasional lost connection.
- 10:33 How does computer memory work? See 3.
- 11:47 How do you explain web3? As ownership of computing performance (speed, memory, bandwidth) became dirt cheap, more and more people got involved in "content making". Is this good? Not really; we are rather Amusing Ourselves to Death. See Neil Postman and 4.
- 13:34 Difference between firmware and software? You have hardware components that execute a specific function in your ecosystem by providing their services over an interface to you, to other software, or to other hardware. The firmware is the software that makes the hardware work according to that interface (a minimal sketch follows below).
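To make that last answer a bit more tangible, here is a minimal, hypothetical sketch in Java (every name in it is invented for illustration): the rest of the ecosystem programs against the interface, and the firmware is the piece of software that makes one concrete chip fulfil it.

```java
// The contract that the ecosystem (drivers, applications) relies on.
interface TemperatureSensor {
    double readCelsius();
}

// Raw, device-level access; in real firmware this would be memory-mapped registers.
interface RegisterBus {
    int read(int address);
}

// The firmware-like layer: it translates the abstract contract into
// register-level operations of one specific (imaginary) chip.
final class Xyz42SensorFirmware implements TemperatureSensor {
    private final RegisterBus bus;

    Xyz42SensorFirmware(RegisterBus bus) {
        this.bus = bus;
    }

    @Override
    public double readCelsius() {
        int raw = bus.read(0x10);     // device-specific register address
        return raw / 16.0 - 40.0;     // device-specific conversion to degrees Celsius
    }
}
```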
Wednesday, June 21, 2023
VR - education
Mariann Papp
Frightening possibilities… plus add to that the movies young people watch, and from Hollywood we slowly get nothing but fantasy films, the Marvel universe and the like; a generation is growing up that is completely detached from real life. Maybe we can still hope in the power of education 🤔
Dear Mariann Papp, "hope in the power of education"?
Unfortunately, what makes this wish truly ironic is not its contrast with Hungarian reality (see also, "tanítanék"), but the fact that education lost its battle against technology a generation earlier, at the level of Sesame Street. The illusion of knowledge conveyed by "science channels" and "educational portals", and the newer AI/VR hype built on top of it (I was there for the previous one), are only consequences.
The only alternative to unfounded hope is getting to know the real experts, whose first (unpopular) job is to chase away the illusions. On education in the "new world", Neil Postman is like Newton in physics; in the Hungarian context, dr. Aczél Petra is a good starting point. On teaching in IT, it is worth looking into "uncle" Bob Martin's lecture: https://youtu.be/ecIWPzGEbFc?t=3057
Similarly, Marvel and company are at least openly childish; the greatest damage is done by the lies woven into films meant to enlighten and inspire, such as Apollo 13, Good Will Hunting, Interstellar or Ex Machina. As for how "new" the problems mentioned here are, here is a film excerpt from 1968. Who laughs at these answers today? https://youtu.be/Nb6shvId_XI?t=215
Thursday, September 1, 2022
Answer to "Why no-code is uninteresting"
Why no-code is uninteresting in 1 tweet. No game-changing inventions. Just design tradeoffs between generality & simplicity looking for market fit. The long tail of software makes that hard. Monetization entails self-defeating customer lock-in. Move along, nothing to see here.
Lorand Kedves
AUGUST 31, 2022 AT 9:35 PM
Hello Jonathan,
maybe it’s just me, but I can replace “no-code” with “code” in this statement and it works the same way. You complain about the general state of the software industry, not about a specific method.
But how about this?
Code is uninteresting because its core is data access and control structures: sequence, iteration, selection – which are, also not interestingly, the way to process an array, set and map, respectively. The rest is an overcomplicated struggle with modularization by those who failed to understand the difference between state machines and the Turing machine and can’t see them in a real information system. Or, more likely, don’t even understand the previous sentence.
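Just to illustrate how I read that claim, a minimal Java sketch (all names are mine, purely for illustration): sequence over an array-like list, iteration over a set, selection by key over a map.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

final class ControlStructures {

    // Sequence: process elements in their given order (array / list).
    static int sumInOrder(List<Integer> values) {
        int sum = 0;
        for (int i = 0; i < values.size(); i++) {
            sum += values.get(i);
        }
        return sum;
    }

    // Iteration: visit every member once, order irrelevant (set).
    static int countLongWords(Set<String> words) {
        int count = 0;
        for (String word : words) {
            if (word.length() > 5) {
                count++;
            }
        }
        return count;
    }

    // Selection: pick a branch or value by key (map).
    static String route(Map<String, String> handlers, String key) {
        return handlers.getOrDefault(key, "unknown");
    }
}
```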
If you are interested in my take on programming, take a look here. It was 10 years ago, but no regrets.
https://github.com/MondoAurora/DustFramework/wiki/What-is-wrong-with-programming
Tuesday, August 30, 2022
Key to Space
Hi Scott! As always, a fantastic starter of my day, but let me add two comments.
Regarding the second: of course there is some progress in that sort of direction, we have therapies like ECMO and dialysis. However, getting a brain-in-a-jar means replacing all of the life support services that the body provides with artificial surrogates - including a prosthetic immune system, blood cells or an advanced blood surrogate, various hormones, detoxification, etc. It is a very large set of problems, and the solutions need to weigh less than a human body, be at least equally resilient and fault-tolerant, cost-effective, non-traumatic (!), and as versatile as a living crew (the ability to do the space equivalent of 'get out and change the tires' when something goes wrong is very valuable).
Lorand Kedves, 12 hours ago (edited)
@daviga1 You don't seem to get the point. Have you ever seen a rocket and wondered what an inefficient machine it is? See Artemis 1: MASS AT LIFTOFF — 5,750,000 pounds / PAYLOAD TO THE MOON — 59,000 pounds (copied from NASA). 99% of the mass is there only to lift itself and the 1% useful part at the top.
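For reference, the payload fraction implied by those NASA figures:

$$\frac{59{,}000}{5{,}750{,}000} \approx 0.0103 \approx 1\%$$

so about 99% of the liftoff mass exists only to lift the rest.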
daviga1, 4 hours ago
@Lorand Kedves “Nothing can stop the man with the right mental attitude from achieving his goal; nothing on earth can help the man with the wrong mental attitude.” - Thomas Jefferson
Saturday, August 13, 2022
That's my secret, monkeys. I'm always angry...
Regarding this article, The Problems with AI Go Way Beyond Sentience
2022.08.09.
Dear Noah,
I read your article, which on the surface speaks from my heart, except for the optimistic conclusion related to academia and community. In my experience, it does not work that way. For example:
Those who refer to the Turing test do not seem to care about its definition, even when the clues are highlighted on the very first page...
I also asked the OpenAI folks about sentience when they had an open forum back in 2016. And yes, I offered an objective definition with levels as follows:
At OpenAI gym.
May 14 08:31
I would ask you a silly question: what is your definition of "intelligence"? No need to give links to AI levels or algorithms, I have been in the field for 20 years. I mean "intelligence", without the artificial part; the "A" is the second question after defining the "I". At least to me :-)
May 14 21:47
@JKCooper2 @yankov The popcorn is a good idea, I tend to write too much, trying to stay short.
@daly @gdb First question: what do we examine? The actions (black box model) or the structure (white box)?
If it's about actions (like playing Go or passing the Turing test), intelligence is about "motivated interaction" with a specific environment (and: an inspector who can understand this motivation!). In this way even a safety valve is "intelligent" because it has a motivation and controls a system: it is "able to accomplish a goal". So is a brake control system in a vehicle, a workflow engine or a rule-based expert system.
However, the white box approach (how it works) is more promising. At least it forces us to clean up foggy terms like "learn" or "quicker", and to decide how we should deal with "knowledge representation", especially if we want to extract or share it.
In this way, I have starter levels like:
- direct programmed reactions to input by a fixed algorithm;
- validates inputs and self states, may react differently to the same input.
So far this is fine with hand-typed code. But you need a tricky architecture to continue:
- adapts to the environment by changing the parameters of its own components;
- adapts by changing its configuration (initiating, reorganizing, removing worker components).
So far it's still okay, my framework can handle such things. However, the interesting parts come next:
- monitors and evaluates its own operation (decisions, optimization);
- adapts by changing its operation (writes own code);
- adapts by changing its goals (what does "goal" mean to a machine?)
At least for me, artificial intelligence is not about the code that a human writes, but an architecture that can later change itself - and then a way of "coding" that can change itself. I did not see anything related to this layer (perhaps I was too shallow), this is why I asked. (A small code sketch of the first levels follows below.)
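Looking back, a minimal Java sketch of the first three levels above; the class names are invented here for illustration only and do not come from any real framework.

```java
// Level 1: direct programmed reaction to input by a fixed algorithm.
final class FixedReactor {
    int react(int input) {
        return input * 2;                 // always the same mapping
    }
}

// Level 2: validates inputs and its own state; the same input may
// produce different reactions over time.
final class StatefulReactor {
    private int energy = 10;

    int react(int input) {
        if (input < 0) return 0;          // input validation
        if (energy <= 0) return 0;        // self-state check
        energy--;                         // internal state changes
        return input * 2;
    }
}

// Level 3: adapts to the environment by changing the parameters
// of its own components.
final class AdaptiveReactor {
    private double gain = 2.0;

    double react(double input, double expected) {
        double output = input * gain;
        gain += 0.1 * (expected - output) * input;   // adjusts its own parameter
        return output;
    }
}
```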
May 16 06:10
@gdb Okay, it seems that my short QnA is not worth serious attention here. I have quite long experience with cognitive dissonance, so just a short closing note.
Do you know the Tower of Babel story, how God stopped us from reaching the sky? He gave us multiple languages so that we could not cooperate anymore. With OpenHI ;-) this story may resemble the myriads of programming languages, libraries and tools - for the same, relatively small set of tasks, which have been around for decades. (I have been designing systems and programming for decades to feel the pain of it - see Bret Victor for more.)
So my point here: Artificial intelligence is not about the algorithms, Python code, libraries, wrappers, etc. that YOU write and talk about. All that is temporary. (And by the way, AI is NOT for replacing human adults, like Einstein, Gandhi, Neumann or Buddha. It is only better than us today: dreaming children playing with a gun. Hmm... lots of guns.) However...
When you start looking at your best code as if it should have been generated. When you have an environment that holds a significant portion of what you know about programming. When it generates part of its own source code from that knowledge in order to run (and you can kill it with a bad idea). When you realize that your current understanding is actually the result of using this thing, and that you can't follow what it is doing because you have a human brain, even though you wrote every single line of code. Because its ability is not the code, but the architecture that you can build but can't keep in your brain and use as fast and as perfectly as a machine.
By the way, you actually create a mind map to organize your own mind! How about a mind map that does what you put in there? An interactive mind map that you use to learn what you need to create an interactive mind map? Not a master-slave relationship, but cooperation with an equal partner with really different abilities. I think this is when you STARTED working on AI, because... "Hey! I'm no one's messenger boy. All right? I'm a delivery boy." (Shrek)
Sorry for being an ogre. Have fun!
Since then I have learned that with this mindset you can pass the exams of a CS PhD, but you can't publish an article, the head of your doctoral school "does not see the scientific value of this research", you get no response from other universities like Brown (ask Andy van Dam and Steve Reiss) or research groups, etc.
So, I do it alone, because I am an engineer with respect for real science, even though I have not found a single "real" scientist to talk with. Yet.
Best luck to you!
Lorand
2022.08.11.
2022.08.12.
Hello Noah,
Thanks for the response to the message in the bottle. Before going on, a bit of context.
I used to be a software engineer, as long as this term had any connection with its original definition from Margaret Hamilton. Today I am a "Solution Architect" at one of the last and largest "real" software companies. You know, the kind that gets its revenue from creating information systems, not from mass manipulation (aka marketing), ecosystem monopoly, etc. (Google, Apple, Facebook, Amazon, Microsoft, ... you name it).
When I started working on AI at a startup company, we wrote the algorithms (clustering, decision tree building and execution, neural nets, etc.) from the math papers in C++, on computers that would not "run" a coffee machine today. The guy facing me wrote the 3D engine from Carmack's publications; in his spare time he wrote a Wolfenstein engine in C and C++ to see how smart the C++ compiler is. I am still proud that he thought I was weird. Besides leading, I wrote the OLAP data cube manager for time series analysis, a true multithreaded job manager, and the underlying component manager infrastructure, the Basket; I later learned that it was an IoC container, the only meaningful element of the "cloud". I was 25.
I saw the rise and fall of many programming languages and frameworks, while I had to do the same thing all the time in every environment: knowledge representation and assisted interaction, because that is the definition of every information system if you are able to see the abstraction under the surface. I followed the intellectual collapse of the IT population (and of human civilization, by the way), and fought against both as hard as I could. Lost. Went back to the university at 43 to check my intelligence in an objective environment. Got an MSc while being architect / lead developer at a startup company, then at another working for the government. Stayed for a PhD because I thought: what else should be a PhD thesis if not mine? I had 20 minutes one-on-one with truly the top Emeritus Professor of model based software engineering, a virtual pat on the shoulder from Noam Chomsky (yes, that Chomsky), a hollow notion of interest from Andy van Dam, a kick in the butt from Ted Nelson (if you are serious about text management, you must learn his work), etc., etc., etc. In the meantime, I looked for communities as well: published the actual research on Medium, chatted on forums like LinkedIn, RG, ... Epic fail; they think science is like TED lectures and Morgan Freeman in the movies... and oh yes, The Big Bang Theory. :D
Experience is what you get when you don't get what you wanted. (Randy Pausch, Last Lecture) I learned that this is the nature of any fundamental research and there is no reason to be angry with gravity. The Science of Being Wrong is not a formal proof of that, but together with the referenced "founding fathers" it is a solid explanation. Good enough for me. Side note: of course, you can't publish a scientific article that, among other things, states that the current "science industry" is the very thing information science was meant to prevent before it destroys civilization. See also the life and death of Aaron Swartz. Yes, I mean it.
Back to the conversation.
Anyone who carefully reads the Turing article instead of saying "yeah, yeah, I know" finds the following statements (and only these!):
- We don't have a scientific definition of intelligence.
- We tend to call something intelligent because it behaves somewhat like us.
- Machines will eventually have enough performance to fulfil this role.
If you also happen to know about the work and warnings of Joseph Weizenbaum (the builder of the ELIZA chatbot) and Neil Postman (the "human factor" expert), then you will not waste a single second of your life on nn-based chatbots, whatever fancy name they have. I certainly do not, although I understand what a fantastic business and PR opportunity this is. For me this is science, not the Mythbusters show where you break all the plates in the kitchen to "verify" gravity (and create an excellent sales opportunity for the dishware companies).
You also wrote that "Instead of talking in circles about how to use the word “sentience” (which no one seems to be able to define)"
I repeat: I have this definition, with multiple levels, quoted in the part you "skimmed". And I use these levels as target milestones while building running information systems in real-life environments. For the same reason, I stopped trying to write about it, because nobody puts in the effort to read what I write (a general problem); I write the code instead. Code that I can see one day generating itself completely (partial self-generation in multiple languages for interacting multi-platform systems is done). You can find a partially obsolete intro here - GitHub, etc. are also available from there.
So, thank you for the support, but I am not frustrated with academia; I understood how it works, cows don't fly. The painful part is understanding that they never did, it's just self-marketing. I am kind of afraid of losing my job again right now, but that's part of the game as I play it.
Best,
Lorand
2022.08.13
FYI, this is where "your kind" abandons the dialog all the time and lets it sink under the guano of 21st century "communication". Been there, done that all the time, no problem. So just one closing note while I am still interested in typing it in.
At least I hope you realize: a chatbot will never generate the previous message. I am not pretending intelligence by pseudo-randomly selecting some of the trillions of black box rules collected by adapting to the average of the global mass. I am intelligent because I create my rules, test and improve them by using them, keep what works and learn from what does not. Another constructive definition and, if you think about it, the direct opposite of a chatbot or the whole "emerging" tech-marvel cargo cult.
We both know that "an infinite mass of monkeys in infinite time will surely type in Hamlet". But please consider that this is not the way the first one was created, and none of the monkeys would be able to tell the next Hamlet from the infinite garbage. Similarly, I may have a nonzero chance to create a conscious information system, but even if I do it as a public project on GitHub, it will die with me because nobody will be able to see it. By the way, this is a valid conclusion of Turing's article (and the reason why Vannevar Bush wrote the As We May Think article and initiated the computer era).
Namaste :-)
2022. július 20., szerda
"you'll always be inferior"
---
[Of course, deleted immediately - I don't know whether by the YouTube AI or by the author, you never know.]
Hi Jonathan,
Not having too much free time, I only skimmed over your article and peeked into the site, nice one! I see a lot of similarity in the background, but I have an issue that you probably don’t hear too often: I think your abstraction is not deep enough. Although a generation younger, I have been building information systems, from requirement negotiation to deployment and maintenance, for the past 25+ years with a different core vision.
We cooperate with and via information systems that are themselves cooperating networks of various modules. This cooperation means learning and changing the state of the system through their modules, and that ranges from copying files and deploying existing modules to changing their configuration or behavior – that is what we call “programming”. Thus, the “programming system” is just another module, responsible for interacting with the internals of a module, including itself of course. In this context, the ultimate module is the runtime that allows all the other modules to work and cooperate in a particular environment: a programming language and ecosystem over an abstract runtime, or an operating system.
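A rough sketch of that module / runtime picture in Java; every interface and method name here is invented for illustration and is not taken from the actual framework.

```java
import java.util.HashMap;
import java.util.Map;

// A module: something that can join the cooperating network and expose its internals.
interface Module {
    void start(ModuleRuntime runtime);
    Object describe();
}

// The "ultimate module": the runtime that lets all the other modules work and cooperate.
final class ModuleRuntime {
    private final Map<String, Module> modules = new HashMap<>();

    void deploy(String name, Module module) {   // deploying an existing module
        modules.put(name, module);
        module.start(this);
    }

    Module lookup(String name) {
        return modules.get(name);
    }
}

// The "programming system" is just another module: it interacts with the
// internals of other modules (and of itself) through the same runtime.
final class ProgrammingModule implements Module {
    private ModuleRuntime runtime;

    @Override
    public void start(ModuleRuntime runtime) {
        this.runtime = runtime;
    }

    @Override
    public Object describe() {
        return "the module that inspects and changes other modules";
    }

    Object inspect(String moduleName) {
        Module target = runtime.lookup(moduleName);
        return target == null ? null : target.describe();
    }
}
```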
This approach allowed me to create a framework (the runtime and what I think you call a “programming system”) that I needed to implement other target systems (sometimes in hybrid environments). Testing it with your questionnaires:
Self-sustainability, 1: Add items at runtime – yes. 2: Programs generate and execute programs – partial (as much as I needed). 3: Persistence – yes. 4: Reprogram low level – partial (metadata: yes; runtime code generation and build: yes; “save algorithm as code”: no). 5: Change GUI – yes.
Notational diversity, 1: Multiple syntaxes – yes (if you mean multiple programming languages and persistence methods). 2: GUI over text – yes. 3: View as tree – NO! It is a free graph without such limitations (sketched below). 4: Freeform – yes.
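On point 3, a tiny sketch of the difference (again, with invented names): in a free graph a node can reference any other node, so multiple "parents" and cycles are allowed, and a tree view is only one possible projection of it.

```java
import java.util.ArrayList;
import java.util.List;

// A node in a free graph: any number of outgoing references, no single parent.
final class Node {
    final String name;
    final List<Node> links = new ArrayList<>();

    Node(String name) { this.name = name; }

    void linkTo(Node other) { links.add(other); }
}

final class FreeGraphDemo {
    public static void main(String[] args) {
        Node requirement = new Node("requirement");
        Node module = new Node("module");
        Node test = new Node("test");

        requirement.linkTo(module);
        module.linkTo(test);
        test.linkTo(requirement);   // a cycle: impossible in a strict tree
        requirement.linkTo(test);   // "test" now has two referrers: also not tree-like

        System.out.println(requirement.name + " has " + requirement.links.size() + " links");
    }
}
```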
This was back in 2018; since then I have learned that 1: the hard questions come after you have a working system at this level, and 2: a working prototype is an excellent reason to be rejected. This is too heavy for a pet project, so I abandoned the main research and use bits and pieces in paying jobs (at the moment, creating a hand-coded knowledge base over the European XBRL filings for academic research). However, your research overlaps with my playground, so here is another message in a bottle… 🙂