--So, it's my distinct pleasure and privilege to host Yuri for a talk on a topic I've discussed with him since I was a student: models of computation. Through the years we have interacted in various ways, on both the fundamentals of models of computation and abstract state machine applications. At this point, it's probably — very likely — the last talk Yuri will give here as a resident, as he's moving to Michigan early next week. But I have no doubt that there will be many opportunities to interact with Yuri in the future. I hear that there's a birthday party coming up in a couple of years, and it has always been a pleasure to participate in these events. So, without further ado, please go ahead.

--Thank you very much. It's a pleasure to return for a day. Let me see how we remove that. So, in 2008, Nachum Dershowitz, my very learned and scholarly friend, and I wrote this paper where we proved Church's thesis and Turing's thesis — technically there are slight differences between the two — for classical algorithms. Now, what do we mean by classical algorithms? The quantum people hijacked the word "classical"; it has come to mean non-quantum. But our meaning was classical in the sense of algorithms from time immemorial until, certainly, the 1950s and 1960s. These are the algorithms that Church and Turing had in mind when they put forward their theses. Another name for these classical algorithms is traditional algorithms, or sequential; I will mostly call them sequential, because the first non-classical, non-traditional algorithms were parallel, and by contrast the traditional ones were sequential. Now, you can generalize and generalize and generalize, and then it's not at all obvious why the thesis should even be true. So, I had made this argument, implicitly, in a paper published two years ago, but here, as you see, is the prompt for today's talk. Let me read it: "The Dershowitz-Gurevich paper says nothing about probabilistic or quantum computation. It does write down a set of axioms about computation and prove the Church-Turing thesis assuming those axioms. However, we are left with justifying these axioms. Neither probabilistic nor quantum computation is covered by these axioms. They admit this (those criminals) for probabilistic computation and do not mention quantum computation at all. So it's quite clear to me that these axioms are actually false in the real world, whatever it means for mathematical axioms to be false in the real world, even though the Church-Turing thesis is probably true." Now, there's a problem with scholarly papers: it takes time to read the whole thing. He definitely didn't read it. He probably didn't even read the abstract, where it says that this is about classical algorithms. Of course, he's a great hero of quantum computing, but that's how it is. So, let me say what Church's thesis is.

--So, the tone of the debate hasn't changed since the 1930s?

--Actually, this discussion was quite civil, to be sure. There was another picture here, of Stephen Kleene. By the way, his name means "small" in Dutch; "kleiner" would be the German. That's the meaning of his name. He was a student of Church. He actually made the Church thesis what it is: he strengthened it and called it a thesis; Church himself called it a conjecture and spoke only about total algorithms, and the better version is for partial algorithms. If somebody is interested in the history and all the subtleties, I refer you to our paper with Nachum Dershowitz; here I just formulate the thesis. There are many ways to formulate it; they are all essentially equivalent. None of them spoke about algorithms as such — they spoke about calculability — but there is something that makes a function calculable, and that is an algorithm. So: if an algorithm computes a partial function from strings to strings, then that function is Turing computable. How important is the thesis? I personally think not very; the importance is mostly historical. But that's how people are — anything historical seems to evoke a lot of interest. Those were much more naive times, and decidable seemed to mean really decidable, and undecidable truly undecidable. By now we know it doesn't work that way. Decidable problems may be practically undecidable. A good example is the first-order theory of the real field, which was proved decidable by Tarski in 1948. It is a wonderful result, except that the decision procedure costs too much. There are people here who have tried; it would be a huge boon to software verification if there were a good procedure. Also, undecidable problems may in reality be decidable, because undecidability speaks about the worst case. Given a good probability distribution, they may be decidable on average — typically decidable. The classical, first undecidable problem was the halting problem, and there is a tool called Terminator, developed at Microsoft Research Cambridge, UK, which worked with C programs: you bring a C program and it tells you whether it will halt or not. It worked pretty well, and it can probably be improved. But the whole business of decidability rests on the ridiculous assumption that we have unlimited resources. Where else in the world do you make such assumptions? Why physicists are interested is beyond me. Physicists do very strange things; they tend to make huge pronouncements without the little reservations here and there which need to be made but are not. Anyway, Complexity Theory replaced, for all practical purposes, this decidable/undecidable business. Complexity Theory has its own problem — it is asymptotic — but that's a different story. So, there was something much more real: software specifications. I came to computing in 1982, when I moved from mathematics to computer science. I wanted to understand what they do there. So I volunteered to teach programming, and my chairman said no. Why not? He said, "We have enough people to teach programming; we don't have anybody to teach theory." Those were the times — not enough theorists in computer science. I felt that if I always stuck to theory, I would never learn much about computing. He thought a little and said, "Okay, once. You'll teach it once." So I taught, essentially, Pascal, and then I discovered that I had my little Macintosh, and the University of Michigan had a huge mainframe. They actually competed with IBM.
There were seven or eight universities around the globe that adopted this Michigan Terminal System. Anyway, as far as I was concerned, there were just two implementations — an interpreter and a compiler — for Pascal, and they disagreed. So what is Pascal? You're a mathematician, you teach something, but what it is, you don't know. Nobody knows what Pascal is. There was no kernel, no definition. That aroused my interest in all these things, especially specifications. Then I discovered that everybody among the theorists liked the declarative approach. I knew these people, like Dana Scott and others. I knew they were not good programmers. So how can they teach? There is this population of programmers and engineers who actually built all those applications that feed us, and now you tell them, "No, no, no, you're doing it all wrong — everything should be declarative, clean, high level." My impression was that executable specifications are the way to go, and I was actually laughed at by theorists.

--Here [inaudible].

--You can execute them. So, there are companies built around Z, the specification language. You say, "Here is the software I'm building"; the experts come, take a few months, write a book, and bring you the book. Now, nobody in your company can read this book. In the meantime, software is like an organism; it doesn't stay in place — it keeps developing — so whether this book ever reflected the right thing, we don't know. It is useful; often it's useful to read it, because you see the way they understood things, and they are often very clever, good people. But you cannot test against the book. What you want is a specification which you can run and compare.

--I'm just asking, what is the meaning of that? When is a program executable? At what level of complexity do you call it executable?

--Just states: a state evolves, and that's all there is — a normal run, anything which runs.

--So, is the spec operational?

--Operational, yeah. "Operational", "executable" — for me they are synonyms. Okay. So, this is the famous V diagram of software engineering. You start from a certain problem and refine it, refine it, refine it, going down through levels of abstraction; then you code, and then you test — component tests, subsystem tests, and so on, going up. Okay. Now suppose you discover, say, that this test fails. In the meantime, a lot of people did a lot of work before you discovered this. So the obvious idea, which was around a lot, was to test in these horizontal ways, on the same level of abstraction. In order to test, from my point of view, all of these should be algorithms — operational, executable programs, high-level, maybe not very efficient, but nevertheless. So I started to think about how to approach this in general. There is this theoretical problem: can you have a more general Turing machine? Of course, the Turing machine is ridiculously low-level — it works on single bits, with one tape, and so on. But can you have a very advanced, versatile model of computation which is operational and executable — an algorithm you can run on any level of abstraction, so that you can simulate on any level of abstraction? Abstract state machines were developed to meet this challenge. My first thought was that it cannot possibly work. But I said, "Nevertheless, let's try." First I worked with sequential algorithms — in fact, the only algorithms I knew at the time. I won't explain these machines again here; let me go on with my story, but if somebody wants, I'll be happy to explain what they are. There were people who liked the ideas; an ASM community formed, here, in Europe, and in other places. Then we pushed it to parallel algorithms, to interactive algorithms, to a number of classes — each time, once that species of algorithms was well understood, not just algorithms in general. So this is the general thesis from this paper. It's from Oxford University Press, which is not freely available; when I thought about this talk, I went and made an arXiv version of it. So this is the more general thesis — okay, it is a thesis: whenever you have a reasonably understood species of algorithms, you can extend the sequential thesis to this class. Now, what are sequential algorithms? It's probably what you think they are, but here is a very influential book by Hartley Rogers, who taught logic at MIT — the theory of recursive functions, the book in the area. That's how he defined them on page two of his book — what he thinks, informally, algorithms are. In particular, they are deterministic, without any resort to random methods or devices. So it's not as if we acknowledge that we don't have probabilistic devices in our algorithms — that's by definition. Now, there is often an impression that Turing captured sequential algorithms, and that's not true. I'll show you some very classical algorithms. Here is the Euclidean algorithm — that's the whole algorithm. But Euclid had two versions of it: one for natural numbers and another for lengths, or, as we would say today, positive real numbers. They didn't have the notion of real number; they had a notion of lengths. The version for numbers always converges. The version for lengths may not converge; it depends on whether the lengths A and B are commensurable. Only one of these two algorithms can actually be executed by a Turing machine, because a Turing machine cannot deal with arbitrary lengths.
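To make the contrast concrete, here is a minimal sketch — my rendering, not Euclid's text — of the two versions as the same subtraction rule read over two different structures. The integer version always terminates; the length version, read over the abstract reals, terminates exactly when the two lengths are commensurable, and no Turing tape holds an arbitrary length.

```python
def euclid_numbers(a: int, b: int) -> int:
    # Euclid's algorithm on positive natural numbers: always converges.
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

def euclid_lengths(a, b):
    # The very same update rule, read over abstract lengths (reals).
    # If a and b are incommensurable -- say 1 and sqrt(2) -- the loop
    # never reaches a == b: this version does not converge.
    # Python floats merely arithmetize the algorithm; the genuine
    # version lives in the abstract structure of the reals.
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a
```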
Here is a ruler-and-compass algorithm. This picture comes courtesy of Wolfgang Reisig, who is a professor in Berlin. He is an heir of Petri — maybe you know Petri nets; he was the one who wrote a book on Petri nets. For years he argued with me: whatever you can do with abstract state machines, I can do with Petri nets. Then, at a certain point, he started to write a review and converted. He told me the story of one of the reasons for his conversion. When he took the standard course where they explain regular languages, context-free languages, and Turing machines, the professor started by giving a ruler-and-compass algorithm as an example of an algorithm. Then he went through finite automata, PDAs, and Turing machines. He said: we kept waiting for him to come back to it. In the last lecture I raised my hand and asked the professor, "What about that ruler-and-compass algorithm?" The professor became very angry — because you cannot put these things on a Turing tape. The bisection algorithm, or you can in-

--You can't put them on paper either, because there is no exactness. Arguably, I would argue that Turing had the right idea, because it captures the fact that you can't do this on paper.

--No. Okay. Let me see. When you look at the Greeks, they always speak about ruler and compass. Why? Didn't they realize that there are ellipses and so on? They did. But ruler and compass was their computer.

--Their computer had noise.

--Yes.

--Right. So if you think of the algorithm abstractly and mathematically, as dealing with real numbers, it's not true — that doesn't model the computer.

--Wait. We will arrive at this. True. Any machine has noise.

--Another dimension is about non-determinism.

--Yeah.

--Do you allow it? Okay. I'll let you go on, sorry.

--Yes. Coming — there will be special slides. In fact, here there is a tiny non-determinism: there is a choice between this solution and that solution. But classical theory just ignores that. So, think of the bisection algorithm, or the Gaussian elimination algorithm. Now, you can approximate them with arithmetization, but then you don't capture them faithfully, as they are. So it turns out that, from my point of view, the problem of how to axiomatize these classical algorithms was there from the beginning. In his beautiful paper, Turing makes a lot of assumptions. He doesn't think in terms of axioms; he says "without loss of generality". For example, the very first sentence of the key section says that we compute — I've forgotten exactly how he puts it — by pushing symbols around. So from the very beginning he restricts himself to symbolic algorithms, ruling out, for example, ruler-and-compass algorithms. As he goes on: only a finite part of the brain is involved. So he makes strong assumptions. They are all justifiable, but obviously there are too many of them. I think the problem was difficult because they didn't have the proper abstraction. That actually relates to your point: they were thinking about algorithms which actually compute, one way or another, something more or less real. But the simple axiomatization turns out to live on a much higher abstraction level, where you deal with abstract structures — say, the abstract structure of the reals. That's why you can deal with ruler-and-compass: the state is an abstract state. So, when I started to think about this problem, the first axiom was trivial, and the second came to me as sort of obvious, thanks to just being a logician: any static mathematical reality is a kind of structure. If you go to the University of Washington, capture some mathematician and ask him what he is working on — some kind of structures. Topology, graphs, whatever, but some kind of structures. Okay. The third axiom was difficult; it took me about 15 years. Kolmogorov thought about this. He said that steps should be of bounded complexity — each step, not the number of steps; there should be a bound on the complexity of a single step. That was the challenge. If you wish, afterwards I can explain the axioms. But if you take these axioms, then every sequential algorithm is such a machine. Now, Shor says the axioms are probably wrong. I don't know. But for all these years — 18 years — there has been a quite substantial ASM community, and they have tried all kinds of things. There are various people, from mathematicians to engineers, computer scientists. And nowhere did anyone come across an algorithm which wouldn't be captured, whether very high-level or low-level.
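As an illustration of bounded-complexity steps over an abstract structure, here is a small sketch — my own, not from the paper — of bisection written as a single update rule. Each step reads a fixed number of locations and makes one update; over the abstract reals the rule is exact, while running it on machine floats is only the arithmetized approximation just mentioned.

```python
def bisection_step(f, lo, hi):
    # One step of bounded complexity over the (abstract) reals:
    # finitely many reads of f, lo, hi, and one update of the interval.
    # Assumes f(lo) and f(hi) have opposite signs.
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        return lo, mid   # a root lies in the left half
    return mid, hi       # a root lies in the right half

def bisect(f, lo, hi, steps):
    # Iterating the step rule. On the abstract structure of the reals
    # every step is exact; on floats it is only an approximation.
    for _ in range(steps):
        lo, hi = bisection_step(f, lo, hi)
    return (lo + hi) / 2

# Example: approximating sqrt(2) as the root of x^2 - 2 on [1, 2].
print(bisect(lambda x: x * x - 2, 1.0, 2.0, 50))
```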

--So, not just the axioms — what about using continuous functions?

--Yes.

--Which is some precursor of the problem. It is a heroic statement, then.

--Yes, because that is how they all thought. Let me give another example. There was a school of mathematics called constructivism. The constructivists said that only computable numbers exist, and they started to develop an alternative mathematics. Certain theorems became false. Take the Weierstrass theorem, which says that a maximum always exists: that maximum may be attained at a noncomputable real, and so in their world the theorem failed, okay? There is a very interesting story: the famous Hermann Weyl, a hero of the quantum world, was a constructivist in his younger years, and he had a bet with Pólya. He predicted that in 20 years the Weierstrass theorem would be regarded as false. The bet was made at the technical university in Zurich, ETH, and very famous people signed it. So, again, to tell the story: I was at ETH visiting a colleague, and I saw something in German on his table with many famous signatures, and I asked, "What is this?" He said, "That's the bet," and told me the story. I have a column in some journals, so I wrote a little article about this and asked somebody to make a good translation. Then, after the 20 years — that was close to the war, I think 1938, or already into the war — Hermann Weyl came to Pólya and said, let's just forget about it, or something like that. According to the bet, the one who lost was supposed to publish, in one of the best mathematical journals, an acknowledgement that he had lost. But this never happened. Okay. So, what was the problem? I have another article about why constructivism is wrong — not just another respectable approach to mathematics; it is just wrong. Why is it wrong? Because they were fascinated by this recursivity, by Turing computability. But the real world is different. They proved some beautiful theorems — they were there much before Complexity Theory and knew a lot — but they were not interested. They were fascinated by this level, the level where Rogers is. In the beginning, they were all fascinated with working on this level, where everything is somehow feasible in the sense that you can execute steps in a feasible way. Does it make sense?

--Sure. But my takeaway on this is a general statement to make about this [inaudible] continuous [inaudible].

--So, there was no obvious contradiction. It depends on what counts as continuous. So, what he was ruling out — okay.

--You're ruling out making an infinite number of changes in one step, or a growing number.

--Yes.

--An unbounded number, a growing number of changes, and beyond that, across steps.

--So, essentially, ruling out analog algorithms. That was his goal.

--Yes. But number three rules out not only analog, but the general state [inaudible].

--Say that again. I didn't understand. What did you say?

--The starting point of my comment was that number three is a generalization of Rogers not before, but [inaudible] analog [inaudible].

--So, what is the definition of bounded complexity? I read the term "complexity" as something that usually depends on a model of computation.

--Afterwards — let me come to the bottom line, because otherwise I will never finish. I'm sorry; I'm happy to talk about it later. So, let me see. Nachum became one of the most active people in the abstract state machine community, and he was interested in proving the Church-Turing thesis. In order to prove the Church-Turing thesis, we had to somehow go back to the initial states. In our case, they could be very rich; we ruled no initial states out. For example, an initial state may contain a solution to the halting problem, or whatever. Then, of course, there is no chance that you compute only Turing computable functions. So you need to cut this down somehow, and the cutting down is relatively easy; there are some technical details. We formulated an arithmetical axiom — similarly, we could instead have formulated a string kind of axiom — and proved it. Okay. Now, let me speak for a moment about numbers. What are numbers? Originally, there were just 1, 2, 3. Then negative numbers appeared, and rationals. Zero actually came about very late; that's why we have 1 AD and 1 BC and there is no year zero. But by now we have algebraic numbers, and reals, and complex numbers, and [inaudible]. So it's not about quantities anymore, and we keep going — surreal numbers, quite interesting. So the notion of number is evolving and widening. There is not much you can say of the form "all numbers are such and such." This is just an analogy, because it is a familiar notion. The same happened to algorithms. When Church and Turing wrote, the notion of algorithm was relatively robust. Everybody agreed, more or less, on the examples — in a way, you know it when you see it. Everybody knew what is an algorithm and what is not. I don't know a single example from that time that people would argue about: "Oh, no, this is not an algorithm." So it was quite a robust notion — not formalized, but robust. Now, where does it end? The notion of algorithm is evolving and widening, getting ever more liberal. Surely new species will be introduced. So not much can be said about algorithms in full generality. That was my original argument: why should this — not the Turing thesis, but the overgeneralized version of it, for all algorithms — why should that be true? It would say something about all algorithms, which seems very unlikely to me. And there were contrarians very early. I asked Andreas whose example this was — many people know it. He thought it was from Kalmár, but I couldn't check. It's one of the very early examples: it looks algorithmic, and obviously it doesn't compute a Turing computable, recursive function. But—

--Depending on whether the universe is Turing computable?

--Yes.

--We don't really know.

--It could be all finite, and therefore it's all — wait a second. So, to continue.

--[inaudible] do it better by such a big glow in the street.

--It wasn't the right one [inaudible] example, right. Might have been Brooklyn. So, non-determinism. From the traditional point of view, a non-deterministic algorithm is a contradiction in terms, because what is an algorithm? An algorithm is a recipe which we execute exactly. Now: "If you come to the fork in the road, take it." What? Take what? What kind of algorithm is that? It's a joke — I don't mean the physical fork; the fork in the road. Today, of course, they are ubiquitous. The same with interactive algorithms: algorithms were supposed to be isolated. That's why you couldn't use measuring the rain — it contradicts isolation — but by now interactive algorithms are very common. Excuse me. So, how is non-determinism resolved? By interaction. Imagine you have an algorithm with "come to the fork in the road, take it": some operating system will flip a coin for you, or do something. So it's interaction — in this example, with the operating system — but in general there are interacting algorithms. In AI there are learning algorithms; they often learn from people. They interact with people, and they are definitely algorithms, learning algorithms. Now, one can argue that a learning algorithm is not an algorithm — it may change its own program. Now, quantum mechanics is actually very different. Before quantum mechanics — in computer science — non-determinism was always resolved by interaction with another agent, an obvious agent: a computer, a person. In quantum mechanics, you interact with God, with nature. When you measure — a measurement is a collection of linear operators — one of the indices is chosen. Here we also have a couple of experts on quantum mechanics who hopefully will keep me honest. Nobody knows how it happens — it's mysterious — but that is how it happens. So you interact with nature. But if you can interact with nature and with people, that opens up a lot of possibilities. For example, that same example — what's wrong with it? It's an interactive algorithm; it interacts with nature.
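A toy sketch of that resolution-by-interaction — entirely illustrative, with a hypothetical choose parameter standing in for the external agent:

```python
import random

def follow_recipe(forks, choose):
    # A non-deterministic recipe: at each fork in the road, "take it".
    # It only runs because `choose` -- some external agent -- resolves
    # each fork; the recipe itself never decides.
    path = []
    for fork in forks:
        path.append(choose(fork))
    return path

# Two possible agents resolving the same non-determinism:
os_coin = lambda fork: random.choice(fork)   # the operating system's coin
human   = lambda fork: input(f"{fork}? ")    # an interactive human agent

print(follow_recipe([("left", "right"), ("up", "down")], os_coin))
```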

--It's not a matter of definition, though, because then you might say that something non-deterministic is really a game — it's playing against an adversary. You can imagine lots of processes that are not algorithms.

--A game is much more complicated, because—

--When you have non-determinism in a system, you have to ask, as you say, the question of who is providing it.

--Yes.

--All right. So, is it an adversary, is it cooperative, and so on. All I'm saying is: you can define that to be an algorithm — something that interacts — or you can say, no, that's something else. It's not clear. I mean, why would I?

--So, the point you are making is actually the point I want to make. It's a matter of agreement: do we consider this an algorithm or not? Probably not yet, but it interacts with nature the way a measurement interacts with nature; an act of measurement is some kind of interaction. There it's mysterious, and something happens as you measure; here it is out in the open. Now, you have to fix details — one can argue it's not quite an algorithm, because is it raining or not raining, and so on; one can try to fix the details. Instead, I am going to something else. Let's consider a medical machine, like a CT scanner or an MRI scanner: it sucks you in and makes all kinds of measurements and tests. In the process, it computes a function. The input is your ID — who you are — and the time it was done; the output is whatever report the machine produces. The machine is completely automated, algorithmic. Now, I'm not making any claims; I'm not trying to convince you this is an algorithm. All I want is to shake your belief that we know exactly what an algorithm is. So, do we want to consider this an algorithm? Let me simplify a little, so that it produces just zero, one, or minus one. It can still make all kinds of tests, but it plays chess. Suppose you are always white and the machine is black; it plays, and then it produces plus one, minus one, or zero, depending on whether the machine won, lost, or it was a draw. Here is one objection: it doesn't compute a function, because the result depends not only on your ID and the time but on what you do in the chess game — on your moves. Maybe, but your ID and the time completely determine the game: historically, that is the game. So they determine it. Also, we don't always require that all such additional data be counted as input. Consider a randomized algorithm: given two graphs, you want to determine whether they are isomorphic or not. You flip coins, but the input is just the pair of graphs. "So it's not an algorithm," my opponent says, "because you should be able to take the same algorithm and run it again on the same data." Maybe, but this is legislation, a stipulation: you declare that every algorithm has this property. We never agreed on this; why should it be the case? The whole idea of sucking you into the machine was so that you cannot play two games. I specifically tried to construct a scenario which you cannot repeat — deliberately violating this requirement. So, I am coming to the end: is the chess-playing machine an algorithm or not? I think, outside of this context in which we are discussing it, if you talk to somebody and say, "Here is an algorithmic machine," some people may accept it as an algorithm, some not. My point is: whether you accept it as an algorithm or not, you probably accept that I constructed it in just the last few days, because I was preparing this talk. More clever examples will appear. Yes — here's a guy who has studied deep learning; you have more interesting examples, which I have no idea about. Maybe learning algorithms are, from that point of view, the most interesting species; unfortunately, I don't know much about them. So, we have these two versions, which the great Shor confused. One is the original Church-Turing thesis, for traditional algorithms — the one they had in mind when they spoke about all this — and that one has been proven. Until now, I have not seen any objection.
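To illustrate the coin-flip point concretely — a sketch of mine, not a serious isomorphism test, just a random witness search: the declared input is the pair of graphs alone, and two runs on the same pair need not take the same path.

```python
import random

def randomized_iso_check(g, h, trials=1000):
    # Declared input: just the two adjacency matrices g and h.
    # The coin flips below are not counted as part of the input,
    # so rerunning on the same pair may behave differently.
    n = len(g)
    if n != len(h):
        return False
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)  # the coin flips
        if all(g[i][j] == h[perm[i]][perm[j]]
               for i in range(n) for j in range(n)):
            return True       # found a witness: definitely isomorphic
    return False              # no witness found; this is not a proof
```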
So, in the very beginning, when I came up with this, I wrote to practically anybody I knew: please attack it, give me your critique. People seemed to buy it. Now, Shor still may be right if you take the generalized — overgeneralized — version. We still call it Church-Turing, but they would not recognize it. You are speaking about a notion which is evolving, widening, an ever more liberal notion, and you want to claim something about it. Now, of course, we can agree, we can stipulate: we can introduce new species of algorithms but require that, whenever they compute a function, that function is Turing computable. One remark about that: in the beginning, the mathematicians who did computations spoke about functions. Today, computing functions is a tiny part of it; most algorithms don't compute functions — they do tasks. They execute all kinds of things; very few of them actually compute functions. But nevertheless: unless we stipulate it, I don't think that at any point in history it can be established in any way. That's it. Thank you.

--Are there any reasonable restrictions on what the environment can do when it interacts with algorithms?

--You have restrictions on what the algorithm can do, in terms of finite, bounded work, and your queries, which are bounded by the algorithm.

--So, let me say two things. In full generality, no — I'll tell you. Andreas Blass and I formalized so-called interactive algorithms. Our restriction was that agents may be spread around the world, but when you send me a message, it makes sense to me. Since my world is a mathematical structure, what you send is an element of this world, or maybe a few elements — in other words, not just observable, but meaningful. In a moment I'll say what I am trying to rule out. For this setting we proved — yes, what exactly did we prove? — that if you look at one agent who interacts with many agents, and whenever he receives a message, this message makes sense — not just to him; he is an algorithm — it makes sense in his world: what he receives is an element of his structure, or a representation of an element of his structure. This we axiomatized. There's another thing. I defined distributed algorithms — at the time, I thought, in full generality — and this was used very successfully. For example, there was the ITU, the International Telecommunication Union. They had some algorithm which they programmed themselves; then they revised it. At a certain point the whole thing, their system, became so complicated that they needed help. They wanted their system specified precisely, in a way that would allow them to move forward. So they appealed to all the theorists in the world: please do it. And this one group, which consisted of German professors — not only did they win the competition, at a certain point they were, quite early, the only ones willing to participate. Because this thing had features such as: you can do a lot of things in zero time. There was physical time, but also a counter, as they realized; it's never written down — in zero time you can do all kinds of things. So what kind of zero time is it? So they did it, and I went to Stockholm and gave a talk to a community of people I had never known, on something I had no idea about; I just spoke about what abstract state machines can do — the foundation for what these German professors did. Now, there was not a single example in the world where it didn't work, but in principle it couldn't always work. It's pretty clear that there is no axiomatics for general distributed systems, and the reason is this: you can work on completely different levels of abstraction. Let us first go to the real world. You have a very clever algorithm; you verified it. Somebody comes from outside — a Russian hacker — and changes bits. On the level of your algorithm nothing happens, but on the level of single bits he makes a hole and comes in. So, from the point of view of the mathematical world of agent one, the incoming messages have no meaning. In software engineering they call this the leakage problem: information leaks down. So, you realize that there is an infinite variety of abstraction levels. What I didn't prove, but am pretty sure is true, is the following: if you consider one abstraction level — or maybe finitely many, but certainly one — so that everybody works on the same abstraction level, then it goes through. That's how distributed algorithms are written today; what people call distributed algorithms is all reasonably understood. The implicit assumption is that we all work on the same level. So, when you send me a message, it makes sense to me, or at least it makes sense in my world — a small sketch of this restriction follows. There's a hand up.
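A minimal sketch of that restriction — my own illustration, not the Blass-Gurevich formalism: an agent whose state is a structure over a fixed universe accepts only messages that denote elements of that structure, which is exactly what a bit-level "hole" bypasses.

```python
class Agent:
    # The agent's world: a mathematical structure over a fixed universe.
    def __init__(self, universe):
        self.universe = set(universe)
        self.inbox = []

    def receive(self, message):
        # Restriction: an incoming message must make sense in this world,
        # i.e. denote an element (or elements) of the agent's structure.
        if message in self.universe:
            self.inbox.append(message)
            return True
        return False  # meaningless at this abstraction level: rejected

agent = Agent(universe={0, 1, 2, 3})
print(agent.receive(2))         # True: an element of the agent's world
print(agent.receive("0xDEAD"))  # False: meaningful only at a lower level
```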

--In terms of abstraction and isolation — to me, those ideas are more important to the idea of computing than this structuring idea of computing a function. In other words, to me, computing is taking the physical world and setting up its initial conditions and boundary conditions such that its behavior simulates an abstract system. In order to simulate, you have to have isolation. In other words, you can't have information leakage from the outside world, so to speak, into your system. It has to be its own closed universe.

--Or, a little more generally: in the distributed case you may have exchange, but the exchange should be highly disciplined.

--Right. So, input-output, or communication, is secondary. But again, that input-output or communication has to be at the level of the abstraction.

--Yes.

--That you're producing. It can't have information that leaks in from outside. So that idea of isolation and abstraction, I think, is more fundamental to computation than the idea of computing a function — which is a tiny aspect of a computer.

--You may wonder why I waited eight years — Shor wrote his remark in 2010 or so. I just recently discovered it by Googling — or rather Binging; probably Binging, because I don't want Google to know much about me, and Microsoft already knows everything, so there isn't anything more to know.

--Information leakage.

--Right. So, I was looking for something else and suddenly it popped up.