Lon Chaney's brilliant characterization of Quasimodo in The Hunchback of Notre Dame played on the audience's tendency to stereotype, to prejudge individuals according to their appearance rather than understand them for who they really were.
It's not... Dictate. It's been nearly a century since The Hunchback of Notre Dame horrified audiences in the silent movie houses.
But have attitudes changed much in 95 years, or do we still judge the contents by the cover far too quickly?
Oh... where do we go from here?
Show me writer's block. The current project is untitled.
Continue. You may wish to review your outline, explore the suggestion box, or review a few great writers' methods. Sometimes a short break helps to stimulate ideas.
Perhaps you'd enjoy a pleasant walk.
Oh, it's not that bad. Show me the outline.
Your next heading is: Technology's impact on attitudes toward people with disabilities.
The subheading is: The catalyst for change.
Okay, I need some examples. Where can we go for information about how attitudes and expectations toward people with disabilities have changed in the last, say, 15 or 20 years?
I have a list of consultants who provide data and excerpts in related areas. There is also a media index called Capabilities Enhancement.
Good. Show me the index as a list.
Didn't you guys just finish an entire box of cookies?
Mom, the science thing makes you hungry. Besides, (tired voice) you had some, too.
One. I had one. Well, maybe two. Honey, it's too early for dinner. I'm working against a deadline. Can you wait about an hour?
The lonely unappreciated scientific genius forges on, despite enormous odds.
Right. Happy forging. Bye.
Okay, so much for the hydrogen iodide.
One more unit was done.
No way to say one day things without an invoice is chlorine with this.
Don't do this to me, please.
Come more wondrous end of adventure. Trust me.
Show me this one.
Well, hi there, Cindy. What's cooking today?
I'm cooking a strawberry cake for Mom's birthday.
A strawberry cake for Mom's birthday? Okay, first let's get our equipment.
A big spoon, a mixing bowl, and your special measuring cup.
The first thing to do is to put the measuring cup on the scale.
Good. Now pour in 3/4 cup of milk.
That's it. Keep pouring.
Cindy, you stopped. You need 1/4 cup more.
Here, I'll show you. Fill it up to the yellow line.
He closed it apart.
That's it. Keep pouring all the way up to the yellow line.
Good. Now, make sure you stir the batter until it is smooth and creamy.
Cindy, are you finished yet?
Did you get all of the lumps out?
Okay, now put the baking bowl into the microwave, and I'll take care of the rest.
I'll call you when it's ready then we can put on the strawberries.
(sprinkle, sprinkle, sprinkle, sprinkle)
Well, Cindy, that should just about do it.
I hope you had fun making Mom's birthday cake. I think she'll really be surprised.
Good. Save this in the presentation folder. Show me the main index again.
It's not doing anything.
Good. We'll just submit the results and move on to the next one.
No, we really should try heating it up.
I can't believe this.
Show me this one.
What? I can't hear you.
Sorry. I said, if you'll let my advertising deficit ride for this quarter, I promise to make it up to you in the fall.
Steph, that's what you said last year.
I know, but this year, I'm planning a huge promotion to get ready for the holiday.
I think we can at least double our normal business for the season.
How much stock would you need?
I'm estimating about 10,000 ... and I want to run four weeks of full-page ads in the mail out.
Four weeks, really?
I think we can work something out.
I'll have to talk with my sales manager, but, yes, I think we can.
Bonjour madame, heureux de vous revoir. (Hello madam, happy to see you again)
Bonjour Pierre. (Hello, Pierre)
Puis je vous offrir un apéritif? (Can I offer you a drink?)
Would you like something to drink?
No, no, thank you.
I think we'll just see the menu, thank you.
Baby, I know it's me
Impressive. I didn't know you spoke French.
Well, I don't really, but I understand a little.
Wow, I think we just discovered hydrogen chloride. Now, I'm pouring.
Zack, I just realized something.
And, now for the hydrogen.
Zack, these halogens, as you go up the periodic table, don't they get more volatile?
What were you saying?
Gentlemen, we can discuss this particular reaction tomorrow in the lab.
In the meantime, please, review the program on halogens before it means the end of civilization as we know it.
I'll make sure... right away, sir.
As we move into the '20s, we begin an era of discovery, where doing even simple things can bring great power.
Each new year brings the hope that even our bravest dreams may yet come true.
If Quasimodo is a symbol of the last century's sometimes backward ways, then people like Stephane and Cindy are symbols of the future.
Technology is not the only answer, but it can be and is a catalyst for change.
Steph. Let's shut down.
This document is unnamed. What is your title?
Hey, Einstein, you got any good ideas what I should call this thing?
The project I'm working on.
You didn't finish?
No, I just got started, really.
Chapter 1, that's not a bad idea. Okay, you guys, let's see.
That's my favorite Apple video.
My favorite quote from our next speaker is: the best way to predict the future is to invent it.
I've got some bad news for some of you, though. Alan Kay is not going to invent the future.
That's right. That's your job. Alan's gonna help, though.
He's going to challenge you to forge some new links in that organic computer between your ears.
Please join me in welcoming Alan Kay.
Thanks for inviting me here tonight.
With all of these projectors... I usually rate a conference by how many video projectors there are.
Let me get rid of that guy.
Looking at the number of video projectors there are here, I have to come to the conclusion that Apple must care about our developers very, very much.
Or else John Sculley is going to give a talk tomorrow.
I've given talks to the developers before, and that makes it difficult, because some of my favorite things, the ones I like to show over and over and over until people get sick of them, I'm not going to show tonight.
I know. But what I'd like to do is talk about the three things I've been interested in for more than twenty years about computing.
And they are: how do you find what you need, how do you use it once you've found it, and how do you make it into something that's closer to what you wanted?
Those are sort of the three questions that got us out of the antediluvian age of the mainframe and time-sharing into what we're doing today.
And, there's another round of these things coming up in the future.
We've talked about them in various ways. You've seen videos of the Knowledge Navigator, and you just saw another agent-based one.
But, I thought one way of talking about it in perspective is to think of what we're trying to do is to extend human beings.
Human beings are inescapably technology-bound in the sense that we find it almost impossible to deal with the world on any kind of direct terms.
Part of it is because our brain can't contain the universe: what our brain contains is representations of the universe.
And those representations can't be the universe itself.
So, already we're at one remove, as are the other animals, from what we like to think of as reality.
But we've gone much farther than that. We've put ourselves at many, many degrees of remove.
We put clothing on. We put language on. We put lots of things on.
So everywhere we turn, technology is around except that we never think of it that way.
Technology is all of that stuff that wasn't around when you were born, right?
Stuff that was around when we were born isn't technology: clothes aren't technology. Language isn't technology.
Paper and pencil aren't technology. It's all that new stuff that's around.
Every time we're in the process of trying to invent technology for people, we have this dilemma, which is: if we make it look like stuff that's already around they'll be comforted.
But if it looks too much like the stuff that's already around they won't be helped much.
Each time, we face this design dilemma. It's like the central question that you ask in education all the time: when should it be easy and when should it be hard?
You want to make it easy some of the time so the students won't be so discouraged that they'll give up.
But every time you make something easy, what you're doing is exploiting structures that are already there.
So, you're not building much there, but what you're doing is consolidating.
And then, if things are safe enough you can risk a challenge.
A challenge is going to be hard in various ways. And, some mental structure is going to be built.
And then, you need to make it easy again.
What the Mac did, I think, is to find a way to make some things that seem to be hard easy.
What we have to do in the next few years is find a way of getting out of the kazoo range.
One way of thinking about it is in musical-instrument terms: most musical instruments that are very expressive are difficult to learn how to play, like the violin.
The reason is that they have an incredible number of parameters that you want to really control in order to get that expression out.
The violin has a learning curve that is sort of like this: you have to climb up a thousand-foot cliff over a period of two years before you get to do anything.
Then, you start progressing in various stages. What we've done in the Mac is more kazoo-like:
taking something that people can learn to do basic operations with in a few seconds.
And, with an enormous amount more difficulty they can actually create things themselves. So, we have this.
As long as people stay in a very simple level on the Mac everything is fine.
But then, we have these discontinuities of pulling them up to all of the things that the computer can really do.
The context, one way of thinking about it, is in terms of these extensions of humans.
And the ones we always think of are things like screwdrivers and wheels. Less often, we think of tools as language tools, as mathematics.
I like to think of all of these things as extension of the grasp.
The M word for me is manipulation on these things.
Even for things like mathematics. Mathematics is a way of taking things that are too abstract to deal with directly,
making them into little symbols and bringing them down: no matter how big or how small something is, we can make it into something that is roughly the same size.
We can manipulate them in a way that is almost impossible to do in the real world.
The M word for the tool extensions I think of as manipulation.
Now, we have another way of extending ourselves over the last several hundred thousand years. It's a little more subtle.
We don't often think of it. That's by using agents. An agent is an entity that is able to take on some of our goal structure.
We've used horses as agents. We sometimes use dogs as agents. The puppy will go out and bring the paper in.
Most of the agents that we've used throughout history are us humans.
We must like to do that. We're social creatures.
We have a strong propensity to want to take on other people's goals. And we also have a propensity for trying to get other people to take on our goals.
And human society has been built up out of that. As Lewis Mumford said... he wrote a great book.
He was actually sort of an architectural critic, but he also wrote more generally about the plight of humans. He wrote a book called Technics and Civilization, in which he called structures like the one we have out here tonight mega-machines.
He said that, for most of human history, most of the machines that humans have created have had other humans as moving parts.
So, we make cities. We make cultures. We have hunting groups.
We have all different kinds of things. These are a microcosm of the general human situation.
Of course, one of the useful things about agents is that they can use a tool on your behalf.
You don't have to be around when they're using the tool.
What's even better about agents is that they can get other agents to do things.
Agents can proliferate your goals in a way that tools are not set up to do.
The M-word here is management. So, we manipulate tools; we manage agents.
I think of tools as something that we look at and manipulate.
And agents are something that look at us, and we manage them.
It's a very different way of dealing with them, but they are the two main ways that we've extended ourselves over the years.
One of the biggest problems when computers came out is that the mainframe didn't look like either one.
The mainframe was out of human scale, and we have mechanisms in our brain that treat things out of human scale religiously.
When something is out of human scale, we start making up myths about it, and there are priests and all the paraphernalia that have gone along with mainframes over the years.
When people started thinking about this, in the late '50s or so, they went in two directions:
one group of people started thinking about making these things into tools, and another group started thinking about making them into agents.
Both of these ideas go back to maybe 1957 or '58. The first really good interactive debugger was done at Lincoln Labs around 1957.
The SAGE air defense system was done in the mid-'50s.
The first pointing device, called the light gun, was used back then. McCarthy, around 1958, wrote a paper about the Advice Taker.
In this paper, McCarthy, who is one of the founders of AI (this is why he got into AI), said it was quite obvious that in the near future... (Unfortunately, John is still alive because this hasn't happened yet, but see how optimistic everyone was back then. To him it was obvious.)
He said that in the near future we will be embedded in the midst of an information utility that is as dense and as one-for-one as our power and light utilities.
He realized right away that, presented with a wealth of such resources, there'd be no possible way you could deal with these things directly.
So he said we have to have something that I'm calling an advice taker: an artificial intelligence whose job it is to try to take on our goal structures and work on them autonomously.
And the way we will deal with it is we'll give it advice. In other words, we'll manage it. We won't program it. We'll manage it.
This started off a very long-standing effort; it's gone on for many, many years now. The latest, most interesting project along the lines of what John McCarthy wanted to do is called Cyc, a model of human common sense done at MCC by Doug Lenat.
If you're interested, I refer you to that work. He's also at Stanford, and he's just written a pretty good book about it.
This goes all the way back to McCarthy's original ideas on this. It is a very hard problem.
But McCarthy's insight was very strong because we are going to be embedded in the midst of an information utility.
It's happening willy-nilly. AT&T could have done it after the divestiture, but they were frightened of the idea.
Somebody told them they had a network, they said: "network? We thought we had a telephone!"
And they fired the guy they had originally brought in to do this project called Baby Bell, which was going to be a pervasive network, in the early '80s, that people could write applications to.
But the point is that we're just now starting to go into a change as large, I believe, as the one from the mainframes to what you're doing today.
The question is: how frightening is it actually going to be?
Well, I got a big surprise. This is the first personal computer I did; it was called the FLEX machine. (Must be a critic.)
I did it inspired by the work of Doug Engelbart, over here at SRI. Great guy.
This is one of the great things about our business: we've compressed 400 years of ordinary technology history into 40.
So, all of these great people who had these original ideas are still alive, and we can tell them that...
You know, giving a testimonial to somebody after they're dead really stinks. It's wonderful that these people are still with us, because they are
real heroes: they did this stuff when it was really hard.
We think it's hard now, but it's not even close to what it was like back then.
I think of him as the actual father of personal computing.
He didn't do the first personal computer; that was done at Lincoln Labs in 1962.
But he was the guy who thought about the user's relationship to the machine in the way we think of it today with personal computing.
I think that's the most important part of it. So, I got really excited. The main bug in Engelbart's system is that he tried to do it on time-sharing.
You don't have enough cycles to do the user interface the way it needs to be done. So, it was just getting started. This is 1967 or so.
General Instruments, I think it was, a company that is no longer with us, had just come out with a marvelous chip that had 512 bits on it:
a 512-bit ROM.
You know how we went to the moon? Have you guys ever seen anything called core rope?
The way they programmed computers with read-only memory before then was they had magnetic cores, which are about this big, and they were magnetized.
You programmed them by stringing wire through them. Each core would have as many wires as you could string through it.
Each wire was a sense wire.
The computer that the astronauts went to the moon on was programmed by a tangle of wire of about two cubic feet that was held onboard the spacecraft.
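The storage scheme Kay describes can be sketched as a toy model: each stored word corresponds to one sense wire, and that wire either threads through a core (a 1 bit) or bypasses it (a 0 bit). This is only an illustrative sketch with invented names; the real Apollo rope memory was far denser and organized quite differently.

```python
# Toy model of core rope read-only memory (invented names; illustrative only).
# A word is "woven" as the set of cores its sense wire threads through.

def weave(words, width=8):
    """For each word, record which core positions its sense wire threads through."""
    return [{bit for bit in range(width) if (word >> bit) & 1} for word in words]

def read(rope, address):
    """Reassemble a word from the threading pattern of its sense wire."""
    return sum(1 << bit for bit in rope[address])

program = [0x4F, 0x00, 0xA5]  # three 8-bit "instructions"
rope = weave(program)
print([read(rope, i) for i in range(len(program))])  # [79, 0, 165]
```

Changing the program meant physically re-stringing the wires, which is why a 512-bit ROM chip felt like such a liberation.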
So, a 512-bit ROM was a big deal. We could do microcoding in a way that would not completely drive us crazy.
But the problem with this approach, and I think with the approach of this middle way of going about it, is
that Engelbart's user interface was too violin-like.
If you were willing to spend many, many hours getting expert at it, you could do truly amazing things.
There's a real discontinuity. In 1968, I saw a terrific system done at RAND which did hand-character recognition.
That changed my whole notion about machines, because Engelbart's way of thinking about it was that the mainframe is sort of like a railroad, and somebody needed to be Henry Ford.
We don't want IBM or these big companies telling us what we can do with a computer.
Everybody needs their own personal vehicle and that was a very powerful metaphor back then.
But the thing that struck me when I saw the GRAIL system was that the computer is much more like paper.
It's more like dynamic paper, and that changes every relationship. With a car, we wait until people are 18 or so before they learn how to drive, which is really ridiculous when you think of it: that's the most dangerous period to learn how to drive.
A ten-year-old kid is much more sensible.
The idea is that if the thing is media, if it's like paper, then it should extend into the world of childhood, and that's a completely different relationship.
It's an intimate relationship. So, we wind up with these neat slides that Larry Tesler did a couple of years ago, which are a way of characterizing these three major ways.
You can think of the institutional, mainframe computer as sort of like the Ptolemaic system of astronomy.
The middle one is sort of like Newtonian physics.
The one on the right hand side is maybe the theory of relativity or something modern.
The important idea is that these are huge changes. They are not progressive changes that are just about computers getting smaller.
They're actually changes in point of view. Big changes in the relationship of the user to the machine.
I'll just give you one example: take the mainframe, 3270 glass-teletype way of doing things.
That extends to the IBM PC, because the IBM PC was sort of a way of doing a small mainframe without adding any new insights into how you were to interact with it.
The basic idea of user interface on these machines is to think of it as access to function.
These machines have their function keys, control keys.
What they want to do in the user interface is provide access to function.
A lot of people who are trying to fit MS-DOS applications to MS Windows are simply mapping function keys into pulldown menus.
They think of it as access to function. Now, that's not what the Mac, in this middle category, is about at all.
What the Mac is about is making the users aware of what the possibilities are.
Its number one task is to gently teach you all the things that it can do and make you aware each time of what can be done next.
Putting things into pulldown menus as a primary strategy is a terrible one.
When I see a Mac application that has nothing visible but menus, I say: "Uh-oh, this thing probably ran on an IBM PC at some point, and they're just trying to put Béarnaise sauce on the hot dog."
But you have to do more. You have to change the user's relationship to the system and realize that your major task is to have the users learn as they go along.
That is such a large revolution that most of the people who are trying to imitate it don't understand it.
We're just starting on the next revolution which is going to be equally cataclysmic because the computer that goes wherever we are will probably not even have an on/off switch.
It's not going to be a standalone laptop. It almost certainly will be hooked into digital cellular.
It will not just be your phone, but it will be constantly trickling down information by means of agents.
So that you will rarely have to do a proactive thing on the computer.
You will have a kind of panorama of the up-to-date resources that you need as you go along.
So, this is a change from reactive to proactive. My thesis in 1969 was about the FLEX machine; I called it the reactive engine. The Mac is a reactive engine.
But, what we're going to have in the next few years is a proactive engine.
That's a proactive engine that's going to be embedded in a pervasive network.
Not like a local area net, not sharing files, but ones in which we are going to get resources not from a computer store.
Instead, somebody in Timbuktu is going to write us an application.
They don't know about us, but they're going to write a component that is exactly what we want, and our agent is going to find it for us.
It better not be in Sanskrit when it shows up. So we have this problem.
How do we find things? Right now, we find them by going to a store; it's usually on a floppy, sometimes on a local area net.
How do we make use of those things once we've found them?
And, how do we make them more into the thing that we actually wanted?
Those are the three questions that I want to ask over and over because I think these are the driving questions for the next 10 years.
And the major difference is that we're asking those questions now on the Macintosh.
A few years from now, we're going to have to start seriously asking them about an entirely new way of doing computing.
So, these are a couple of Larry's examples, which I think are particularly nice.
People worried about response time on the mainframe.
Take the 3090. On a typical 3090 today, at a public utility that I'm familiar with, you get 0.05 MIPS per user.
It's about a 30-MIPS machine, but there are 450 terminals on it.
What they worry about when they do software on it is what they call path length.
Has anybody in this room ever heard the expression "path length"? Right.
I rest my case. Path length is all they talk about at companies like Arthur Andersen and IBM.
Path length is how many millions of instructions have to be executed by the mainframe before you get a response back to the terminal.
Right? We don't worry about that. Instead, what we're in is a horsepower race.
Most of you here are too young to remember the '50s. This was before the gas wars. In the '50s, we had these great old Dodges with 450 horsepower.
They got about 6 miles to the gallon, and they made a lot of noise.
You could lay a strip of rubber a block long in these things. That's where we are right now.
Everybody is sort of thrusting around with a number of MIPS that they have.
But, believe me, it's irrelevant. Totally irrelevant, because all we're going to be worried about five and ten years from now is access.
We're going to have tons of MIPS: 50 to 100 MIPS in 1995, easy.
Much more than that, actually. We're going to be drowned in MIPS.
We won't know what to do with the MIPS. In fact, the MIPS are going to disappear from our view, just like path length is something that we no longer think about.
We're going to worry about whether we can find what we need out of the trillion objects or more that we're connected to dynamically.
Can we find it?
How do we integrate information into something that is closer to home? We don't, on a mainframe.
The Mac does it by cutting and pasting, and I think everybody believes that what we need is dynamic linking of various kinds.
System 7 has a way of doing dynamic linking. But, dynamic linking is more than just passing information from one place to another.
Philosophically, dynamic linking is much more important.
Dynamic linking means I have more dimensions in the computer than I ever have in my information space.
You have this wonderful wonderful thing you can do.
If you give me two dimensions and I make a spot here and a spot here, I've generated the idea of distance.
If you give me one more dimension, I can always get rid of the distance.
And the computer is that thing that always has one more dimension than any dimension of data that we have.
One way of thinking about linking is the classical way.
But a stronger way of thinking about it is that what you're trying to do is get a higher-dimensional space on the data that people are interested in than the data has itself.
If you can do that, then you can make all of the relevant stuff seem to be in the same place.
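Kay's point that one extra dimension can abolish distance can be sketched with a toy example (all names invented): two items far apart in a flat 2-D layout become effectively adjacent once a link connects them.

```python
# Toy illustration (invented names): a dynamic link acts like an extra
# dimension, collapsing the distance between far-apart items.
import math

positions = {"report": (0, 0), "memo": (3, 4), "appendix": (300, 400)}  # 2-D layout
links = {("report", "appendix")}  # one dynamic link

def distance(a, b):
    """Linked items count as adjacent; otherwise use plain 2-D distance."""
    if (a, b) in links or (b, a) in links:
        return 1
    (x1, y1), (x2, y2) = positions[a], positions[b]
    return math.hypot(x2 - x1, y2 - y1)

print(distance("report", "memo"))      # 5.0: ordinary distance in the flat layout
print(distance("report", "appendix"))  # 1: the link collapses a 500-unit gap
```

The same trick, applied to every pair of related objects, is what makes all the relevant stuff seem to be in the same place.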
A critical insight first had by Engelbart.
Feel of interaction: on a 3270 or an IBM PC, it feels like editing. On the Mac, it's layout; you're moving around 2-D things.
Larry says orchestration, but I like to use the word conducting, because what we're going to be manipulating in this next revolution is going to be active, proactive objects, not passive ones.
Issuing commands: again, think institutional. You have to remember and type.
You know, there are all of these SOP manuals.
I just had this amusing experience: I spent thirty-five minutes trying to take a Mac portable out of Apple.
And, I had already signed for it in one place, but nobody had thought to give the guards a Mac with a link into the database that happened to know that I'd already signed for it.
We don't use badges down in... we don't need no stinking badges down in Los Angeles.
In fact, nobody knew that I was kosher. It took a very, very long time to get the thing.
If there's anybody who'd like to steal a Mac portable, I can tell you it can be done in 35 minutes.
The progression goes from remember-and-type, to see-and-point on the Mac, visual, and then to ask-and-tell and gesture.
Going from institutional, to personal, to intimate.
Groupware is big. I think this is probably one of the topics of conversation: developers are starting to bitch at Apple about servers and networks. Keep it up.
Good, because you have to realize that we have a symbiotic relationship with you.
You are making our company by providing the content that runs on our machines. We're trying to help by doing some of the research.
But in order for us all to get into this third phase and not leave it up to somebody like the Japanese to do, we have to be willing to turn the corner at the same time.
And so the more the developers agitate for support for getting into this third way, this third paradigm of computing, the easier it is for Apple to decide to do it.
It works both ways.
Here's an interesting one.
The first thing is probably foreign to you, but it's what goes on in the other world out there: the world slightly to the east of the San Andreas Fault.
What happens is that companies like Arthur Andersen get hired by a public utility, for example, to do a billing program.
They sit down and they have these design tools, and they figure: "hmm, this program is going to take about 1.6 million lines of COBOL to do."
This is an actual example. I can tell you this because I'm on Arthur Andersen's Advisory Board.
So, I get to see stuff like this. "1.6 million lines of COBOL, and it will take about 250 people three years to do.
And, that will be 22 million dollars, please." Okay, they do this all the time. Their gross is about 1.5 billion.
An enormously large part of that comes from these kinds of jobs. A 3090 mainframe, 450 terminals. Got to have good response time and all that stuff.
It's unbelievable what they do. They use these CASE tools. CASE tools are an orthotic brace for a hopelessly crippled patient.
They come up with these custom applications. Now, in this middle, personal area, the Mac sort of acts like it's object-oriented on the outside and a little bit less object-oriented inside.
But, we're all supposed to be thinking object-oriented. So, it's a whole different way of doing things.
Just to give you an example, because the Arthur people are interested in object-oriented stuff, I said: "Well, instead of doing this thing with 1.6 million lines of COBOL, why don't you try to convince the public utility to try doing it in an object-oriented form, and let's see what happens."
So they did, and they had some people who had been doing object-oriented programming for five years at Arthur.
Now, the problem was there wasn't any object-oriented system on the mainframe, so they had to build one.
So they prototyped it in Smalltalk. And then, on the mainframe they wrote an object-oriented environment using PL/1 macros, and implemented this billing system.
Here's the interesting thing: a year and a half later, they were done. They did it with 30 people.
And, the code size was 110,000 lines of PL/1 code, counting the code they had to write for the environment.
So, that's a factor of about 14 and a half reduction in code size.
A year less, and about a factor of 8 fewer people. They were able to certify the system in a month, instead of a year, because they were able to make changes and additions and fix bugs much more quickly.
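The factors quoted here follow directly from the figures given in the talk, and can be checked with simple arithmetic:

```python
# Figures quoted in the talk for the public-utility billing system.
cobol_lines = 1_600_000   # estimated COBOL implementation
pl1_lines = 110_000       # actual PL/1 implementation, including the OO environment
cobol_people, pl1_people = 250, 30

code_ratio = cobol_lines / pl1_lines      # the "factor of about 14 and a half"
staff_ratio = cobol_people / pl1_people   # the "factor of 8 less people"

print(round(code_ratio, 1), round(staff_ratio, 1))  # 14.5 8.3
```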
This project is now being written up as a case study.
This is an amusing project to me because it is, simultaneously, the smallest mainframe program that has this functionality, and the largest object-oriented program that has ever been written.
Anything that's over an order of magnitude should catch our attention as a way of dealing with things.
The thing that shocked me about this was not that they got some improvement in code size, but the fact that the improvement was greater at that size of code.
Basically, it probably says something really ugly about COBOL,
rather than something good about object-oriented programming. But I was aghast, because I've seen typical factors-of-ten improvement on what I think of as large programs, but they are very small by this scale.
The idea of it actually going up to almost a factor of 15 on a job this size was shocking.
And we're now getting to the position where we can't afford not to do object-oriented programs on the machine we dearly love, namely the Mac.
We have to do it.
This is not a plea for Apple events, but it's just to get you to realize that what Apple events are about is trying to find an object-oriented protocol for things that are big and ugly inside, but that are nonetheless going to have to be treated as objects in order to make progress as far as integration is concerned.
Now, the important thing is if you take a look at the right-hand panel there, what we see is not generic tools.
The generic tool is like a spreadsheet, like a desktop publishing system.
We can't afford to have those anymore, guys. We cannot afford to have one company, like Letraset, try to do all the tools you need for desktop publishing.
You can't do it like Lotus's failed attempt with Jazz.
You just can't build in all of those tools.
What you really want to do when you do a system like that is make an operating system for doing desktop publishing.
It's a Finder for desktop publishing.
You can build in a few tools, but you should let fourth-party developers build the extra fancy tools in the toolbox.
Think of what it actually means. You know, the person who comes up with the best one of these is always going to get bought.
That's the operating system part of the thing.
But you can spread the risk, and widen the functionality, by getting other people to develop tools for you.
This is why you have to go object-oriented. You don't want to go object-oriented in the way that Smalltalk went many years ago.
Smalltalk is almost 20 years old now. It's the middle kind of object-oriented system.
You want to go object-oriented in the way on the right, which is towards components.
Let's think of what the destiny of a component actually is.
Well, to me, if you move a piece of data from one place to another, you're doing all sorts of awful things.
You're giving the receiving end permission to zero out any of the fields; the data is quite unprotected.
You're requiring the receiving end to write a lot of code to understand the data and use the data.
If you ever change the format of the data, you're requiring the receiving end to be able to cope with the transmitted changes of those formats.
This is really ugly. It's a terrible way of doing things.
If you move an object, classical object, from one place to the other, all the important code that knows about the internal formats and stuff goes with it.
This is the promise of object-oriented programming. But you still have to write code at the receiving end.
In order for that object to be a component, it has to adhere to some sort of standard protocol.
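The contrast between shipping raw data and shipping an object that carries its own behavior can be sketched in a few lines. This is only a hypothetical illustration; the `Invoice` class and its fields are invented for the example, not taken from any real system:

```python
# Hypothetical sketch: moving raw data vs. moving an object.

# Moving raw data: the receiver must know the format, and can
# clobber any field. Nothing protects the data.
record = {"name": "Invoice 42", "total_cents": 1999}
record["total_cents"] = 0  # nothing stops this

# Moving an object: the code that knows the internal format
# travels with the data, so receivers use only its protocol.
class Invoice:
    def __init__(self, name, total_cents):
        self._name = name
        self._total_cents = total_cents  # internal format stays hidden

    def describe(self):
        # The receiver needs no knowledge of the fields.
        return f"{self._name}: ${self._total_cents / 100:.2f}"

inv = Invoice("Invoice 42", 1999)
print(inv.describe())  # → Invoice 42: $19.99
```

If the internal format of `Invoice` ever changes, only `describe` has to change; the receivers are untouched, which is the point being made above.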
That's what Apple events is supposed to be about. But I think you can see that that is not going to last in the world of pervasive networking, right?
It's so hard, you know, the joke is that the only people smart enough to do a standard are too dumb to do a good one.
So the idea that we can go into the world of pervasive networking, with upwards of a million applications and hundreds of millions of things being transmitted around, and have them all adhere to the same protocol, is farcical.
So what do we have to do? Well, these components have to be self configuring.
They have to be things that when they are sent to a receiver, they can configure themselves, so they may have a completely different protocol than the receiver can do.
But the receiver and the sender have to be able to work out what the protocol is going to be.
That means that the components have to be much more self describing, even than objects are.
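One way to picture a self-describing component is a toy negotiation like the following. Everything here is invented for illustration (the `Component` class and the protocol names are not from Apple events or any real protocol); it only shows the shape of the idea, where each side advertises what it can speak and the two work out a common ground:

```python
# Hypothetical sketch of self-describing components negotiating a protocol.
class Component:
    def __init__(self, name, protocols):
        self.name = name
        self.protocols = protocols  # protocols this component can speak

    def describe(self):
        # Self-description: advertise capabilities instead of assuming
        # the other side already knows our internals.
        return set(self.protocols)

    def negotiate(self, other):
        # Pick a protocol both sides understand, preferring our order.
        common = [p for p in self.protocols if p in other.describe()]
        return common[0] if common else None

sender = Component("chart", ["rich-draw-v2", "plain-text"])
receiver = Component("editor", ["plain-text", "styled-text"])
print(sender.negotiate(receiver))  # → plain-text
```

The sender would prefer its fancy protocol, but since the receiver doesn't know it, both fall back to the richest protocol they share.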
This is a real challenge. It has not been done successfully. At the end of this talk, I'm going to show you a couple of examples of some experiments in this because I think it's a really interesting way of thinking about the future.
Finally, what's the key to all this stuff? Well, if you ask somebody for a screwdriver, and they just give you this, you would get very angry.
They say: "what's wrong? You know I'm giving you a screwdriver this is a mainframe. This is the functional part of it. What are you complaining about?"
You say, "Well, no, I want the user interface, because if I don't have that, I'm not going to have a tool."
Now, when I was making this slide, I looked at this, and it occurred to me that this was the dumbest design for a screwdriver that I'd ever seen.
I'd never looked at it before. I was thinking, wow, the mechanical advantage is the ratio of the diameter of the handle to the diameter of the shaft.
That's small. I get the most purchase on it by grabbing it like this, but if I do it like this, it slides off the screw.
If I hold it the way it wants me to do I get very little leverage.
So I started thinking, "What should a screwdriver look like?" What should it look like?
A ball. Has anybody ever seen a screwdriver like that? Yeah. Somebody said this is...
Now, what's interesting: I have a book from Diderot's encyclopedia, which was published in France in the 1700s.
And, there are screwdrivers in there that look exactly like the ones we buy in Sears today.
So this is the MS-DOS of screwdrivers.
But the moral of the story is that screwdrivers were made like this for literally hundreds of years.
Hundreds of years, and nobody thought "gee, it should look more like a ball", until recently.
That's what we have to always be looking at.
Just because something has been around for a long time (just because C has been around for a long time) doesn't settle the question.
What you want to ask is: is this thing an old kind of tool, or is this a new kind of tool?
User interface is the key here.
The important idea in the user interface is that you can change the relationship of users to their knowledge by giving them a different kind of representation system to think about it in.
That is a key notion. It is incredibly difficult to deal with numbers in terms of Roman numerals.
I was just telling somebody today: only the nerds in 60 BC could multiply numbers together, right?
In any population, 5% of the people are natural-born nerds, for whatever it is.
We're all in this room together. But the other 95% have other, more reasonable, more balanced pursuits.
The way to get them to multiply two numbers is to come up with Arabic notation, which has a simple algorithm; then everybody can learn how to do it.
If you want to communicate with an animal, then come up with a representation system that will link what you want to communicate with what the animal can deal with.
This is an example, from some years ago, of a chimpanzee using a symbolic language of icons that worked quite well.
So that's what we want to get interested in. So let's just take a look at a couple of examples here that I think are important to consider.
The first one is: how do we find our resources today? Well, generally we go into a store; sometimes we get it on the network. But we don't have an interesting way of finding things today.
How do we make use of what we've found?
Well, we have a user interface that seems to work pretty well. Let me give you an example.
This is a 22-month-old little girl; her mother is my accountant.
Both of her parents, when this was taken, worked at home. Each of them had a Macintosh.
When I found out the little girl was interested in computers I gave her an Apple II, which she rejected.
This is about 1985, and I originally used this video to try and convince Apple that they should put a hard disk in every machine.
So, I want to warn you that this is not a first time user you're looking at. So, don't be impressed here.
She's been using the Mac already for about six months.
Now, even though this interface was originally developed with children in mind, it was still a little startling to see a 22-month-old use it like this.
But of course, why shouldn't she be able to use visible menus and MacPaint.
That seems reasonable enough. So, she starts doing things, and I said, "All right, I believe this. This is not too impressive."
But what happened next really amazed me. She wants to get a clean sheet of paper.
So she goes up to the close box of the window, and she saves her old drawing using the pop-up and then she goes to the pulldown to get a new one and she's off and rolling again.
Doesn't this make you want to buy a Mac?
That's what's great about children: you can make them; you don't have to buy them.
This was so intriguing that we took about another nine hours of her doing various things on the Mac.
And, we discovered that she was about 70% literate in the Mac user interface.
She could even start up a really hard application like PageMaker.
She could make some marks in it. She could print the marks out. She could save them, get them back.
About 70% of the things you expect to carry over from one Mac app to another were already part of her vocabulary at age 22 months.
The mouse for her was about the size of a brick. But because of that, it was actually more stable than a pen.
She couldn't really use crayons yet, but she could use the mouse in a sensible way.
Our solution to "how do you use it once you've found it?" is that we want Mac applications to be similar, so that when you learn one, you've learned 70% of the next one.
Every time we do that, we're actually helping not just ourselves but our colleague developers, because you guys really are colleagues; you aren't competitors.
Every time the synergy comes together, especially when you start doing components for each other, it'll be much more of a win-win situation, because every time somebody does something that can act as a repository for other things, everybody gets to implement towards it.
Now, in this world, how do we make it more like what we'd like it to be? That is really hard.
In HyperCard, we can go in and look at a script and move a button around.
All the applications you guys are doing are not that way.
Have you ever opened the hood of a Cadillac? If you wanted to find the carburetor, you wouldn't even know where to look for it, even if it had one.
It can't fire off one cylinder without energizing about a hundred thousand transistors.
It's nothing like a Model T. That's what it's like if the users are allowed to pop the hood of your applications.
There's very little that they can do to customize it, and yet every user you have, within the first month of using your tools, has ideas about the way it should actually be.
If you give them something like an application written in HyperCard, you can watch them making little changes.
Maybe one or two changes a week is all they do, but after a few months, they've got it much more like the way they would like it to be.
In order to get ahead, you're going to have to start thinking about making your applications that way.
Because that will give you the possibility of doing something like this.
This is an application like MacDraw.
This one you might only spend 99 bucks for at Egghead. But in fact, this movie was taken in 1975.
This application, not the drawing here, but the application was designed and implemented by a twelve-year-old child.
So, this is an example of end-user programming. This is almost impossible to do in HyperCard because she's not using pre-done graphics primitives here.
She did the graphics programming herself as well.
The entire amount of code she had to write for this was about 50 lines of code.
About one page of code to do this application.
This horrifies college professors when I suggest they do this as a term project for their beginning programming class, because it's too hard in Pascal.
But this is the kind of thing where, if you think of the system ahead of time as something the end user is going to do more with than just tweak the parameters you've set, then you'll design it in a different way.
What you'd like to see is this: even if you've got a Cadillac down there (and most of you guys have Ferraris, written in things much worse than C, in screaming machine code),
even if you've got all that stuff down there, what you should start thinking about doing is giving the user a Model T version of it to look at.
When they pop the hood, there's a schematic version of what you have optimized down below, and you let them change some of those things.
It'll change their entire way of dealing with the machine and of thinking about you.
What about agents? As with the screwdriver, if you ask for just the functionality of an agent, you get one without a head.
You need language and context to go along with it, and finally you get an agent who can actually do something for you.
The important thing to remember is that the agent is watching you. You're not watching it so much anymore. It's watching you.
And, you want it to track your goals in an extremely high-resolution way.
I thought for quite a while about what I could show you that would illustrate an agent tracking goals.
And, I found an agent done at MIT that I think illustrates this idea.
Here, the computer is playing the harpsichord.
Well, that's impressive, but of course the flute player didn't make any mistakes. That ain't the way it really goes.
How did that work? Well, the computer had a model of the whole set of goals to be accomplished, including what it was supposed to do, and what the human was trying to accomplish.
It was tracking both of them, and making some judgments about whether the guy was playing expressively and so forth.
But you'd really like this thing to be extremely robust in terms of errors.
So here's an example of that same flute player now playing much more like an amateur, or the way we would be actually dealing with this situation.
You can imagine the computer saying, "What is this guy doing?"
If you think about what was going on there, it's fairly sophisticated, because it doesn't know whether he's playing expressively or has made a mistake.
It doesn't know where he's going to go to pick up again. So it has to kind of noodle around, pattern-matching ahead in the immediate space of where they were, trying to figure out when the guy is getting back on track again.
That's precisely what our agent based software is going to have to do.
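A toy version of that score-following behavior might look like the sketch below. The `find_rejoin` function and the note lists are invented for illustration; this is not how the MIT accompaniment system actually worked, just the shape of the "search ahead for where the player rejoins the score" idea:

```python
# Hypothetical sketch: when the player's notes stop matching the
# score, pattern-match ahead in a small window for where the most
# recent notes line up again.
def find_rejoin(score, played, window=8):
    """Return the score index just after the spot where the last few
    played notes match, searching only a window ahead; None if the
    player is still lost."""
    recent = played[-3:]  # the most recent notes heard
    for i in range(len(score) - len(recent) + 1):
        if i > window:
            break  # don't search arbitrarily far ahead
        if score[i:i + len(recent)] == recent:
            return i + len(recent)  # resume just after the match
    return None

score = ["C", "D", "E", "F", "G", "A", "B", "C"]
played = ["C", "D", "X", "F", "G", "A"]  # a wrong note, then recovery
print(find_rejoin(score, played))  # → 6
```

The real system also has to weigh timing and expressiveness, but even this crude matcher shows why the agent has to hold a model of the whole piece rather than just react note by note.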
This is one of the first tasks. The interesting thing about agents is that they don't have to be terribly smart.
This one is not terribly smart, and it works quite well.
One of the first agents I designed is one that simply stayed up all night and found you the newspaper you'd most like to read next morning.
It did it by looking at 12 different databases, spending hours sifting through things.
When it found something that looked interesting, it had a video disc with 45,000 pictures of famous people on it.
So, if Mitterrand was mentioned in an article, it would find a picture of Mitterrand. Another one had 37,000 maps on it. So, if Paris was mentioned it could find you a map of Paris.
It would do the kinds of things people do when they gather newspaper articles, and it would present it to you as a laid-out newspaper the next morning.
The headline might say "New Fighting in Afghanistan," or it might say "Your Three O'Clock Meeting Was Cancelled Today," because one of the news sources it went after was your own electronic mail.
So if something important came in at night, the agent would recognize, "Oh, this is important; I'd better make it a headline."
This is one of the hundred most interesting things I found tonight.
You can imagine a sidebar saying your children slept well last night, right?
The idea is that every time a technology comes along it redefines news.
The major thing about agents is not how smart they are. This is why in some sense these futuristic videos we do at Apple are misleading because they always show these incredibly smart agents.
It's what we call the Beethoven complex in AI, which is the belief by most AI funders that you have to do something much better than a human before it's interesting,
whereas in truth we can't even do a decent cat yet.
The important thing is to realize that it is not the intelligence of the agent that is making it so useful, but its ability to act autonomously.
The places where it can best act autonomously are in information retrieval over enormously large possibilities.
Possibilities that are so large you would never be able to do them by hand or ever want to.
I believe that the first agents that are going to be commercially viable are going to be all of that kind.
They won't have a guy with a bow tie; they won't need a guy with a bow tie.
You know, eventually, when the agent gets smart enough to start making you think anthropomorphic thoughts, you're going to have to do something to let the user understand at what level of intelligence this thing is operating.
So you will have something like a cartoon character.
But initially, the most important idea is that agents can do things for you while you're somewhere else.
It's taken an amazing number of decades to do the simplest agent that anybody ever thought of.
That's the one that automatically downloads your mail onto your hard disk while you're doing other things.
It's unbelievable to me. The first good mail system I ever used fifteen years ago did that.
This is sort of an Apple problem, in a sense: the people who did AppleLink refused to admit that most Macs have hard disks on them.
Because of that, they make you wait for the modem, and wait for the modem.
You don't have to wait for the modem. An agent can download the stuff and have it ready to go, and it can even do some of the reading of it while you're doing other things.
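That simplest of agents can be sketched in a few lines: a background worker that moves mail onto local storage while the user keeps working. Everything here is a stand-in (the message list plays the role of the mail server; `fetch_next_message` is not a real mail API):

```python
# Hypothetical sketch of the simplest agent: a background thread
# that fetches mail to local storage while the user does other work.
import queue
import threading

inbox_on_server = ["msg 1", "msg 2", "msg 3"]  # stand-in for the mail server
local_disk = queue.Queue()  # stand-in for the hard disk

def fetch_next_message():
    # Stand-in for a slow modem transfer of one message.
    return inbox_on_server.pop(0) if inbox_on_server else None

def mail_agent():
    # Runs autonomously: the user never waits for the modem.
    while (msg := fetch_next_message()) is not None:
        local_disk.put(msg)

agent = threading.Thread(target=mail_agent, daemon=True)
agent.start()
agent.join()  # in real use the agent would simply keep running

print(local_disk.qsize())  # → 3, all mail ready to read locally
```

The point is not the threading machinery but the autonomy: the transfer happens on the agent's time, not the user's.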
Why doesn't one of you guys do one of those things?
You know, you can do agents now. So start doing it. We need them.
The only way we can get better at this stuff is to have a lot of these things to use and to criticize.
It's the most important aspect. So, this is sort of an old fart's talk, I realize.
I'm working on my old fart's merit badge.
But the way we make progress here is by trying to go for the good stuff, trying to find a romantic ideal, and having faith that the money will come from doing something good.
That's how the Mac got there in the first place. The Mac was a romantic ideal of trying to change people's relationship with the computer.
Now, we're all making money on it, but if we just set out to do money we would have done another MS-DOS clone like Compaq does.
The same thing is true for this stuff. We can actually push into this new era and be one of the leaders for this new way of proactive computing.
And we can do it, starting now, on the Macintosh, with your help. As always, the strongest weapon we have to explore a strange new world like this is the one between our ears, providing it's loaded.
Thank you very much. Thank you.