I sort of want to talk about machine learning and intelligence.
I spent a bunch of time over the last few weeks messing around with Unity’s machine learning tools. I won’t be putting together any kind of tutorial or forwarding any real actionable info here, because honestly I don’t think I’m in any position to offer any. I have a better idea of how ML works and what sorts of problems it seems to be good at, but I think there are some very harsh limits on what machine learning is capable of. I also think there are a lot of people out there who are quick to overlook those limits. There is this commonly held belief that machine learning will be the thing that makes computers smarter than humans. Let me tell you: Skynet, this ain’t.
If you aren’t familiar, machine learning, or neural net computing, or TensorFlow AI, or evolutionary algorithms, or any of a million other names, are ways of getting a computer to solve problems by iteratively teaching itself. Rather than having a programmer study the problem and come up with a generalized solution, machine learning is a method of showing the system the problem, giving it a desired goal or goals, and then leaving the actual problem solving up to the software. It just sort of fumbles forward until it comes up with a solution that reaches the goal as efficiently as it can. It might not be the best solution, but it will be fairly efficient, and all it will have cost is computer time.
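To make "fumbles forward" concrete, here is a minimal sketch of that kind of iterative search, a toy hill climber in Python. The goal, the mutation size, and the fitness function are all made up for illustration; real systems are far more elaborate, but the shape of the loop is the same: guess, nudge, keep what scored better, repeat.

```python
import random

def fitness(params):
    # Toy goal: get both values as close to a target as possible.
    # (A stand-in for whatever reward the real problem defines.)
    target = [0.7, -0.3]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

# Start from a random guess and fumble forward.
best = [random.uniform(-1, 1) for _ in range(2)]
best_score = fitness(best)

for _ in range(5000):
    # Nudge the current best solution slightly.
    candidate = [p + random.gauss(0, 0.05) for p in best]
    score = fitness(candidate)
    # Keep the nudge only if it scores better.
    if score > best_score:
        best, best_score = candidate, score
```

Nobody told the loop *how* to reach the target; it just kept whatever random change happened to help. That is the whole trick, scaled up enormously.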
If they were to read that last paragraph there would, no doubt, be a bunch of machine learning researchers tripping over themselves to tell me what I got wrong. There would also be another gang of researchers behind them eager to point out what the first group got wrong. It seems to be that sort of field. Everyone is pretty sure that it’s great, but none of them really know how it works. When they tell you they know how it works they are usually wrong. What they probably won’t say is that machine learning solutions are ‘smarter’ than humans.
Let me rephrase some of that. They know how machine learning works, like technically how it functions. Maybe it would be better to say that they disagree on how to make it work. Or how to make it work well. Those are really just nuances. Academic inconsistencies.
I used machine learning to teach a marble how to not fall off a track. Not how to get anywhere. Not what anything in its environment is. Nope. Just how to not fall. It doesn’t even avoid the edge of the track very well. It likes getting right up beside the edge and not falling off. Many hours of training over a few weeks, and it will try its damnedest to not fall off a track. At this point it usually succeeds.
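That edge-hugging behavior follows directly from what the reward actually says. Here is a hypothetical sketch in Python, not my actual Unity ML-Agents code; the function name, values, and `track_half_width` parameter are all invented for illustration:

```python
def step_reward(marble_x, track_half_width=1.0):
    """Toy per-step reward: the agent is only told not to fall.

    Nothing here rewards staying centered or reaching a goal,
    which is why a trained agent happily rides the very edge.
    """
    fell_off = abs(marble_x) > track_half_width
    if fell_off:
        return -1.0   # big penalty for falling; episode over
    return 0.01       # tiny reward just for surviving the step
```

A step spent millimeters from the edge pays exactly as well as one spent dead center, so the agent has no reason to prefer the middle. It learned precisely what it was paid to learn, and nothing more.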
I made a machine learning agent reproduce a certain behavior that I was looking for in a variety of situations, but I don’t know thing one about machine learning. I do know this. It’s not smarter than a human.
The human brain is a massively parallel, analog, electro-chemical, comparative decision making system that never, ever stops running. Sleeping, still running. Chemically unbalanced, still running. Physically compromised, still running. Nothing short of death stops an animal brain from running. What’s more, human societies have developed high density communication systems that transfer information from one individual’s brain to another. These communication systems developed over hundreds of thousands of years. The amount of information conveyed in a simple interaction between people is absolutely staggering, but of course we take it for granted, because we are humans and we are uniquely equipped to be able to decipher that much data presented in that specific way. Tone of voice, cadence of speech, flutters of the eyelids, physical gestures of all types. Communication as dense and varied as there are groups of people to engage in them. Nothing in the realm of machine learning systems even comes close.
A computer beat several top-level players at Go. That doesn’t prove that machines are smarter than humans. It proves that humans built a tool. If I use a wrench to turn a nut, that doesn’t mean that the wrench is better than my hand. It means that people have created a tool that solves the problem of enhancing grip and leverage. Wrenches are crap at shuffling cards. I can’t use a wrench to solve a Rubik’s Cube or type this post. It is a tool that enhances human ability. The machine that is good at playing Go is just that. A machine that is good at playing Go, because people wanted to make a tool that was good at playing Go. It can’t tell me if the milk in the back of my fridge has gone bad. That would require a different tool. Maybe machine learning could be used to make it. That would not and will not make that machine smarter than the human who wanted to know about the state of their milk.
Let me also be clear here: this is not because I think there is some intangible, fundamental superiority of humans over the machines they create. Not even close. This is just a matter of time. Human brains run constantly and have run constantly for hundreds of thousands of years. Animal brains for hundreds of millions before that. And those brains don’t run slower than computer circuits, just differently. All brains have done for over 500 million years is figure out how to solve problems. You could throw all the Nvidia GTX cards you want into the machine learning arena; they just can’t compete with that much iteration time.
I think what a lot of folks who tout the idea of a generalized AI that’s smarter than people forget is that people created Go. It was people who created a game with such a wide possibility space that they themselves couldn’t competently calculate it. They created a problem they couldn’t solve and then hammered away on it for a couple thousand years because it was fun.
This isn’t the sort of species that you just surpass because you built a machine that can do multiplication real good. When it comes to being clever, humans are certifiable badasses. Problem solving, iteration, intuitive lateral thinking. Really, nothing tops us. We’ve just been doing it longer.