Stop Killer Robots Now!

The Artificial Intelligence (A.I.) Blog

A.I. “experts,” crawling out of the woodwork, seem intent on driving us mad with silly, uninformed talk about A.I. Most of these so-called experts know little to nothing about A.I., and can’t properly explain anything about it.

Computers are essentially fast iterative and switching devices. They iterate, or repeat, a program — a set of machine instructions — and produce a result, or output. None of which has anything to do with “intelligence.” At times, a computer’s programming may give the appearance of intelligence, which is basically a trick.

So — somebody had to do this article. A long article, but certainly you don’t have to read it all at once… unless you can’t resist, of course. (Author rubs hands in glee at the thought.)

It doesn’t help that some people must want to be tricked. We see it with other things, also, the current “cryptocurrency” frenzy coming to mind.

So now, “Artificial Intelligence.”

Quick & Dirty Summary

  • A.I. as a tool of deception
  • Non-robotics A.I.
  • A.I. and robotics
  • Primitive and advanced robotics
  • State of the art, expanding the state of the art
  • Recent news and hype
  • Conclusions

“Artificial Intelligence” is the same as “run-proof nylon stockings,” or “shockproof watches.” You aren’t supposed to believe it. A better term is, obviously, Simulated Intelligence. But that is a more honest term, so we can’t have that. And still too kind a term, because there’s no “intelligence” in the machine that simulations are implemented on, no matter what kind of fancy dance we perform in the software.

To trigger the hormones for a cheap thrill, we might watch a “scary movie.” The same cheap thrill seems to be happening with these wild imaginings and freak-outs about “intelligent machines,” which means, inevitably, we’ll start thinking about robots acting all book smart, swaggering around, and pushing us off the sidewalk and things.

Remember, we can model mathematical equations on the computer, to be performed millions of times faster than we can manually. That is computers’ strength, not in “thinking.”

It’s shameful to cow people into assuming a computer can think, and into actually fearing the machine!

“Artificial Intelligence,” as a Tool of Deceivers

Deceivers would love to have an “all-wise, all-knowing” smart aleck “A.I.,” to tell us all what to do. It’s a ruse to create an “Electric Einstein” that will dictate to the unwashed masses what they are to think, believe, and act on.

Those deceivers are tripping all over themselves in their rush to concoct the narrative — but is it “A.I.,” or is it “robots” for people to go gaga over?


If someone must brag, they should brag when they’ve completed something, not before. Nevertheless, blustering and wide-eyed boy-scientists, “experts” and “futurists” foster and perpetuate myths:

Levandowski says that he envisions creating an artificially intelligent being that will literally be “a billion times smarter than the smartest human”…

“What is going to be created will effectively be a god,” he said. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

That “billion times smarterer…” bit ought to give pause. It couldn’t have been a reasoned calculation, based on facts, could it?

Frenzies based on bloated, pompous claims seem to forever plague our existence on this planet. The wishful thinking in that quote is concocted out of both sad self-deception and sorry self-promotion.

The mindset framing this type of thing is that intelligence is “emergent.” That is to say, it just “appears out of nowhere.” That if you add enough processors and memory, suddenly your “artificially intelligent god-being” appears spontaneously, by magic.

Ah, grand dreams, extracted from the routine and mundane operations of computer programming.

The fruitful quest for machine intelligence has to be based in good-will explorations: (1) into what we can make machines do within the bounds of reality; and, (2) into our own, human behavior. Which is to say, seeking insights, not confirmation of unfounded biases.



If even a chess program can seem like a malevolent opponent, that is on us, not the computer. There’s a word, anthropomorphism, for just that sort of thing.

It’s not sensible, but, unable to help ourselves, we may attribute a “personality” even to that winning chess program. That doesn’t make it so, of course. Chess grandmaster Spassky attributed an unexpected move his computer opponent made to “superior intelligence.” Meanwhile, the programmers found it was due to a programming bug, ha-ha.

And if someone programs and engineers an architecture that superficially “looks alive,” again, that’s on us, and shows there are things we don’t know about the engineering and programming, not that the mechanical beast has graduated to the world of the living.

Frauds Making Stuff Up

We don’t think of a computer as a “magic” box, which diminishes the accomplishment. We’re too accustomed to computers to feel any sense of awe, though they are a magnificent achievement. They’re too familiar. But they aren’t generally well understood. And where there is a lack of deeper understanding or familiarity, technologies are, as ever, exploited.

With this dopey “A.I.” farce, they’re lying through their teeth.

Here’s one of their proofs by assertion, trotted out as “evidence” that computers are creating their own languages. It represents true insanity at its finest.

Yes, it is every bit as stupid as it looks.

For the pathetic “rationale,” see this July 14, 2017, article by Mark Wilson:

A.I. Is Inventing Languages Humans Can’t Understand. Should We Stop It?

So they (computers) began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that’s been observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”


Yeah, people always go into the copy center and hand them a document.

“The the the the the the the the the the.”

“Oh, so you want 10 copies, then, Weirdo?”


A proper programmer — unlike “scientist” Batra there — would interpret the output in the screen shot for what it is: the result of bad coding. The delusion of these “computer researchers” is amazing, so twisted that they think anything they produce is as though from the hand of God, a miracle. It can’t be a bunch of gobbledygook; it must be that their awesomeness has touched the machine with the spark of life and spirit, and now it speaks, so cleverly that they, the gods themselves, cannot decipher it! They’re so smart, wondrous and esoteric, they’ve managed to outsmart themselves! (But deliberately, of course.)

To be clear, for the umpteenth time: A computer can only output the results of the instructions that a person puts into it. It is a machine. It cannot think, reason or be conscious, much like the writers of these articles.

It’s a given that, as always, they want to gauge the gullibility of the public with this stuff. But they’re also setting up to exploit something like that old WarGames movie scenario, where the computer “takes over,” so they can blame any stunts they want to pull, like economic failure, a war, or stealing from your bank account, on “rogue computer intelligence taking over.”

As well, this BS is probably to garner hype for an eventual product coming down the pike, as they sold washing machines with “fuzzy logic” for a while. Remember, they are experts — at applying their Bee-Essing to anything that can scrounge up a buck for them. Perhaps a new movie with Morgan Freeman as Superman, Ben Afflict as God, and the great Steryl Creep as his hard-nosed, hard-edged boss: “Brave Heroes Bravely Battling the Killing A.I. Killer Robot that Kills Mankind.”

As well, this nonsense is a very important tool to sell the public on the infallibility of machine logic and reasoning, so they can make proclamations, similar to the way they set up Einstein and Hawking as infallible sages.

This “man behind the curtain” nonsense is used all the time.

So, of course, that’s on their checklist: Create a science-provided “all-knowing” entity to lord it over the people with proclamations and exhortations, and herd the gullible masses in an “approved” direction.

Self-contradictory, to be sure. Is A.I. savior or killer?

Yes, misrepresenting “A.I.” is another attempt to tap people’s gullibility and exploit misunderstanding of the concepts of computers, intelligence, and machine capabilities. With clever programming, computers are made to seem more than they are. Don’t start thinking that your computer is going to sprout legs and start attacking you or something. And don’t forget, this “new” A.I. scam has been used to scare people in the past, as in 2001: A Space Odyssey, The Forbin Project, WarGames, and of course The Terminator and I, Robot. Almost forgot: Alien had a sneaky robot, Ash, not to mention Westworld… This tactic is similar to the scare-tactic of “evil aliens,” as in Alien, the old movie Earth Versus the Flying Saucers, Predator, The Thing, and The Invaders…

Selling everything as worse than it actually turns out to be is a deliberate way to cripple thought, to frighten almost everyone into exhaustion and dependency.

Classifying A.I.: The Wrong Way

In “scientific” efforts to create “thinking machines,” the folly is, they have to assume consciousness is a physical attribute, when there is no justification for that assumption.

One list of the popular classifications of A.I. is from an article by Arend Hintze, a Science and Engineering professor at Michigan State, on the site The Conversation.

1. Reactive machines, like chess-playing

2. Limited memory, machines that can “look into the past,” as used in self-driving cars

3. Theory of mind, machines that can form representations about the world, and also about other agents or entities in the world

4. Self-awareness: “The final step of A.I. development is to build systems that can form representations about themselves.”

As a classification, it is somewhat lacking. Of his four items, as Hintze admits, the last two are pure fiction. Fodder for funding, perhaps, but fantasy. The second item refers to machine memory “looking into the past,” when there is no such thing, limited or otherwise, not in the biological sense. They’re still just working with plain old computer memory and computations based on a computerized model of the road and other vehicles — no different from game programming.

It’s as if he had set out to design a taxonomy for classifying mankind, and came up with, “In the first category, there’s your guys who can do some things. Second, there’s your guys who can do some other things. Then you’ve got your guys like Batman, the Hulk, you know, your super-powered guys. And your mutants, like the X-Men. Fourth, finally, you’ve got your wizards, your hobbits, your sorcerers, other magic-type guys.”

Classifying A.I.: A Better Approach

Here’s a decent real-world summary of A.I.:

  • data analysis — using networks of computers programmed with statistical rules to analyze data networks and uncover patterns and therefore relationships
  • modeling — trying to duplicate or simulate various human functional systems, like vision, which merges computing with the mechanical, and is a part of robotics
  • intelligent gaming — using the computer to do games or simulations, and generally, to simulate a human opponent in some way — as in a chess-playing program
  • simulated intelligence — using the machine as a human stand-in
  • expert systems — using computer-based inference rules applied to stored facts and rules in a knowledge base. This can be used to calculate “answers,” based on prepared questions and assumed facts. These can be implemented with things like Bayesian network programs, that weigh probabilities and reveal causal relationships, and neural networks, that simulate “brain neurons” to create a pattern recognition system, useful for spotting subtle changes.
  • primitive robotics — (repetitive tasks) — robots like those welding cars in an auto plant, that perform a repetitive task under controlled conditions
  • advanced robotics — (Actionist, Subsumption (trainable robots)) — The user can use simple techniques to “teach” these devices a (simple and restricted) new task, without the requirement of being a programmer

Practical Developments in A.I.

There is good research going on, producing solid results, but with limited use. The problem arises when the over-anxious want too much, too soon.

There’s nothing, now or down the pike, that will approach truly “intelligent” behavior. There is continuous progress in making more useful machines, although they don’t “think.” On the down side, there are lots of tricks to give the appearance of thought, and that may muddle our interpretations when it comes to identifying true breakthroughs and advances.

You don’t “set out” to “create” or augment intelligence; you research areas of interest and experiment, which may or may not lead to “breakthroughs” in the field.

Even if there aren’t any mad breakthroughs, proper study will certainly increase the knowledge base in that particular area.

Review of A.I. Disciplines

Data Analysis

What does it mean, “to use networks of computers programmed with statistical rules to analyze data networks and uncover patterns/relationships”? It means nothing more than intelligence gathering, i.e., spying. You’ve heard the intelligence-agency term “chatter,” describing a general flow of data/information, and this is just a way of analyzing that “chatter,” which otherwise would be pretty much a random mix of data transfers, conversations, e-mails, and such.

A simple example, one that glosses over the subtleties but is good enough: Joe’s Car Wash puts an ad on TV, and then Joe monitors Google Analytics to see whether it brings any more searches and “hits” for Joe’s website.

Of course a full-blown analysis of this nature is done with computer programs for data gathering and statistical analysis, data sniffers, packet sniffers, worms and virus programs, and probably a bunch of other stuff with a bunch of other fancy names.


Modeling

This looks to model comparable human capabilities/systems, such as, for example, human visual perception.

This is research into duplicating biological functions. For example, setting up cameras with computers to try to simulate what goes on in vision. It involves the integration of mechanical and electronic systems.

But it isn’t really a duplication, per se, merely an attempt to isolate qualities and characteristics of the senses to make them useful in robotics. For example, to make a device that identifies and responds to a particular person by sight. Research in this topic is geared toward engineering solutions rather than toward gaining insight into human biology.

Intelligent Gaming


Chess-playing computer programs, which use brute-force methods with some refinements, may appear intelligent but, notably, only perform a repetitive task. For example, a chess engine will be programmed to examine all possible moves and to pick the best one based on a weighting system. But that method isn’t flexible: it can’t be used to play another game, like blackjack or Go.

There is no “machine intelligence” involved, but there is a lot of skilled programming involved in creating these chess games. In a way, it’s a “trick.” But it is still a powerful method, as anyone who has ever played chess against a good program knows, and one that shouldn’t be discounted. Modeling a chess game and assessing chess positions is a bit of an art form as well.

It’s not hard to understand the process, just a lot of work and tricky to implement.

There are two main components to all chess programs: evaluation of the quality, or desirability, of the current position of the pieces; and the move simulator, which generates and tries the possible moves.

Chess programs algorithmically/heuristically apply a method, and check for its efficacy, picking the best move after selecting from hundreds or thousands of moves. It’s like drawing the current position of the chessboard, then testing, move-by-move, each possible move you can make, and each counter-move your opponent can make.

You could play chess this way yourself, with a pad of paper and pencil; however, it would not be practical, since the computer can perform these calculations millions of times (six or more orders of magnitude) faster.

But in fact, in creating a chess-playing program, that is what programmers are doing: figuring out how they would play chess if they had to write it out by hand on a pad of paper. To try it ourselves, we would start the process by writing down, “Pawn: Value: 1, Position: A2,” for example, on the pad, and repeating for every other piece on the board, according to name, assigned value, and position.

Then we would “test” or “experiment,” by moving a piece to create a new configuration, then “recalculating” the board, making an assessment of how good that position is, based on the possible moves the opponent can now make.

So, the computer program assigns a numeric value to each of the possible positions it tests — a piece attacking the center squares of the board, is in a more “valuable” position, for example. These calculations cascade, as move and counter-move, by the opponent, are checked. Then the subsequent move, and that one’s counter, and so on. After computing to the predetermined number of moves, or “depth,” the computer will simply select the move that leads to the highest accumulated score, and therefore the likely best possible outcome. The depth, or number of moves the computer “looks ahead,” will be set dependent on the expected speed of the machine and efficiency of the program.
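To make that cascade concrete, here is a toy sketch of the scoring-and-look-ahead idea (a minimax search) in Python. The “game” is a made-up pick-a-number game rather than chess, and every name below is invented for illustration; a real engine applies the same recursion to board positions, with a far richer, hand-tuned evaluation function.

```python
# Toy minimax sketch of the cascade described above, applied to a
# made-up "pick a number" game instead of chess. Every name is invented.

def evaluate(position):
    # Stand-in for the scoring rules: the "position" is just a number,
    # and its desirability is the number itself.
    return position

def moves(position):
    # Stand-in move generator: from any position you may add 1, 2 or 3.
    return [position + step for step in (1, 2, 3)]

def minimax(position, depth, maximizing):
    # At the depth limit, fall back on the static evaluation, exactly
    # as a chess engine does when it stops "looking ahead."
    if depth == 0:
        return evaluate(position)
    if maximizing:
        return max(minimax(m, depth - 1, False) for m in moves(position))
    return min(minimax(m, depth - 1, True) for m in moves(position))

def best_move(position, depth):
    # Pick the move whose cascade of replies yields the highest score,
    # assuming the opponent always answers with the worst reply for us.
    return max(moves(position),
               key=lambda m: minimax(m, depth - 1, False))
```

No intelligence anywhere in sight: just a mechanical tree of move, counter-move, and arithmetic comparison, cut off at a fixed depth.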

Note that the computer program does face an exponential rise in the number of positions it has to evaluate the deeper its checking algorithm runs, but improvements in processing speed, program efficiency, and pruning techniques have made this point largely moot.

Via long-term effort, this method of programming for chess games has proven highly effective, and the computer can seem a bedeviling, malicious opponent. In fact, even twenty years ago, the IBM “Deep Blue” program was beating the chess grand masters. And you can get programs for your laptop now that whip “Deep Blue.” At this stage, the best chess programs far outstrip the best-ever human players.

The better games have used the input of grand masters who assess whether a given configuration of pieces is more favorable than another.

Note that the computer can’t make that assessment on its own, like a living chess player can. The programmer has to tell it how good a given position is, based on empirical rules. Unlike a human player, the computer has literally no knowledge of the game, but is merely performing arithmetic operations in choosing a “best move.”

The computer can do all this tedious business without complaint, and very rapidly.


There are, of course, many other games and simulations run on computers. Again, though, it is all modeling and mathematical manipulation that can sometimes seem “intelligent,” while relying on the usual computing principles and techniques.

“Self-driving cars,” starting to become newsworthy, will use many of the techniques of gaming, combined with lessons from robotics.

“Simulated Intelligence”

This type of program is used to mimic humans.


The first well-known example of a simulated intelligence is ELIZA, really a “parlor trick,” which simulated a human psychiatrist by feeding back canned dialog in response to humans in a typed “conversation.”
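For a sense of how thin the trick was, here is a minimal sketch of the ELIZA idea: canned responses keyed to patterns in the typed input, with a catch-all when nothing matches. The patterns and replies below are invented; the original 1966 script was richer, but the principle is the same.

```python
import re

# A minimal sketch of the ELIZA "parlor trick": canned replies keyed to
# patterns in the user's input, plus a catch-all. Rules are invented here.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmother\b|\bfather\b", "Tell me more about your family."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Echo back captured fragments of the user's own words.
            return template.format(*match.groups())
    return "Please, go on."  # the canned fallback, the heart of the trick
```

A lookup table with some string substitution, and people poured their hearts out to it.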


Nothing new here; we’ve all had the displeasure of interacting with automated telephone support systems that attempt to simulate a human operator.


An advanced simulation is IBM’s “Watson,” playing as a participant — and winning — on the quiz show Jeopardy!

Watson is a sophisticated, fast, dedicated question-answering (QA) computer.

QA is a computer science discipline dealing with information retrieval and natural language processing. Watson parses sentences, using multiple techniques to analyze them, and returns statistically related phrases. It uses massively parallel processing of multiple language-analysis algorithms to generate its results, rating them on how many of the algorithms returned the same result.

Watson is a great achievement for a specific task, but it doesn’t “think,” of course. It applies algorithms (strategies), translated into computer code. The strategies employed are special rules for language analysis.
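That “rate by agreement” step can be sketched crudely: run many analyzers, collect one candidate answer from each, and prefer the answer most of them agree on. This shows only the voting principle, not IBM’s actual pipeline, and the names here are made up.

```python
from collections import Counter

# Toy sketch of "many algorithms, rate by agreement": each hypothetical
# analyzer contributes one candidate answer; the winner is the answer
# most analyzers agree on, with agreement reported as a confidence.
def vote(candidate_answers):
    tally = Counter(candidate_answers)
    answer, count = tally.most_common(1)[0]
    confidence = count / len(candidate_answers)
    return answer, confidence
```

Again, arithmetic and counting, dressed up as judgment.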


But here’s an odd item: We can’t discount that the Jeopardy! tournament was a sham, a fraud.

Forbes spilled something when they told us that Watson hasn’t progressed much from that 2011 triumph, in the 2017 article, Is IBM Watson a ‘Joke?’

Watson customers must hire teams of expert consultants to prepare the data sets, a time-consuming and extraordinarily expensive process.

Whoa, whoa, whoa, there… How does that jibe with the representation of Watson as an autonomous player on a panel of Jeopardy! champions? How was Watson able to “play” the game if it wasn’t specifically configured for Jeopardy!? That is, if it wasn’t specifically programmed with the data set of the Jeopardy! questions and answers? And if it was programmed with the answers, that’s called cheating, in anyone’s book.


Watson wasn’t named after Sherlock Holmes’ associate, Dr. Watson, but the first CEO of IBM. But it nonetheless seems fictional.

If they come out and admit, in print, that it doesn’t do what it is supposed to, there’s no alternative conclusion.

Also suspicious: the Watson represented on the quiz show would have been an ideal candidate for the basis of a new Internet search engine. With its speed, insight, and ability to draw conclusions, it would be a strong contender to challenge Google for dominance!

Yet there are no meaningful applications for Watson since that 2011 contest.

Watson would be puzzling. That is, if we couldn’t draw conclusions at face value: the Watson spectacle was a “marketing event.” IBM sank a lot of money into researching an “A.I.” that never amounted to much, so, to recover its losses, IBM decided to push it on gullible businesses, exploiting Jeopardy! for widespread publicity.

…IBM fell into the trap of over-promising and under-delivering. “IBM claimed in 2013 that ‘a new era of computing has emerged’ and gave Forbes the impression that Watson ‘now tackles clinical trials’ and would be in use with patients in just a matter of months…”

…the current version of Watson should be orders of magnitude more advanced than it is now.

Well, what do you know? It is a joke.

Expert Systems

These use computer-based inference rules on gathered facts in a knowledge base to answer posed questions. These can be implemented with things like Bayesian networks, that weigh probabilities and reveal causal relationships, and neural networks.


What the Bayesian network amounts to is a graphical model of probabilistic relationships among multiple variables of interest. Various weighted probabilities are fed to the machine, to investigate causal relationships with many variables, using this system.

In plain words, it is good, for example, for things like medical diagnoses, or determining the influence of some policy and whether it is meeting its goals. Bayesian analysis is dependent upon gathered information, but it is also somewhat resistant to “bad data” and gaps in data.

Some of the math behind it starts with simple statistics:

For any two events, A and B,

p(B|A) = p(A|B) x p(B) / p(A)

…where “p(A)” is “the probability of A,” and “p(A|B)” is “the probability of A, given that B has occurred”.

An example:

Suppose it rains in Spain 20% of the time and that it is cloudy 40% of the time (sometimes it is cloudy without rain, but if it is raining, it is cloudy 100% of the time). The chances of rain(R), given that it is just cloudy(C), are calculated as follows: p(R|C) = p(C|R) x p(R) / p(C) = 1.0 x 0.2 / 0.4 = 0.5 (50%) chance of rain if it is cloudy.
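The Spain example can be checked with a few lines of Python, plugging the numbers straight into Bayes’ rule:

```python
# The Spain example, computed directly from Bayes' rule:
# p(R|C) = p(C|R) * p(R) / p(C)
def bayes(p_c_given_r, p_r, p_c):
    return p_c_given_r * p_r / p_c

# Always cloudy when raining (1.0), rain 20% of the time, cloudy 40%.
p_rain_given_cloudy = bayes(1.0, 0.2, 0.4)  # 0.5, i.e. a 50% chance
```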

An equation for three variables:

p(A,B,C) = p(A|B,C) x p(B,C)

…for any three events, where “p(A,B,C)” is “the probability of A, B and C,” and “p(A|B,C)” is “the probability of A, given B and C have occurred”

There is an equation that allows us to calculate probabilities for arbitrary numbers of events,

p(A1,A2… An) = p(A1)p(A2|A1)… p(An|A1,A2… An-1)

With more events, the terms multiply and the calculation gets complicated (picture this equation with a hundred events, n = 100), but fortunately the computer is ideal for performing repetitive computations. Again, you could do the calculations manually, but it would be unwieldy, and the sheer slowness of manual operations wouldn’t make for an efficient predictive network. In much the same way, you could walk across the country, but it’d be much more sensible to drive.

This method allows calculation of all sorts of things — like, if the grass is wet, is it because it was raining or because the sprinkler was on?
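The wet-grass question can be sketched by brute-force enumeration over a tiny made-up network. The probability numbers below are invented purely for illustration; the point is the mechanics of summing joint probabilities, not the values.

```python
from itertools import product

# Tiny made-up network: Rain and Sprinkler independently cause WetGrass.
# All probability values here are invented for illustration only.
P_RAIN = 0.2
P_SPRINKLER = 0.3

def p_wet(rain, sprinkler):
    # Chance the grass is wet, given which causes are active.
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.9
    return 0.0

def joint(rain, sprinkler, wet):
    # Joint probability of one full assignment of all three variables.
    p = (P_RAIN if rain else 1 - P_RAIN) \
      * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
    return p * (p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler))

# p(Rain | Wet): sum the rain-and-wet cases, divide by all wet cases.
p_wet_total = sum(joint(r, s, True) for r, s in product([True, False], repeat=2))
p_rain_given_wet = sum(joint(True, s, True) for s in [True, False]) / p_wet_total
```

Once more, nothing but repetitive application of formulas, which is precisely what the machine is for.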

More practical applications include medical diagnosis, diagnosing mechanical systems and factories, predicting sports matches, and various uses in computer science, among others.

It’s all somewhat dependent on the structure of your net, of course: the elements that go into making it. For example, the grass may be wet because of the dew, or because the neighbor’s dog wandered by, “marking his territory,” so you can instantly see the limitations of the Bayesian method. It may bite you if you failed to anticipate (and program into the computer) every possibility before drawing conclusions.

Once again, this method is an exploit of what the computer is for, repetitive use of formulas and calculations. It also emphasizes that the computer is not for “making better brains,” but for doing the things brains can’t do, at a practical rate of speed or efficiency.


Deep Learning is a discipline that studies the construction of machine learning models that learn a hierarchical representation of data.

Neural networks are a class of machine learning algorithms. So, neural networks are one practical realization of Deep Learning, in computer hardware and software.

(By the way, were you aware that Al Gore thinks he invented algorithms, because they’re named after him?)

Now, a big question is: What are the differences between a neural network computer program, and a Bayesian network computer program?

The two perform differently and solve different problems. Bayesian networks deal with statistics, assigned probabilities, and causal relationships, arranged as networks. Neural networks simulate a theoretical model of how the human brain might operate: “nodes,” computer data structures meant to model brain neurons, are designed to accept multiple “inputs” and provide a single “output.”

A Bayesian network is a “generative probabilistic model of the relationship between multiple random variables. In generating all values for a situation/phenomenon, it forms a joint probability distribution between multiple random variables. It typically requires some priors or assumptions about the structure of the joint distribution.”

Note that while it could be used for classification, it normally isn’t.

“Deep learning is typically used for classification: supervised learning. It is a discriminative model. It does not try to model/estimate the joint probability distribution between multiple random variables. It typically does not require you to specify priors or assumptions about the structure of the joint distribution.”

From Wiki:

In probability and statistics, a generative model is a model for generating all values for a phenomenon, both those that can be observed in the world and “target” variables that can only be computed from those observed.

By contrast, discriminative/conditional models provide a model only for the target variable(s), generating them by analyzing the observed variables. In simple terms, discriminative models infer outputs based on inputs, while generative models generate both inputs and outputs, typically given some hidden parameters.

Generative models are used in machine learning for either modeling data directly (i.e., modeling observations drawn from a probability density function), or as an intermediate step to forming a conditional probability density function.

Generative models are typically probabilistic, specifying a joint probability distribution over observation and target (label) values. A conditional distribution can be formed from a generative model through Bayes’ rule.

But there’s no need to get bogged down in all that fancy talk. The big difference here is partly in the math: neural networks act like big mathematical matrices, from matrix/linear algebra, while Bayesian networks work with probabilities, to provide statistical guidance as to the mathematical odds of something of interest.

It’s as though the two are related but, in some ways, mutually inverse.

Neural networks are things that can, say, be arranged to determine whether a number on a display is a 1, 2, 3, and so on, by selecting the most probable match.

Banks use them for noting small changes in the number of transactions or in people’s bank balances, which may indicate customer dissatisfaction.

Here’s an example of the math used to produce the results of “neural networks.”

If there’s any “magic,” it is in the math, you might say, and in doing the work to translate it to computer code.

\[ a^{l}_j = \sigma\left( \sum_k w^{l}_{jk} a^{l-1}_k + b^l_j \right) \]

The meat of it is the manipulation of groups of numbers, or arrays (something similar to spreadsheets of number groups), under certain mathematical rules.
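That equation — the activation of neuron j in layer l as the sigmoid of a weighted sum plus a bias — can be written out in a few lines of plain Python. The weights, biases, and inputs in the example are arbitrary placeholders:

```python
import math

# One feed-forward layer, implementing the equation above with plain
# lists: each output is sigmoid(sum of weighted inputs + bias).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(weights, biases, inputs):
    # weights[j][k] multiplies input k for output neuron j.
    return [sigmoid(sum(w * a for w, a in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Arbitrary placeholder numbers, purely to show the mechanics:
out = layer([[1.0, -1.0]], [0.0], [2.0, 2.0])  # weighted sum is 0
```

Multiply, add, squash, repeat. That’s the “magic.”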


Public Perception: Fake Robots

The way the words “robotics,” “A.I.,” and other specialty technical terms are thrown around, without any breakdown of what is being talked about, is not instructive. Besides, no insight is provided, for the most part, into any supposed “breakthroughs” that would enable the type of world that “visionary futurists” jabber about.

People spread fear about robotics taking jobs, like in factories, taking over for truck drivers, and even, eventually, substituting for doctors, dentists, and other specialties.

Well, as far as getting rid of truckers goes, why not? It’s a horrible, stressful job. To be clear, we should keep the truckers but eliminate the stressful aspects of the job! Someone will still be needed, at least for the start and end legs of journeys, if only to load and unload, keep the paperwork together, and guard and monitor, whether the truck is an “auto-truck” or a manual one.

It’s similar with cars: People doing the driving should and can continue, if that is desired, but the option will make driving seem progressively more of a chore, and car autonomy will eventually be recognized as a boon. As long as it’s done right.


Perhaps the Saudis aren’t this gullible and this was just a publicity stunt, but supposedly a “robot” was “granted citizenship” in Saudi Arabia.

There are those who buy in to the illusion. What a great illustration of the saying, “The mind has no firewall.” Do some people actually believe, because someone comes out and says it, that something like a mannequin can be “given citizenship,” and that this makes it “alive,” or at least “approaching alive”? As if giving citizenship to a Barbie doll somehow confers authenticity.

Because the device simulates facial expressions while talking, people think they “see into her soul.” This misplaced sentimentality and empathy points to something scary in people’s psyches, in their longing for this sort of surrogate. You have to wonder how long it will take until they have to “discipline” their new robotic best friends, or “educate them in the ways of love.”

To understand how this “robotic citizen” works, consider that it’s no different from the animatronic Abe Lincoln at Disneyland. And with all the people who have been to Disneyland, how is anyone fooled by this con? All of the robot’s “expressions,” which seem to befuddle and even move people, are calculated. The robot isn’t actually feeling the emotion of happiness when it smiles, For Bugger’s Sake! Which is to say, it isn’t actually smiling.

The robot is programmed, which is to say, “running a script,” to make certain “expressions,” under certain conditions.

To actually make robots at the level of, say, “The Robot” from Lost in Space, or even “R2-D2” from Star Wars, would put us at the level of gods; it’s that difficult a task, and it cannot be accomplished via the physical means employed now. That is to say, we need to know more about ourselves, the metaphysical, and the universe just to start to strive for, or even countenance, autonomous robots, let alone “Mr. Data,” “Bishop” from Aliens, the “T-800” from The Terminator, and so on.

Something noticeable in the popular literature is that writers can’t write on these issues of A.I., partly because it’s painfully obvious they don’t have the background, and partly because they are consumed by the sensational aspects necessary to get “eyeballs” on their stories.

“It might sound easy to get machines to learn, adopt and retain our goals, but these are all very tough problems. First of all, if you take a self-driving taxi and tell it in the future to take you to the airport as fast as possible and then you get there covered in vomit and chased by helicopters and you say, “No, no, no! That’s not what I wanted!” and it replies, “That is exactly what you asked for,” then you’ve appreciated how hard it is to get a machine to understand your goals, your actual goals.”

This is a whole different level of misunderstanding, on Max Tegmark’s part, one that ignores that a human can be just as literalistic. If you had an automatic taxi that behaved in this manner, believe it or not, it would be intelligent. Not that this exchange could happen, but look how the self-driving taxi resolved the poorly specified complaint (the passenger only exclaimed, “No, no, no! That’s not what I wanted!”): it first understood that the passenger was complaining about its driving, then acted in context by correctly contradicting the passenger with a pointed answer. It performed as requested, then performed quality reasoning in its own defense — something far beyond anything we have or contemplate today.

It would mean a change to everything. It’s amazing that people can blithely smear the realms of reality and fantasy so readily.

In a Blade Runner 2049 review, the critic mentioned something about the intelligence of the “Replicants.” If there were something like Replicants, we would have a dilemma, because they would literally be a form of life, unlike anything we could or should expect. They would be live beings, and we’d have to revise all of science and philosophy.

Real Robotics

To draw a distinction between computer programs and robots: robots are, of course, mechanical devices guided and controlled by electronic systems, i.e., logic units or computers. The discipline should be divided between primitive robots, associated with “grunt work,” and the more modern devices, which seek to emulate human abilities like learning and reasoning.

Primitive Robotics

It’s a hazy distinction: when does a mechanical device become a robot? Is an automatic welding rig, for example, a robot? What about automated milling machines?

It’s safe to say, if it’s an automatic device performing a constant, repetitive task with simple constraints and no provision for “learning” new abilities, it’s a primitive robot. It differs from a machine because of its computer controls, of course.

The main provision seems to be that primitive robots are tools enhanced with controlled positioning and activation. They utilize solid mechanical engineering and control systems, but are familiar, not the topic of great fanfare.

That doesn’t mean there can’t be great progress in the field, and simple automation of more and more devices. It’s progress, but mostly unheralded progress.

Actionist Robotics

The actionist approach to robotics is grounded in actions and behaviors. It seeks to build robots that exhibit complex behaviors, and to do so by gradually modifying and refining their actions via sensory-motor links.

Advanced robots — computers plus active mechanical systems — are certainly closer to creating something that looks intelligent than pure computer-driven techniques. Perhaps one day, something that fools almost everyone will come out of research in this field.


With subsumption, we have a novel system designed to sidestep the problems associated with so-called “intelligent” systems that depend on a programmed model of the world, and symbolic manipulation.

The failed symbolic model is probably based on one scientist’s observation that, “No calculation is possible without symbolic representation.” That notion, though, assumes that intelligence IS calculation.

Subsumption successfully tackled important areas of A.I. that conventional approaches had failed at, especially what we might deem, “sensible” activity in real-world settings.

With its hierarchical approach, this method allows one to add behavior “layers” relatively easily, but with each layer, you also layer on the complication, and, if you haven’t selected your hierarchy carefully, or want to change that hierarchy, it can be a pain in the robot butt.

At first, it seemed like subsumption was leading toward — was the only architecture that could lead toward — something approaching intelligence, what with that initial success, but it, too, has its limitations.

Subsumption can be explained easily enough. It is a technique of layering functionality and behaviors. If you create a four-legged robot, as an exercise, the first level of subsumption might be to position all four legs in a “neutral” position. The second, to stand and balance itself upright.

Here’s a great illustration of that. It’s obvious that Boston Dynamics is using subsumption-style programming — it helps that B.D. originated at MIT, where Rodney Brooks invented subsumption.

Note how the “dog” reacts to the kick by scrambling to maintain a balanced, upright position, and refines its steps to smaller, more precise, ones as it gets itself stabilized.

Nice work has gone into that robot. You can also note it has a saccading vision system integrated where the symbolic “head” is (mimicking the vibratory, “saccading” movement of the human eye that enables vision).

“Simple” balancing was one of the biggest challenges in robotics, and subsumption solved it. Beyond that, note something important: your required functionality will tell you what sensors and actuators you need to achieve your goals. This is something subsumption made much easier, by providing a coherent, sensible method of integrating new abilities.

For the functionality that allows the robot to stand, you need leg position sensors, motors and actuators, and a balance sensor, in addition to your CPU, legs, body, and power source, and you will need to integrate these, via interfaces, with your processor. Again, your system’s behavior is programmed, but it is so structured that you’re closer to a simulation of something lifelike.

Next, you’ll want your robot to explore, but that behavior will be subsumed by your collision/collision avoidance behavior. So you need something like whisker or bumper sensors to determine when the unit makes contact with an object. You’ve probably seen a Roomba, a small automated floor and carpet sweeping device — it is a good example of this architecture in action.

Now you can keep adding useful behaviors, but all of them must cascade. Your balancing system is a lower level (simpler), but higher priority: if your robot falls over, its immediate task is not to explore, but to right itself.
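A minimal sketch of that cascade, in Python, may make it concrete. The behavior names, sensor flags, and priority ordering below are illustrative assumptions of mine, not any actual robot’s control code:

```python
# Minimal subsumption-style arbiter: each layer checks its trigger,
# and the highest-priority triggered layer suppresses those below it.

def balance(state):
    # Highest priority: if the robot has tipped over, righting itself
    # subsumes everything else.
    return "right_self" if state["tipped"] else None

def avoid(state):
    # Mid priority: a whisker/bumper contact overrides exploration.
    return "back_and_turn" if state["bumper_pressed"] else None

def explore(state):
    # Lowest priority: the default wandering behavior.
    return "wander"

LAYERS = [balance, avoid, explore]  # ordered highest priority first

def arbitrate(state):
    for layer in LAYERS:
        action = layer(state)
        if action is not None:  # this layer subsumes everything below it
            return action

print(arbitrate({"tipped": False, "bumper_pressed": True}))   # back_and_turn
print(arbitrate({"tipped": True, "bumper_pressed": True}))    # right_self
print(arbitrate({"tipped": False, "bumper_pressed": False}))  # wander
```

Note how the list order encodes the hierarchy: balance is the simplest behavior, yet it sits at the top and suppresses everything beneath it whenever it triggers.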

Well, this turns out to be a highly successful strategy, as we see with Boston Dynamics’ robots, and even with the consumer product, the Roomba, which functions successfully in the real world, avoiding havoc, for continuous periods, unless you’re Jesse Pinkman’s Roomba, of course.

In a Roomba, you would incorporate a “charger-seeking” behavior, for when the robot gets low on juice. This would depend on a mapping/map-reading behavior to function, of course.

Well, as you might imagine, the complications here can become endless — deciding on priorities, and how to sensibly implement behaviors within the constraints of the mechanism. Still, as long as you keep your hierarchies in mind, it is possible to produce a successful system — much more satisfactory than with the old methods of trying to “model” behaviors ahead of time, and inevitably failing to anticipate all scenarios your robot might encounter.

Your subsumption robot might even “do the unexpected,” but that’s more because of the limitations of our imagination, than any “intelligence” on the machine’s part. In fact, we want our system to “do the unexpected,” because that is what saves us from the problems of the prior, conventional systems.

Note that just the propensity to remain balanced gives the robot some lifelike qualities.

When you see a robot walking or articulating like a dog, the empathy and curiosity it engenders gives you the “feels,” but that is, of course, only a false perception; it’s nothing on the part of the robot. Man has a weakness for anthropomorphism, so why should this be any different?

Could a subsumption architecture be built with a complexity so great that it does exhibit intelligent behavior?

Again, it is already somewhat intelligent in its current state, but, even with more complexity, it would still lack certain qualities, abilities, and characteristics:

  • memory
  • intuition (really an aspect of memory)
  • invention
  • emotion (not that emotionality is usually all that desirable, but at times it is)

It would, on the other hand, have:

  • apparent reasoning
  • apparent curiosity
  • apparent industriousness and persistence/stubbornness

So subsumption does seem closest to producing something that simulates smarts — in fact, it really IS smarts, at least one aspect thereof. In fact, if there’s a “growth industry” for A.I., subsumption would be it.

So, if you want to improve A.I., get in on the ground floor and figure out ways to improve subsumption. It’s a field ripe for innovation. For one, you could figure out how to make the levels “flexible,” so the bottom levels are interchangeable when appropriate. One of the next advances will be a way to “cycle” the priorities so that a base-level behavior can itself be “subsumed” under certain conditions — or the modelers will need to find deeper “base” levels.


Those robots that are not full mimics, basically “not deceptive enough,” are thought of as “creepy.” Note that most people wouldn’t categorize the Terminator T-800 robot as “creepy.” Menacing, yes, but not creepy, at least while it still wears its skin, because it is able to perform a full facsimile of human behaviors and emotions.

Some are a little bit creepy.

State of the Art

As mechanical devices become more advanced, the state of the art of A.I. will advance with them. Swimming robots, for instance, are certainly beyond our current reach, unless you literally build a type of submarine… which probably wouldn’t be in the spirit of what we were thinking.

But, we are progressing, and what we do have now is the result of a lot of clever engineering — but still nothing like that represented in Science Fiction.

All Science Fiction that features robots, androids, replicants, and automatons pulls a big trick. It misrepresents robots as having all the traits of humans (though sometimes their emotions are disguised), and almost always presents them as superior to humans in every way.

In the real world, good engineering and creative solutions, like Bayesian implementations on computers, exploit the machine to do the things it can do better than humans: repetitive calculations at high speed. Machines don’t approach, simulate, or even attempt many of the things that humans can do. In fact, the programmers don’t even recognize humans’ strengths.
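To make “repetitive calculations at high speed” concrete, here is a single Bayesian update in Python. The scenario and numbers are invented for illustration; the computer’s only advantage is that it can grind through millions of such updates without tiring:

```python
# One Bayesian update, done "by hand" in code. Hypothetical numbers:
# a part fails 2% of the time, and a test flags 95% of bad parts
# but also (falsely) 10% of good ones.
p_bad = 0.02
p_flag_given_bad = 0.95
p_flag_given_good = 0.10

# Total probability that the test flags a part:
p_flag = p_flag_given_bad * p_bad + p_flag_given_good * (1 - p_bad)

# Bayes' rule: probability the part is actually bad, given a flag.
p_bad_given_flag = p_flag_given_bad * p_bad / p_flag

print(round(p_bad_given_flag, 3))  # 0.162
```

Nothing here is beyond pencil and paper; it is three multiplications, an addition, and a division, repeated quickly.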

Memory, for example, is nothing like computer memory, though we foolishly use the same terminology. Researchers have thrown up their hands — they are not even close to an explanation of human memory.

In fact, the inability to “explain” memory is a major stumbling block — and apparently unrecognized problem — of science. They created the basis for their own failure, of course, because they set out with unsupported presuppositions — to wit, that it is like computer memory. That’s their model for investigation. The model isn’t reasonable, and researchers with that notion are not going to succeed.

Instead of setting out to investigate, to ask humbly, “What is this factor that allows recognition and reflex, that characteristic whereby a stimulus triggers an image or thought? And is ‘memory’ one factor, or a whole bunch of abilities?”, these researchers blunder in aimlessly and recklessly with preconceptions, assumptions, assertions, and arrogance. Recall that, when we assume, we make an ass out of u and me.

Just the fact that we have such flawed memories — readily forgetting things (like math equations), forgetting birthdays and anniversaries, even at times forgetting that we’re holding in our hand the very thing we’re searching for — should indicate that the computer memory model is a non-starter.

And then, we sometimes remember obscure things from the past, out of the blue. It shows how different man’s memory is from “computer memory,” which is highly reliable, but also highly simplistic in the abstract sense.

Yes, computer memory is simplistic in function, complex in realization. Human memory is complex in function. Is it simplistic in realization?

“If at first you don’t succeed, Mr. Kidd…” “Try, try again, Mr. Wint.”

Wint and Kidd, from one of the best James Bonds, Diamonds Are Forever, give a good example of a triggered recall when Mr. Kidd supplies the prompted ending to a familiar proverb.

Is familiarity different from other memory?

What about the “tip of the tongue” phenomenon, when you sense something but can’t verbalize it? Is that a form of memory, or something else? If it is memory, what is the reason behind this imperfectly formed recall?

No, memory is not what we think it is. But for man and computer, it is an essential.

We know that memory, human memory, is flawed, compared to reliable computer memory — or is it? Maybe we’re just using it improperly.

Ending the Threat

Now, what about those Killer Robots?

If you need to kill robots that are coming after you, one thing you could do is leave banana skins all over the floor.

Incidentally, if you’re wondering about “thinking, killing robots,” at what point does it matter if they think? If they’re sent out to kill you, what difference is there, and how does it matter, if they’re directed — controlled by a geek in a control center in India — or if the device reasons for itself?

On the other hand…

What makes us think, if someone created a billion-brain super-brain, they wouldn’t keep it to themselves to win lotteries and make the next “Microsoft” or “Apple” or “Google” or something?

And, needless to say, if something were that smart, it wouldn’t see us or our values or ambitions in the same way as we do. We certainly wouldn’t be able to tame it to pander to simple, somewhat pathetic and self-serving, demands.

How would we get it to take us seriously?

The danger is not of wild, maniacal, killer robots, then, but of pathos: of them ignoring us, and of us, their creators, being unworthy, beneath them. We’d die of embarrassment.

Pushing the Hype

Lately, talk about A.I. has been coming hot and heavy. This recent A.I. “thing” — the claims of its insurgency and dominance — speaks to the new applications for A.I.; that is, the progress being made in using existing A.I. for things like analyzing behavior patterns, finding ways to spy on us, and plying us with more ridiculous ads, promos, and propaganda. It’s enough to make Scrotie McBoogerballs retch.

No surprise that Elon Musk dove right in on A.I., and lookie here, how his “apocalyptic vision” parallels Bill Gates’.

Elon Musk Unveils Apocalyptic Vision For The World

Elon Musk Nightmare Looms: Army Seeks “Internet-Of-Battlefield Things” With “Self-Aware” Bot Swarms

Microsoft’s Bill Gates insists AI is a threat

It’s laughable.

If “news” actually worked, and the science and engineering community were making great strides in developing autonomous robots, we would have had news of breakthroughs periodically, for many years, each one describing and explaining another breakthrough leading towards eventual useful “thinking machines,” to serve man.

In the meantime, back in the real world, the face of “Killer A.I.”

Robot Security Guard “Commits Suicide” In Mall Fountain


There was a more recent story about another “robot guard,” in San Francisco, patrolling the streets in front of businesses to discourage the homeless from setting up camp in front of those establishments. People ambushed, upended, and defiled it, smearing it with feces and BBQ sauce. From all appearances, it wasn’t a robot at all, but a sort of drone that merely rolled around harmlessly in a prescribed area. It didn’t seem to have any other capabilities, but, obviously, it would pose a nuisance for loiterers and malingerers. It looks strangely like the “suicidal” robot security guard.

The Big Question

And so, we come to the fundamental question. We’ve surveyed various computing techniques, called A.I., that are really just computer simulations. With pure computer A.I., we could achieve the same results if we had the time and patience to perform the calculations by hand.

Suppose you did so. Now, if you take all of the written notes you accumulated, say, while duplicating a computer’s chess play, or a “neural network,” and lay them out on the table, could they ever be intelligent? No matter how you arranged and rearranged them? No? That’s what I hoped you’d say. So why do you think a computer could be intelligent?
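To underline the scratch-pad point, here is the entire computation performed by one “neuron” of a toy neural network, in Python. The weights and inputs are arbitrary numbers of my choosing; every step is plain arithmetic you could do on paper:

```python
import math

# One artificial "neuron": a weighted sum followed by a squashing
# function. Inputs, weights, and bias are arbitrary illustrative values.
inputs  = [0.5, -1.0, 2.0]
weights = [0.4,  0.3, 0.1]
bias    = -0.2

# Weighted sum: exactly the arithmetic you could do on a scratch pad.
total = sum(x * w for x, w in zip(inputs, weights)) + bias

# Sigmoid "activation": still just arithmetic.
output = 1 / (1 + math.exp(-total))

print(round(output, 3))  # 0.475
```

A whole “neural network” is only this, multiplied out thousands or millions of times: your table of notes, done faster.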

With regard to robotics, simulating certain human behaviors doesn’t imply intelligence.

Writing down and solving math equations doesn’t have the same visceral appeal, somehow, as Killer Robots — so, instead, we’re treated to pompously constructed tales of scary mechanical monsters and gooey pulsing manufactured super-brains. No, it wouldn’t have the same appeal to see a scientist holding up a math textbook and saying, “The math texts are taking over… My God! Won’t someone LISTEN? They’re attaining life and taking over the world!”

Fact is, there are no “thinking machines,” and why would you think that there are? Edsger Dijkstra made a great observation, saying, “the question of whether machines can think is about as relevant as the question of whether submarines can swim.”


This discussion aimed to show that at no time does the computer do any “figuring out” of anything on its own; it merely and blindly follows the repetitive process it has been set up to perform.

The computer is a bunch of electronic circuits that act like a giant, fast scratch pad, manipulating numbers and symbols. The “intelligence” is in the people who cleverly find ways to make that useful, as with brute-force methods of playing games.
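“Brute-force methods of playing games” can be illustrated with a toy solver. The game here (a single-pile Nim variant of my own illustrative choosing: take one to three stones per turn, and whoever takes the last stone wins) is solved not by any figuring out, but by blindly trying every possible continuation:

```python
from functools import lru_cache

# Brute-force game solving: no "thinking," just exhaustively trying
# every line of play. Toy game: one pile of stones, each turn a player
# takes 1-3 stones, and whoever takes the last stone wins.

@lru_cache(maxsize=None)
def current_player_wins(stones):
    if stones == 0:
        return False  # no move left: the previous player took the last stone
    # Try every legal move; if any leaves the opponent in a losing
    # position, this position is a win.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The "losing" pile sizes turn out to be the multiples of four.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

The program “plays perfectly,” yet all it does is repeat one simple rule check millions of times — the computer’s actual strength, as argued above.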

It proved to be a lot of work, sharing of ideas, trial and error, and constant improvement, to come up with these new applications in computing.

In investigating Expert Systems, we found that neural nets depend on computers to implement a trick: merely a novel way to arrange data. The system has to be fed the data, unlike any form of autonomous life, and certainly unlike humans. We also looked at Bayesian systems, which utilize probabilities and can be very useful in helping us come to conclusions, with the caveat that man had to program in the (possibly faulty or incomplete) data.

Those who misrepresent A.I. are the “people unclear on the concept”: commentators somehow given a platform without an inkling of what they are talking about, who exaggerate the abstract capabilities of “A.I.”

We also discovered that the spurious claims of A.I. can be traced to a desire to manipulate individuals or the public, or to merely create publicity to sell a product.

Shills exaggerate the computer’s role and do not appreciate the cleverness of the programmers. However, it’s not the fault of the computer, or programmers, if people get confused about reality.

In some ways, our perceptions depend on our definitions. If you think that winning at chess is “intelligent,” then the machine will seem intelligent to you. Reality, though, does not come down to just anyone’s random beliefs. Instead, proper definitions are required, so we know what we are talking about.

The Turing Test was an attempt in this direction. Basically it proposed that if a computer can fool you into thinking it is a human, say, via anonymous conversation, it’s intelligent.

But, applying this reasoning only succeeded in proving that humans are really, really easy to fool.

Mankind has to progress beyond the bugaboo of standing in awe of nonsense just because it is a little bit complex, attributing to it mystical “powers,” or intelligence.
