Algorithms Are No Better At Telling The Future Than Tarot Cards Or A Crystal Ball

According to a new report, "An increasing number of businesses are investing in advanced technologies that can help them forecast the future of their workforce and gain a competitive advantage." It's true: almost every day we see more bollocks written by supposedly intelligent people who believe that, by using something called 'Big Data', machines can already be relied on to make better decisions than humans, and that computers will soon equal or even surpass us in intelligence.

 

Such people are to be pitied rather than despised; obsessed with 'science', they are simply not intelligent enough to distinguish factual information from the far-fetched fantasies of science fiction writers. Having already put a very successful career in Information Technology behind me (I had to retire early due to health problems), I have always maintained that machines will only be capable of behaving intelligently if we radically redefine what we mean by 'intelligence'.

 

Personally I am quite sure there is a little more to our thought processes than the ability to parse vast amounts of data extremely quickly and filter or match certain keywords. Language is how we communicate not only information but ideas, emotions and stories, and machines have no ability to infer meaning from words. You can feed a million words into a computer along with their definitions, and when you enter a word and ask for its definition, a simple program will display the answer almost instantly, without the machine having the slightest idea what any of it means.
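The point can be shown in a few lines of Python (the words and definitions here are, of course, made up):

```python
# A toy definition lookup: the program returns answers instantly,
# but nothing in it "understands" a single word.
definitions = {
    "apple": "a round fruit with firm white flesh",
    "computer": "an electronic machine for processing data",
}

def define(word):
    # Pure string matching against stored keys; no meaning is involved.
    return definitions.get(word.lower(), "no definition found")

print(define("Apple"))     # a round fruit with firm white flesh
print(define("meaning"))   # no definition found
```

The lookup succeeds or fails purely on key matching, which is exactly the point: retrieval is not comprehension.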

 

Many analysts, business consultants and hi-tech corporations, however, continue to believe that, with enough data, algorithms embedded in currently fashionable People Analytics (PA) applications can predict all aspects of employee behaviour: from productivity, to engagement, to interactions and emotional states. Predictive analytics powered by algorithms are designed to help managers make decisions that favourably impact the bottom line. The global market for this technology is expected to grow from US$3.9 billion in 2016 to US$14.9 billion by 2023.

 

Despite all the usual promises and all the geek mythology, predictive algorithms are as mystical as the oracles and auguries of the ancient world. One of the fatal flaws of predictive algorithms, the one that has made such nonsense of the predictions of climate change soothsayers, is their reliance on "inductive reasoning". This is when we draw conclusions based on our knowledge of a small sample, and assume that those conclusions apply across the board. It is the methodology that predicted the Remain campaign would win Britain's EU referendum and that Hillary Clinton would annihilate Trump in the US Presidential election.

 

Where inductive reasoning falls down is that it 'thinks' like a machine. To put it in human terms, a manager might observe that every employee in his company who holds an MBA is highly motivated. According to inductive reasoning it therefore follows that all workers with an MBA are highly motivated. The conclusion is flawed because it assumes a consistent pattern where there are many unpredictable factors in play.

 

Experience to date tells us the pattern exists, so there seems to be no reason to suspect it will be broken. In other words, inductive reasoning can only be justified inductively: it works because it has worked before. Therefore there is no logical reason to suppose that the next person our company hires with an MBA will not be highly motivated. That is how machines think. A human manager looking for a highly motivated candidate to fill a position would not make assumptions based on the kind of qualification candidates hold, but would frame questions in the interview to explore that aspect of each candidate's suitability.
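The flaw can be made concrete with a small, entirely hypothetical Python sketch: a rule that holds in the observed sample fails in the wider population.

```python
# Hypothetical sample: every MBA holder the manager has observed so far
# happened to be highly motivated.
sample = [{"mba": True, "motivated": True} for _ in range(8)]

# The inductive generalisation drawn from that sample:
def rule_holds(people):
    return all(p["motivated"] for p in people if p["mba"])

# The wider population contains the unpredictable cases the sample missed.
population = sample + [{"mba": True, "motivated": False}]

print(rule_holds(sample))      # True  - induction looks justified
print(rule_holds(population))  # False - one counterexample breaks the "law"
```

However many confirming cases the sample contains, a single unobserved counterexample is enough to falsify the generalisation.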

 

And until machines can handle unpredictability we should stop indulging fantasists by talking about Artificial Intelligence and refer more realistically to data processing.

 

 

A Chronicle Of Decay (slam poem)

Comments

Eileen de Bruin Added Feb 14, 2018 - 2:34pm
 
Nobody can be honest, nobody can speak out,
telling the truth as they see it because honesty
is insensitive and opinions may offend.
All the while as we yield to bullies bluster
and bend before the despot's might
and when rivals take what is rightly ours
we meekly ask if we can give them more.

 
Yes, right on.
 
Inductive reasoning is based on a proscribed logic, not a normal human being's approach. Artificial intelligence is just that. I suppose that we are on a high road to implosion of buffoonery and bluff. It does sell well, though. Conviction of purpose provides the saleable outcomes, irrespective of truth, hope, charity, faith or desperation, eh?
 
Robert Wendell Added Feb 14, 2018 - 2:46pm
I believe you're right for the near future, Ian. However, although algorithms are not how our brains work, neither are they how the most recent implementations of AI are working. Programming has essentially nothing at all to do with modern AI research. Yes, there is programming involved, but only as tools for enabling and/or simulating non-algorithmic data processing.
 
Machines can now learn. They don't do this by inventing new algorithms, but by methods that crudely imitate the way our brains work. As brain research and neurology in general advance, the adverb "crudely" will become less and less applicable.
 
Even so, I do not deny the human capacity to access the infinite intelligence of nature that structures and essentially resides in human consciousness. Machines may never be able to do this, but if they can't, it could be a practical matter of human limitation rather than a matter of absolute principle. After all, whatever there is in machines that we can pretend is intelligence is, in fact, merely human intelligence we effectively downloaded to machines. It's outboard human intelligence.
 
Matter is not really matter if you understand modern physics even from an intuitive lay perspective. We are abstract forms with matter and energy flowing through our abstract formal structure. Our matter turns over every seven years or so. Our energy turns over daily. We can say the same about any ecology and indeed, about the entire planet.
 
The flow of matter through this enormous range of formal patterns creates the illusion of concreteness in relatively short space-time frames. The big view sees this as an illusion that appears true only in relatively small space-time domains. The former is the materialistic view. The latter is a spiritual one. I hold that the latter is the far more fundamental and ultimately valid view.
 
My sense is that the difference you see between humans and machines with respect to intelligence is based on the more spiritual perspective, at least in the broader, philosophical sense. We don't have to invoke any specific religious belief systems to talk about spirituality unless we are fundamentalists.
 
Fundamentalists within every religion pretend superficial space-time phenomena are fundamental to their belief systems. Meanwhile, they ignore the deeper spiritual truths embedded in them. These genuinely fundamental spiritual values are too abstract for the average human, so they get attached to the superficial religious markers.
 
They foolishly believe these make them proudly unique, which they do. But we should be anything but proud of that if we're in that category. Then, in their stupid arrogance, they kill each other over the superficial differences, which never really mattered in the first place.
Doug Plumb Added Feb 14, 2018 - 2:53pm
I believe that human intelligence has two basic modes of operation: inductive and deductive reasoning. Inductive reason is applied to observations to create "laws" which are really maxims of understanding because the world really and truly does not exist as we see it - in a scientific sense. Deductive gives us math and moral reasoning.
  Computers cannot do math and they cannot do moral reasoning. Even if a computer could think to do an inductive proof it wouldn't know that it's a proof. It can only be a successful experiment.
  Big data and big data analysis have their uses. But they will be oversold. They are only as good as their programmer and their user.
 re "Despite all the usual promises and all the geek mythology, predictive algorithms are as mystical as the oracles and auguries of the ancient world. "
  I watched a YouTube video, from MIT I think, on AI. They explained the algorithms - quite simple. I understood their explanation of how they programmed symbolic integration. It's easier than you think if you haven't seen it.
  The other problems you mention are problems that can for the most part be easily recognized and easily fixed.
  "They" (the communist bureaucracy) doesn't want to "fix" the AGW predictors - they still won't work and the gravy train is still running. I think they have been "unfixing" them so they do work or at least so they can pretend they work.
  I started learning linear algebra; it's very interesting and powerful once you get beyond what engineers were typically taught. In some ways it is as magic as calculus.
Ian Thorpe Added Feb 14, 2018 - 4:02pm
Eileen, thanks. I was on fine form when I bashed those lines out on my keyboard.
I think the danger of AI lies in the potential for tech-hype to convince younger generations that the way computers think is better than the way the human brain does. This is why I believe people should get in touch with nature.
Doug Plumb Added Feb 14, 2018 - 4:11pm
The whole thing scares me.
Robert Wendell Added Feb 14, 2018 - 4:12pm
"...proscribed logic..not a normal human being’s approach. Artificial intelligence is just that." - Eileen
 
As I tried to explain in my previous post, this is simply not so. Image recognition, other types of recognition, etc. are not based on "proscribed logic". They're based on neural networks learning by looking at zillions of images. Together with feedback on their degree of success, they learn to recognize images increasingly well as the neural network modifies itself as it gains experience.
 
That's what I meant when I said programming is irrelevant as far as the fundamental methodology is concerned. The bottom-line approach to AI is to reverse engineer the brain; NOT to program a system to do a specific task.
 
Ian Thorpe Added Feb 14, 2018 - 4:18pm
Robert, a well thought out response, but I think you are giving too much credibility to the Silicon Valley propaganda. The case is computers still work the way the LEO2 worked when I loaded my first program from punched paper tape in 1968. Logic gates still exploit the same natural phenomenon as when the first one was demonstrated in, I think, 1870.
 
The problem with machines is they can only do as instructed. Thus they apply linear projections to random, flexible, non-linear systems. Logic gates, arrays of which do all the business in a computer, are set to detect whether certain conditions are "true". They do this by detecting three states: high (+), low (-), and a third which most people overlook, nul, the state of nothingness. Nul is kept free of any charge by being clamped to earth and is necessary so that the gate has something against which to compare the other two states. The processor then shifts data or makes decisions by testing whether a number of conditions are true. These are:

NOT, AND, NAND, OR, NOR, EX-OR and EX-NOR.

Avoiding technical jargon: assume A and B are the inputs and O is the output, and that O=true means the logic gate lets the data bit pass through. A very simple explanation of the logic follows:

(NOT gate) O is true if A is not true
(AND gate) O is true if both A and B are true
(NAND gate) O is true unless both A and B are true
(OR gate) O is true if either A or B is true
(NOR gate) O is true only if neither A nor B is true
(EX-OR gate) O is true if exactly one of A and B is true
(EX-NOR gate) O is true if A and B are the same
Those tests are performed by setting switches, long ago by physically setting rocker (bootstrap) switches, now they are set in software.  The rest is programming.
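For readers who want to see the gate tests written out, here is a minimal Python rendering of the same logic (a sketch of the Boolean tests, not of how gates are physically built):

```python
# The basic gate tests expressed as Boolean functions on inputs a and b.
def NOT(a):     return not a
def AND(a, b):  return a and b
def NAND(a, b): return not (a and b)
def OR(a, b):   return a or b
def NOR(a, b):  return not (a or b)
def XOR(a, b):  return a != b   # EX-OR: true when the inputs differ
def XNOR(a, b): return a == b   # EX-NOR: true when the inputs match

# Shifting data and "making decisions" reduces to combinations of these tests.
print(NAND(True, True))   # False
print(XOR(True, False))   # True
```

Every higher-level operation a processor performs is built by wiring combinations of these elementary tests together.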

So, armed with that knowledge can you really foresee a computer being able to decide whether to have chocolate cake or apple pie without a human to think for it?
 
This goes some way towards explaining why, when I go to the Kindle store to buy a new book, Amazon's much vaunted AI often tells me the book I most want to read next is one I already read, and bought from that same Kindle store.
 
When we've encountered logic like that a few times it should help us understand human intelligence is of a far higher order.
 
Ian Thorpe Added Feb 14, 2018 - 4:25pm
Doug, thanks for mentioning that computers cannot do mathematics. I think people have forgotten amid all the hype that "binary arithmetic" is only how human programmers translate the moving of static charges (positive + and negative -) between different locations in memory into something the human mind can make sense of. The computer, having no awareness, does not need to make sense of anything; it just does as the program commands.
Prof Claudewell Thomas Added Feb 14, 2018 - 4:25pm
Ian,
Is the introduction of the various stages of the autonomous car illustrative of  the resulting alteration of the entire social fabric well beyond the reach of task specific algorithms?
George N Romey Added Feb 14, 2018 - 4:29pm
There is a faction of society truly hell-bent on replacing the human mind and ability. But what to do with the excess humans? Their answer is a massive form of welfare known as Basic Income. I have no desire to live in that world.
 
Companies are obsessed with data-driven-only decision making, even as it proves just as fallible as human thought.
Dino Manalis Added Feb 14, 2018 - 4:33pm
Algorithms help, but nothing's perfect.  Instincts are sometimes important in making decisions.
Ian Thorpe Added Feb 14, 2018 - 4:35pm
Prof. I can't answer that without going into great length on certain philosophical concepts. I have read, and find it credible, that the long term intention with autonomous cars is that nobody will own a car in the future, we will simply summon one from a public fleet, tell it where we want it to go and sit back. I don't think the implied loss of personal freedom has sunk in for a lot of people yet. For example, I often get in my car and set off without having a clue where I will end up.
It would be useful if Burghal showed up in the thread to expound on freedom and control.
Dave Volek Added Feb 14, 2018 - 4:39pm
I used to scoff at those scientists and engineers building autonomous cars for the past four decades. I did not like taxpayer dollars going to such research as I believed that driving an automobile was just too darn complex to be done by computers and software.
 
Turns out I was wrong. In ten years, there will probably be just as many driverless cars as driverful cars. In another decade, human drivers will be banned from many roads and streets--just like horses in the 1920s.
 
We should be careful when predicting the limits of AI.
Tamara Wilhite Added Feb 14, 2018 - 6:36pm
AI will be biased based on the biases of their programmers, since the AI will "learn" based on the data sets selected by the programmers.
The AI will then be assumed to be unbiased and neutral when in reality it wouldn't be allowed to do anything unless its creators agreed with it.
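Tamara's point is easy to demonstrate: a learner trained on a data set the programmer skewed will reproduce the skew and present it as a neutral rule. A hypothetical sketch in Python:

```python
from collections import Counter

# Hypothetical, deliberately skewed training data chosen by the programmer:
# group A is mostly hired, group B mostly rejected.
training = ([("A", "hire")] * 80 + [("A", "reject")] * 20 +
            [("B", "hire")] * 20 + [("B", "reject")] * 80)

def fit(data):
    # A trivial "learner": remember the majority label seen for each group.
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit(training)
print(model)  # {'A': 'hire', 'B': 'reject'} - the bias is now "the rule"
```

Nothing in the learning procedure is biased; the bias lives entirely in the data the programmer chose, yet the resulting model will be presented as an objective decision-maker.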
Robert Wendell Added Feb 14, 2018 - 6:57pm
Ian, you say, "The case is computers still work the way the LEO2 worked when I loaded my first program from punched paper tape in 1968."
 
True, but why are you conflating how our current computers work with AI? They are NOT at all the same thing. Even just reading your article I got a whiff of this idea you seem to have that confuses them with each other. Do you know what neural networks are? They're networks of devices that crudely emulate neurons.
 
They're usually simulated with conventional computers instead of using physical devices, but that's changing fast. Now they're starting to use physical components that emulate neurons, still crudely, but no longer simulated. They're much faster when you don't have to use all those digital processes to simulate them. They are not themselves anything you can call digital devices and still be fully accurate.
Jeff Jackson Added Feb 14, 2018 - 7:44pm
Nice article, Ian. Um, remember when Knight Capital lost $500 million in just a few seconds? They had the stones to ask for it back! The SEC said, sorry, if your computers lost your money, it's YOUR problem.
Neural networks are progressing, and AI is coming of age. By the way, one of the pioneers, Marvin Minsky, I think it was, from M.I.T. had it completely wrong, but sometimes you have to be completely wrong to figure out what is right.
The interpretation of words can have many meanings, so the best a computer might do is think of several interpretations. I don't think the computers will interpret any better, only maybe a bit faster. Many of us have seen computers fail miserably, and I think that will continue as long as people program them. They say that they will program themselves, but the idea still started from humans.
Leroy Added Feb 14, 2018 - 8:19pm
It's not that AI can't give us the right answers so much as it is that we refuse to accept it.  Mortgage lending might be one example where AI is used to determine whether or not a person is a good risk.  Discriminating by race is an unacceptable answer.
Katharine Otto Added Feb 14, 2018 - 8:35pm
Ian,
Good article.  I don't believe in predictions.  The idea that a computer will ever be able to predict human behavior, or the behavior of groups, is ludicrous.  Even if you had a perfect computer, you would need perfect humans (in the programmers' visionary eye) to make the predictions approach validity.
 
Robert Wendell,  I agree with much of what you say regarding the spiritual component of human consciousness, but the human brain is much more complex than even the neuro-scientists imagine.  For one thing, while neurons generally don't reproduce after a certain age, they are constantly sprouting new dendrites (receivers of impulses) or allowing them to wither away.  Since neurons make numerous connections with other neurons, this means the brain is constantly re-wiring itself.  I don't believe AI will ever match this level of complexity or plasticity.  
 
Dave,  I don't want to be on the same streets with driver-less cars, because no matter how fallible people are, machines are more so.  Modern technology is over-rated, according to me, and it is not that reliable.  It seems to fail when you need it most. 
 
Doug Plumb Added Feb 14, 2018 - 8:55pm
I wonder if they will put a bar in place of the steering wheel and pedals. That would push things along for driverless cars.
Leroy Added Feb 14, 2018 - 9:14pm
"I wonder if they will put a bar in place of the steering wheel and pedals. That would push things along for driverless cars."
 
Great!  You could get sauced going to work.
Robert Wendell Added Feb 14, 2018 - 9:52pm
Yes, Jeff J.! The computers represent outboard bits of human intelligence. However, neural networks are learning devices. Someone still has to program the interfaces. The networks themselves are also usually simulated digitally. That's where the human programming ends unless we include the protocol design that presents the information environment that in turn feeds the neural network's learning process.
 
After that, the neural networks program themselves via the recursive process of attempting answers, receiving the results and trying again. It's an iterative process much more like human functioning and gaining experience. It's nothing like the human programming of digital devices. There is no hint of these differences in the article. It's as if AI were nothing more than humans trying to program devices to be intelligent. Wrong!
 
That's quite far from what AI is doing these days. It wouldn't be nearly as successful as it's getting to be if it were not very much more than that. Again, we're attempting to reverse engineer organic brains.
 
We're not very good at that yet, but we're getting better fast. This is true at both ends:
* understanding the brain much better
* building improved machine analogs of the brain
 
That is the essence of what AI is today. It's not about programming machines to be intelligent. Today's consumer devices are not examples of AI. Nobody's laptop or desktop computer is an example of AI as some comments here imply.
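The trial-and-feedback loop Robert describes can be sketched, very crudely, as a single simulated neuron learning the AND function; nothing below encodes the AND rule itself, only the learning procedure:

```python
# A one-neuron "network" learning AND by attempt, feedback, adjustment.
w = [0.0, 0.0]   # connection weights, modified by experience
b = 0.0          # bias term
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                      # repeated attempts...
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # ...with feedback on success
        w[0] += 0.1 * err * x1           # the network modifies itself
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

No programmer wrote an AND rule; the weights were found by iteration, which is the (very simplified) sense in which neural networks "learn" rather than being programmed.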
Robert Wendell Added Feb 15, 2018 - 12:35am
Katharine said, "I don't believe AI will ever match this level of complexity or plasticity." - Katharine Otto
 
You may well be right about that, Katharine. I don't know. But that doesn't make the fundamental assumption behind this article any more correct. That assumption of the article seems to be that AI consists of programming algorithms to imitate all the behaviors we want to see in an AI system of any kind.
 
That is far from what AI currently is or aspires to be. There may be a limit to how closely we can match the level and plasticity of the human brain. This aspect of plasticity in producing a quasi-human AI brain analog is something I suspect would be the most difficult part (if even possible at all) of an otherwise successful attempt.
opher goodwin Added Feb 15, 2018 - 3:52am
Ian - yes, there is more to intelligence than just processing large amounts of data. But on the other hand, algorithms are extremely good at predicting human behaviour and attitudes. They can analyse our internet behaviour and predict our buying habits and interests. Soon there will be no need to go and vote; your computer will know how you should vote better than you will.
Neil Lock Added Feb 15, 2018 - 5:14am
Ian: I expect that "people analytics" will, in due course, go the same way as the passing fad of "time and motion studies" in the 1970s.
 
There seems to be a potential problem with the idea of an AI trying to simulate a brain. How will we (or it) know whether or not the results of its thinking are correct, relevant or even sensible? It's actually worse than the problem of testing programs produced by humans (something I've done professionally for many, many years). At least there you can compare the results with the spec (assuming you have one!) and say "that's wrong, it should have done this." But with an AI, how do you work out whether it is producing sense or nonsense? As Katharine says, "no matter how fallible people are, machines are more so."
opher goodwin Added Feb 15, 2018 - 5:27am
Hopefully we can always pull the plug on a machine!
Doug Plumb Added Feb 15, 2018 - 5:47am
On a youtube talk, one of these scientists said that AI isn't great at everything, but it tends to do well at things people do well at.
opher goodwin Added Feb 15, 2018 - 6:45am
Doug - the speed at which AI is developing is drawing in the big bucks. I think we will see major developments because of that. There is no telling where this is going.
Leroy Added Feb 15, 2018 - 7:07am
"Soon there will be no need to go and vote your computer will know how you should vote better than you will."
 
During the election, my millennial colleague couldn't decide how to vote.  One morning while traveling together, he expressed with glee that he had found an application to help him determine how he should vote.   Maybe we are already there.
Ian Thorpe Added Feb 15, 2018 - 9:59am
Leroy, ah, but that's a millennial you are describing, a very different creature from a human being.
Ian Thorpe Added Feb 15, 2018 - 10:04am
Doug, I think you hit the nail on the head there: AI does well at things people do well at because it needs people to program it. As for all this stuff about computers learning by looking for patterns, that's exactly how the analytical tools we were writing in the early 1970s worked. Except we did it with 256k of RAM (if we were lucky).
Ian Thorpe Added Feb 15, 2018 - 10:15am
There is obviously so much confusion about what neural networks are and how they work in comparison to a brain (not necessarily a human brain), and there are a few gullible people so besotted with technology that they are willing to believe all the hype put out by the tech corporations, that I need to do some extensive explanatory articles (not that the truly faithful will believe me).
 
For now I will just add a caveat emptor to those suggesting we should blindly buy into all the hype about AI and a future of transhumanism. Just pause a while and consider the plans of corporations like Google, Amazon and Microsoft, the Internet of Things, implanting chips in our bodies and consider how much control these very dodgy businesses are planning to exercise over our lives.

I'll get back to everybody eventually but Autumn does not like long strings of comments by the same person, and I agree with her, it looks horrible. Which of course gives those of us in Europe or the Antipodes a problem as we are sleeping when the threads are at their most active.
Neil Lock Added Feb 15, 2018 - 10:38am
Ian: I don't think Autumn intended the two-consecutive-comments rule to apply to headpost authors. Opher in particular chooses to respond to comments individually. So do I, most of the time.
 
Anyway, I look forward to your thoughts on AI tomorrow!
Doug Plumb Added Feb 15, 2018 - 11:51am
If we didn't have a war making state and the occupational forces we have all over the world, AI would not be developing at the rate it is. We may not yet even have the transistor.
  I think modern people are smarter - it takes quite a bit to learn how to use a computer - more than any other machine I can think of, anyway. People can use their computers for mental stimulation or just to play games - which are a form of mental stimulation. Almost everything you do involves your mind, and it's not limited.
  I think people that like ideas are a lot happier in this tech world. I am I think, but I don't think the body count and corruption is worth it.
Prof Claudewell Thomas Added Feb 15, 2018 - 11:53am
Ian:
I also find a lot of millennials odd, but I belong in the era of bell, book and candle. What if the algorithm's future is determinable only by the occupants of the brave 'new' world? Transhumanism is a frightful possibility even if it means a disconnect with mankind's historical/evolutionary past.
 
opher goodwin Added Feb 15, 2018 - 12:03pm
Leroy - according to Harari, your computer knows more about you than you do. They'll soon be making all manner of choices on our behalf.
Bill H. Added Feb 15, 2018 - 12:31pm
 
As we blindly allow machines to make more and more of our decisions for us, we are witnessing the effects of our diminishing ability to make decisions on our own.
As these machines become manipulated by those who seek to control our lives and make our decisions for us, we in turn continue to obediently supply them with even more information to make even more decisions that affect and control our lives.
Our search engines, social media, and "smart phone" apps are designed not only to analyze our preferences and lifestyles, but also to manipulate and control them. In turn, we are "rewarded" with instant gratification and what we believe are products and information that are exactly what we want and need. All we need to do in return is supply our "controllers" with information concerning our preferences, location, politics, DNA, income levels, buying habits, medical history, age, real-time driving habits, and an increasingly intrusive list of other data that we have been lulled into believing is "normal" in this day and age, along with actually believing that the information we supply will go no further than the recipient requesting it. Do some research on the latest ultra-invasive technology like the Amazon Echo and Google Home. Do you really believe that the only time information is sent to their voice analysis servers is when you utter the phrase "Alexa" or "OK Google"?
Just as the idea of Uber was based on the implementation of driverless cars, Big Data is pushing the limits of data collection to allow the implementation of the implanted microchip as the final frontier.
So we just sit back and let it happen, right?
Ian Thorpe Added Feb 15, 2018 - 12:52pm
Neil, OK I'll try a few comments and see if I get a bollocking. I still think a long string of comments by the same person is visually unattractive and makes the thread hard to follow.
You're right about the problems of A I trying to simulate a brain. Human intelligence is not purely data driven and to model a brain we would need a complete understanding of how brains work - and we are a long way from having that.
Twenty years ago I suffered a massive brain haemorrhage which resulted in my losing about 35% of my brain tissue. The prognosis was that I'd spend the rest of my life in bed, would have lost most of my memory and cognitive ability (as well as movement on my left side), and my family were warned that I would no longer be the person they had known.
Well I never recovered full movement but I get around with the aid of a stick (and a hot hatchback), memory and thinking skills returned within two months and I still had my sense of humour. I'm not unique, there are many such unanticipated recoveries on record. On the other hand some people can get a minor bang on the head and lose memory, speech, everything (I was in rehab with such people).
About a year later while talking with one of the doctors who dealt with my case, he told me I had surprised all the staff at the hospital but very interestingly said, "We are just beginning to understand the brain and the mind are two very different things."
It seems that if the AI boys do ever manage to model the brain, that's only the beginning for them.
Ian Thorpe Added Feb 15, 2018 - 1:10pm
Dave V, my reading (and I'm a petrolhead from a family of petrolheads, so I read quite a bit about cars) suggests that, if the tech companies were honest, they would admit fully autonomous cars are a lot further away than we are being told.
Aside from the major problems (they just don't work in bad weather) I think one problem that is not discussed is that far too many of us love driving our cars far too much to give them up easily. While I can see autonomous cars being popular in cities like London or New York, the rural lanes close to where I live are a joy to meander along with no planned route in mind.
Doug Plumb Added Feb 15, 2018 - 1:14pm
re "Big Data is pushing the limits of data collection to allow the implementation of the implanted microchip as the final frontier."
 
I think the bigger and more immediate threat is electronic currency. Once that happens the state will be able to shut you off and you will go to them. No need for courts or cops. Imagine if after ten years all the machines turn off. No one has any money or contact with people they normally interact with. Society could be destroyed just by pulling the plug. That's why everything forbidden must be made legal so there is no reason for anyone to have cash.
Ian Thorpe Added Feb 15, 2018 - 1:19pm
Tamara, I'm with you there. Though parents and grandparents may sit for hours with children going through the ABC books, a is for apple, b is for bike, c is for cat, learning by rote is not the only way humans learn. One of the most important ways is learning by osmosis, we absorb a lot of information from our environment and often find we know stuff that we were not aware we knew.
Then there are things all creatures are born knowing. Nobody tells a young deer, bear or wolf that fire is dangerous yet the first whiff of smoke from a forest fire and they're smart enough to run away.
 
There are many mysteries in life that science is nowhere near understanding at the moment but until we do understand them we cannot build machines that will emulate intelligence.
Ian Thorpe Added Feb 15, 2018 - 1:33pm
Jeff: re Minsky & Co.
"Humans, they maintained, are actually machines of a kind whose brains are made up of many semiautonomous but unintelligent “agents.” And different tasks, they said, “require fundamentally different mechanisms.”
 
Yes, I've heard this many times over the years. In my experience many people who work in the academic field are big on book learning but have a dearth of life experience and thus little understanding of how humans and animals work.
Pavlov's dogs may be a valid experiment in controlling behaviour, but I always ask why Pavlov chose not to work with cats.
Ian Thorpe Added Feb 15, 2018 - 1:44pm
Katherine, too often in our Brave New World we come across ideas that would be great if only all people were the same. How many times do we see some do-gooder declaring, "The world's problems could be solved easily if only everybody would stop living for themselves and learn to live for each other."
Any proposed solution that includes the phrase "if everybody" is doomed, because of course 'everybody' will not do the same thing.
I think the failures of communism and socialism prove the point. These political philosophies (collectivism) rely on people surrendering their individuality and working only for the common good.
As people in Cuba were saying a decade after the murderous Batista regime had been overthrown by Castro: "Better blood with Batista than hunger with Castro."
Somehow, once the incentive of self-betterment is removed, humans lose interest in the common good.
Ian Thorpe Added Feb 15, 2018 - 1:48pm
Opher,
"But on the other hand algorithms are extremely good at predicting human behaviour and attitudes."
Then why can't Amazon suggest for my next purchase a book I might be even remotely interested in reading? Algorithms are very bad at predicting human behaviour. Trump. Brexit.
Ian Thorpe Added Feb 15, 2018 - 1:50pm
Leroy,
"Soon there will be no need to go and vote your computer will know how you should vote better than you will."
 
I thought that happened in blue states over there already. Or is it just dead people the machines vote on behalf of? :-)
Ian Thorpe Added Feb 15, 2018 - 1:55pm
Doug, it's certainly true that military thinking and the desire to weaponize technology has driven the advance of technology much harder than business needs have. I do know however that way back, when the talk was of "The Information Superhighway" the powers that be were aware of the potential for social control information networks offered.
Ian Thorpe Added Feb 15, 2018 - 1:56pm
Prof: I'll get to transhumanism eventually, watch this space.
Ian Thorpe Added Feb 15, 2018 - 2:11pm
Bill, I think governments are relying on us just sitting back and letting it happen, however there does seem to be a growing resistance. I can see one future scenario in which humanity splits into the slaves of technology, and those willing to trade material comfort for independence.
Uber is a very interesting case. From being a ride-sharing app that did not really have a viable business model, it seemed to transform very quickly into a global cab company that thought it could ride roughshod over local laws and regulations.
That transformation probably took place because people with money to invest and a quasi-religious faith in technology startups were prepared to invest big bucks, but the question is whether such a massive cash burner can survive until driverless cars are ready to take to the roads in real driving conditions.
Ian Thorpe Added Feb 15, 2018 - 2:36pm
George, (I said I'd get to everyone eventually, but this thread has gone a bit mad)
There does seem to be a major push to replace human workers and human minds, without anyone appearing to have thought about what to do with all us humans when the ruling elites have no further use for us.
Or maybe they have thought about it and are just not saying because the truth would be the tipping point that drove us to take up our pitchforks and cudgels and march on the seats of government.
Ian Thorpe Added Feb 15, 2018 - 2:38pm
Dino, the human element, especially in social policy, is very important and it can never be incorporated into algorithms.
Doug Plumb Added Feb 15, 2018 - 2:58pm
Prof: I'll get to transhumanism eventually, watch this space.
 
Good.
Doug Plumb Added Feb 15, 2018 - 3:00pm
Prof re "Transhumanism is a frightful possibility even if it means a disconnect with mankind's historical /evolutionary past."
 
Are you saying a disconnection from our history is a good thing?
John Minehan Added Feb 15, 2018 - 5:40pm
"And until machines can handle unpredictability we should stop indulging fantasists by talking about Artificial Intelligence and refer more realistically to data processing."
 
The flaw in that argument is that if you have more data less is unpredictable.
 
There is value in tracking anecdotal data, as it implies two things if it does not fit with the analysis: 1) you have measured wrong; or 2) you have measured the wrong things.
 
Both the Trump election and the BREXIT result were not unpredictable if you questioned what people were telling pollsters.  You might have questioned that based (for example) on what Salena Zito was reporting about Trump support in the Rust Belt.
 
Big data is a revolution, but nothing is perfect.    
John Minehan Added Feb 15, 2018 - 5:48pm
"There does seem to be a major push to replace human workers and human minds, without anyone appearing to have thought about what to do with all us humans when the ruling elites have no further use for us."
 
I really doubt the future is (fully) human.  Some kind of augmented human is probably the end-state.
John Minehan Added Feb 15, 2018 - 6:04pm
"It's not that AI can't give us the right answers so much as it is that we refuse to accept it.  Mortgage lending might be one example where AI is used to determine whether or not a person is a good risk.  Discriminating by race is an unacceptable answer."
 
That was also a great example of a corrupted data field. 
 
NINJA loans were perfect for those who had high incomes in the illegal economy.  (Tony Soprano did not have to pretend to be a waste management consultant.)  "Straw man" purchasers meant that foreigners, constrained by post-9/11 limitations on real estate purchases by those outside the US, could play in the hot US market. 
 
All of this meant that the algorithms used to value derivative instruments were invalid.  Worthless investments can be written off.  Undefined investments have to be held to see what the value turns out to be.      
Prof Claudewell Thomas Added Feb 15, 2018 - 6:09pm
No Doug, on the contrary, the loss of that connection is a frightening prospect. It does occur to me though that my kinship to bell, book and candle might blind me to the possibility that another generation might not get the schrecks at the prospect of loss. With or with ought religious sanction, human connectedness to self and others, past and present, is key.
Prof Claudewell Thomas Added Feb 15, 2018 - 6:51pm
Correction ...without...not with ought...
Eileen de Bruin Added Feb 16, 2018 - 12:30am
Technology’s benefits should ease the burdens on man. But to think on our behalf?  Just based on previous and limited patterns of behaviour, collected only via keyboards and social media?  Seems rather sweeping and narrow.
Dave Volek Added Feb 16, 2018 - 12:26pm
Ian
The article you linked to was written in 2013. I think a lot has happened with software for autonomous cars since then.
 
I recently saw a documentary on the Science Channel. We are probably a few years away from the first autonomous cars. But I can see an intermediate future where long-haul trucking will be a thing of the past. Highway driving, which is not as complicated as city driving, is pretty close to being ready for public release. I envision staging areas just outside of cities where robot-trucks come off the highways, park, and wait for a human driver to take the load into the city.
 
Just the economics of having a truck able to run 24 hours a day is going to build the infrastructure for these staging areas.
 
For city driving, I think autonomous cars will be on the streets before autonomous trucks.
Even A Broken Clock Added Feb 16, 2018 - 12:38pm
So who in Silicon Valley is working on developing the positronic* brain? That's what we really need.
 
Ian, very good post and obviously it struck a chord since it has generated so much discussion. Thanks for sharing your personal story as well.
 
* If anyone was not familiar, positronic brains were a main feature of Isaac Asimov's robot stories.
Ian Thorpe Added Feb 16, 2018 - 1:06pm
John M.
"if you have more data less is unpredictable."
I disagree. The more data we have concerning human behaviour, the more variables are introduced. While most of us do follow repeating patterns of behaviour, we can vary them at any time.
When I was involved in horse racing as a member of an ownership syndicate, because of my computing background I made several attempts to write a program that analysed form. They were reasonably successful but weren't going to make me rich. Watching for the human aberration was the key: a trainer not using one of his regular jockeys, or sending a single horse to a track at the other end of the country for a race bearing only a modest prize, is a signal that a betting coup is being organised. (Grandad Redfern, a bookie in the days before computers, taught me these things.)
So while the program was excellent at highlighting the horses with little chance of winning, the human element was a better pointer as to which would win. The other was inside information from the trainer who looked after our horses.
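The kind of program Ian describes can be sketched in a few lines. Note this is purely illustrative: the scoring rule, the field names, and the thresholds are all invented for this sketch, not reconstructed from his original code.

```python
# Illustrative sketch (not Ian's original program): a naive "form" score
# built from recent finishing positions, plus flags for the human signals
# he describes (an unfamiliar jockey, a long trip for a small prize).

def form_score(recent_finishes):
    """Lower finishing positions are better; weight recent runs more."""
    weights = [3, 2, 1]  # most recent run counts most
    return sum(w / pos for w, pos in zip(weights, recent_finishes))

def coup_signals(runner):
    """Flag the human aberrations that a pure form program misses."""
    flags = []
    if runner["jockey"] not in runner["trainer_regular_jockeys"]:
        flags.append("unfamiliar jockey")
    if runner["travel_miles"] > 200 and runner["prize"] < 5000:
        flags.append("long trip for a modest prize")
    return flags

runner = {
    "name": "Example Lad",                     # invented runner
    "recent_finishes": [2, 5, 1],              # latest run first
    "jockey": "J. Smith",
    "trainer_regular_jockeys": {"A. Jones", "B. Brown"},
    "travel_miles": 250,
    "prize": 3000,
}

print(round(form_score(runner["recent_finishes"]), 2))  # 2.9
print(coup_signals(runner))
```

The form score is the easy, mechanical part; the point of the thread is that the second function only works because a human already knew which aberrations mattered.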
Ian Thorpe Added Feb 16, 2018 - 1:12pm
John M. I think it is fair to say Trump and Brexit were, in the context of this thread, unpredictable as the big data boys failed to predict them. Here again, as you point out, looking at the human factors rather than the statistics was a more useful guide to the mood among voters. 
Ian Thorpe Added Feb 16, 2018 - 1:21pm
Prof. Claudewell, the disconnection from our humanity is indeed a frightening prospect. From your perspective, which I guess is very different to mine, do you find that the increasing pressure on us to experience the world through our computers, isolating ourselves from a lot of casual human contact (in shops, at the bank etc.), has some sinister implications?
I know that in Britain the shift to online trading, helped by government policy and combined with restrictive laws and punitive taxes on independent local businesses, has made many of our town centres look like ghost towns.
Ian Thorpe Added Feb 16, 2018 - 1:25pm
Eileen, I think your point is proved by the irrelevance to our actual lives and interests of the ads fed to our screens by algorithm-driven 'targeted advertising'.
Ian Thorpe Added Feb 16, 2018 - 1:31pm
Dave V, I was aware the article was not recent, but a lot that has been written since by the driverless lobby has been hyperbolic. One of the things we are not being told about autonomous car technology is that, even if we accept the best estimates for economies of scale, the control systems will be so expensive that they will make autonomous cars prohibitively expensive for all but the few.
And the statistics backing claims of greater safety have been found to be seriously flawed too.
Ian Thorpe Added Feb 16, 2018 - 1:35pm
EABC, yeah, and we will need some of the laws from I, Robot to make sure the machines don't go rogue and kill us all :-)
Edgeucation Newmedia Added Feb 16, 2018 - 4:06pm
An algorithm is not perfect. Further, it is only as good as the information put into it. In most cases: garbage in, garbage out.
 
Bill H. Added Feb 16, 2018 - 5:10pm
 
Ian - One concern I do have is if the operating system will be based on Windows. If so, could you imagine what would happen with all of these vehicles if Microsoft decided to download Windows Updates during rush hour?
CRM 114 Added Feb 16, 2018 - 8:33pm
There are several problems with AI. The first is that to program it correctly, you need a subject matter expert, a systems expert, and a programming expert. People with all three abilities are rarer than rocking horse droppings, and a lot gets lost in translation between the two, three, or more people needed. Secondly, the ultimate objective is rarely optimum efficacy. The developing company looks to make a profit, any Government employees have to fit with policy, and far too many are more concerned with their reputation than the output. Next, political correctness has to be factored in. This inevitably rejects certain methods and possible outcomes. Lastly, AI systems are notoriously bad at dealing with the unexpected. They don't anticipate it, having not had the broad-based education of a good human expert, and it's very difficult to program any value-based weighting to come up with what a human would regard as the 'correct' decision where there are competing 'ultimate aims', especially in a time-limited situation and where lives are at stake.
Doug Plumb Added Feb 17, 2018 - 7:06am
I read on a really exhaustive AI website that you need three things now to do serious work in AI: (1) a knowledge of Shannon, (2) Kant, and (3) linear algebra. I don't know if it's true, but the site seemed credible. I know there are some very good books on the topic; maybe I'll read one one day.
Has anyone on here worked in AI?
John Minehan Added Feb 17, 2018 - 7:45am
"When I was involved in horse racing as a member of an ownership syndicate, because of my computing background I made several attempts to write a program that analysed form. They were reasonably successful but weren't going to make me rich. Watching for the human aberration was the key: a trainer not using one of his regular jockeys, or sending a single horse to a track at the other end of the country for a race bearing only a modest prize, is a signal that a betting coup is being organised. (Grandad Redfern, a bookie in the days before computers, taught me these things.)"
 
But that is exactly the kind of "enough data" I'm talking about.
 
Easy enough to comb those kind of details out of a data set. 
 
An important part of "Big Data" is finding those kinds of "tells" in a data set (and teaching the machine to find others).
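John's point, that once a "tell" can be expressed over a data set it is easy to comb out automatically, can be sketched with the simplest statistical screen. The data and threshold below are invented for illustration:

```python
# A toy anomaly screen: flag entries that deviate sharply from the norm.
# Here, a runner's travel distances, where one long trip for a small race
# is the kind of "tell" a machine can be taught to surface.
from statistics import mean, stdev

def find_tells(values, threshold=2.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    m, s = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - m) > threshold * s]

# Typical trips of 20-40 miles, then one 300-mile journey:
travel_miles = [25, 30, 22, 35, 28, 300, 31, 27]
print(find_tells(travel_miles))  # the long trip stands out: [5]
```

Of course, deciding that travel distance is worth screening at all is still a human judgment, which is the tension running through this thread.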
Eileen de Bruin Added Feb 17, 2018 - 9:00am
Robert and Ian,
 
AI is merely programmed avenues, irrespective of the fact that the word “intelligence” is used and assumed to be more than it is, indeed as Ian says. 
 
Robert, any database or knowledge base upon which any neural network will have to be based is delimiting. This is irrespective of how its self-teaching attribute is set up.
 
To assume otherwise is indeed to make an ASS out of U and ME.
Ian Thorpe Added Feb 17, 2018 - 11:22am
Edgeucation Newmedia: Absolutely correct. GIGO applies to AI just as it did way back when I started my career in the 1960s.
Ian Thorpe Added Feb 17, 2018 - 11:27am
Bill H: LOL yes, there are things like that to be taken into consideration.
Ian Thorpe Added Feb 17, 2018 - 12:06pm
CRM114, computers and AI are great at dealing with specifics; where there is vagueness, where interpretation is needed or things have to be put into context, they are quickly in trouble.
To the best of my knowledge the Turing test (to ascertain whether a computer can hold a five-minute dialogue with a human without the human realising their conversation partner is not human) has been passed only once, by a chatbot called Eugene Goostman in 2014. The validity of the result should be questioned however, because the judges were told they were communicating with a 13-year-old boy from Ukraine.
Here's a snippet of conversation with Eugene Goostman:


[16:11:25] Judge: what music do you like?
[16:11:32] Eugene: I like Eminem – because he doesn't have a thing to do with "polite-correctness".
[16:11:46] Judge: OMG i love eminem, he's my favourite artist!
[16:12:00] Eugene: I wonder what Eminem's nick mean. I associate it only with a word "enema". But he is a cool guy, I like his "The Real Slim Shady".
[16:12:55] Judge: i know what his nick name means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.
[16:13:06] Eugene: Eminem is a famous crapper. No, raper... ripper... Yeah, I mean - rapper. I like his "Stan" and "The Real Slim Shady".
[16:13:32] Judge: I like "Hailies song"
[16:13:46] Eugene: Interesting tastes. I like science fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?
[16:14:22] Judge: Epsom, home of the derby. yourself?
[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.
[16:14:41] Judge: i shopuld talk with him sometime
[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.
[16:15:04] Judge: that's very rude?
[16:15:09] Eugene: Ummm... Frankly, I didn't get your question:-(
[16:15:49] Judge: "Ummm" isnt a word. i think you need typing lessons
[16:15:58] Eugene: My thoughts are same. By the way, I still don't know your specialty - or, possibly, I've missed it?


 Learn more about the chatbot. Test chatbot at Princeton AI
Prof Claudewell Thomas Added Feb 17, 2018 - 12:34pm
Ian,Doug,Eileen:
Just yesterday I participated in a conference call with Peter Salovey, the president of Yale University. I learned that university endowments will now be taxed at horrendous rates, Yale in excess of $35 million. Since he had been at Davos and reported on the loss of U.S. esteem, and had partially answered my previously submitted question about "firstist" notions being contrary to the assemblers' view of the need for global solutions, I asked about digital communication and social media lending themselves to simplistic, unworkable solutions and the facilitation of radicalization and revolt. The answer was that the conference educators almost unanimously agreed that the thread of intellectual discourse could only be preserved by the transmission of "content", and that somehow the universities had not only to practice that in their core facilities, now extended throughout the world, but to help educators at all levels do the same. Polarization, radicalization, sedition and revolt seem to be the outcomes of widget training. I assume that he was talking about the Socratic method. Given that the universities are being subjected to enormous financial burden and that the free press is under attack worldwide, it seems to me that the 4th and 5th estates alone cannot save us. AI and the magic of the algorithmic age can fit into the Obama-recommended junior colleges (violently opposed by the right) but they cannot answer the need for the transmission of human complexity. Someone mentioned the need to deal with the unexpected. I suspect that the ability to do that lies in that complexity. Apologies for the rant!
Ian Thorpe Added Feb 17, 2018 - 12:47pm
John M.
It's quite easy for a human to extract such data from publicly available information; however, trying to quantify it in order to correctly weight a program for analysing the form of a racehorse is rather difficult.
And those were very simple examples. More difficult to deal with would be the case of one of the mares in a race coming into season. When a mare is in season it is obvious, and no responsible trainer would send a mare in that condition to a racecourse (and she wouldn't win anyway).
But nature being much smarter than us or any AI program, stallions are aware before we are that a mare is almost ready to breed. So any 'entire' in a race with such a mare would not have its mind on running and might even fall over its own dick during the race.
And then there is how the horse feels on the day; blood tests, saliva tests and body temperature can tell us a lot, but horses are sentient creatures, not machines, and they can't tell us how they feel.
The horse's regular handler, however, can through familiarity and experience understand that things are not quite right.
AI has been used without much success to predict earthquakes and volcanic eruptions; a better indicator is when the larger wildlife starts leaving the area. There may be logical explanations for this, such as disruptions in magnetic fields, but it seems nature is more attuned to the environment than technology.
Robert Wendell Added Feb 17, 2018 - 1:03pm
"Robert, any database or knowledge base upon which any neural network will have to be based is delimiting. This is irrespective of how its self-teaching attribute is set up." - Eileen
 
This is true in practice, Eileen. However, your statement seems to ignore that the learning process in neural networks entails the introduction of new data, even though it's not necessarily in digital form. Neural networks can electrically process sensory analogs.
 
It's far cruder than what human neurological networks can do, but your eyes and ears and every other sensory input organ turns light, sound, etc. into electrochemical analogs that your brain processes. This is perpetually feeding us new data. To pretend that a database is the end-all and be-all of what neural networks can deal with is to miss a very fundamental piece of what AI is and aims to be.
 
The whole purpose of neural networks is to transcend what digital process and human-authored algorithms can do. This discussion always seems to end up saying or at least implying things that completely ignore this.
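Robert's distinction, a system whose behaviour is adjusted by feedback on its output rather than by a hand-written decision rule, can be illustrated with the simplest possible case: a single perceptron. This is a toy sketch, not a claim about what full-scale neural networks can do.

```python
# A toy single-layer perceptron learning the logical AND function.
# The weights are never written by hand; they are nudged by the error
# between the output and the target (feedback on the output).

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # two input weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out       # feedback on the output
            w[0] += lr * err * x1    # nudge the weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # matches AND: [0, 0, 0, 1]
```

Whether adjusting numeric weights like this deserves the word "learning" is, of course, exactly what Ian and Robert are arguing about.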
Ian Thorpe Added Feb 17, 2018 - 1:07pm
Eileen, the self-programming abilities of AI systems are based on object-oriented programming (which has been around quite a long time). For people not sure what object-oriented programming is, the link goes to an Oracle Corp primer.
With the tasks we are likely to require a system to perform being dealt with in pre-written program modules, it becomes a lot easier to write software that enables a computer to assemble the necessary modules without direct human intervention. Though the machines can be said to be programming themselves, they are not actually learning, only doing what they are programmed to do. Just as Google's claimed success in image recognition depends on a computer being 'shown' many images of an object while software analyses certain patterns of proportion, colour and shape.
And then we get to the data. Software does not have the ability to extract meaning from words and, as we saw with Eugene Goostman above, is totally defeated by metaphors and colloquialisms; the only way it can discern what may be valid is through keyword matching.
And they want us to believe this is intelligence?
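The keyword-matching point, which echoes the definition-lookup example at the top of the thread, is easy to demonstrate. The dictionary entries below are invented for illustration:

```python
# A toy definition lookup: the program answers instantly by pure string
# matching, with no grasp of what any of the words mean.

definitions = {
    "algorithm": "a finite sequence of instructions for solving a problem",
    "metaphor": "a figure of speech applying a word to something it does not literally denote",
}

def define(word):
    # Keyword matching only: anything outside the table simply fails.
    return definitions.get(word.lower().strip(), "no match found")

print(define("Algorithm"))
print(define("irony"))  # no entry, so no answer, however obvious to a human
```

The program "knows" the definition of a metaphor yet would be defeated by the mildest metaphor in actual use, which is the asymmetry Ian is pointing at.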
Ian Thorpe Added Feb 17, 2018 - 1:17pm
Prof. Claudewell, rant away. I agree that the simplistic, one-size-fits-all global solutions fall way short of what is needed to deal with the increasingly complex problems faced by humans. One of my main worries about AI is that many of its advocates appear to be rather obsessive, to the extent that they have missed out on a lot of life experience. How do such people program empathy, compassion, esprit de corps and other very human qualities into their non-human systems, having never themselves experienced such things?
There is a great danger in getting over-excited about technological advance, which may be 'cool' but is not necessarily beneficial to human communities.
mark henry smith Added Feb 17, 2018 - 2:34pm
Slam AI by Marko (C) 2018
 
What's intelligence got to do with anything?
Would an intelligent society have elected Trump?
Hitler? Pol Pot? Could a computer do better?
Let's give it a shot.
And driving. Have you ever driven around here?
They drive so fast and honk at anyone doing the speed limit.
Scoot around them on the curb.
I can't imagine a computer doing something that absurd.
And investing, it's all made up anyway.
The only trick is be faster than the other guy
in milliseconds fortunes can be made.
Why not let computers make that trade?
And don't tell me machines can't do some things better,
cutting patterns, kneading dough, blowing up bombs,
sniffing out drugs, cleaning rugs,
investigating ancient tombs.
But I'll tell ya one area where measuring tendencies
will never get the best of human and animal ingenuity
because as said above our intelligence makes random connections
based on what we ate for breakfast,
the weather, a tickling feather,
a moment of reflections.
And I don't care how the industrialists
the scientists, the mechanists research and scheme,
they'll never make anything that can have
as much appeal to a living being's spirit  
as the awesomeness of a vivid dream.
 
Love to you, Ian et al. And love is hard to fake too. Real love.
I had a dream last night that I was in sort of my old apartment at the church with my old babe and my older brother, I was drinking beer and one was moldy when I took the cap off, so I went back for another, and got a can, then my ex's oldest friend said she treated me like dirt, and we said it at the same time and then my brother was standing there with a handful of beer bottle caps with no place to throw them out, and he was very distressed about it, after which I climbed down a traffic light beneath a sheer cliff on bungie cords. And in the dream I tasted beer, all kinds of colors abounded. It was fantastic. The only thing missing was popcorn.
 
When they make a computer that can dream like that, we're toast, we writers and artists.        
Ian Thorpe Added Feb 17, 2018 - 2:45pm
Marko, "And love is hard to fake too": Yeah well we can get onto the allure or otherwise of sex robots another time.
Good poem, I love the last five lines. And I pity anyone who has never had a dream so vivid the memory of it seems as real as a memory of any conscious experience.
Robert Wendell Added Feb 18, 2018 - 12:15am
What does object-oriented programming have to do with neural networks except in the programming that simulates them? That doesn't affect their essential nature or what they do, which has nothing intrinsically digital about it at all.
 
 
Music's essential nature is not digital either, but most of it is now distributed via digital media. We can simulate sound digitally, but that doesn't change the essential nature of what music is or does.
 
Likewise, how anyone goes about programming simulated neural networks says nothing about what's going to happen when the neural net starts to react to feedback on its output. Besides, exploration of direct physical implementation of non-simulated neural nets is increasing as we write.
Ian Thorpe Added Feb 18, 2018 - 12:45pm
Robert, you're the only one here talking about neural networks. I certainly, and possibly most other people in the thread, understand that "neural networks" is just a buzzphrase beloved of Silicon Valley nerds. "Wow, computers that work like human brains only without emotions, how kool is that?" they witter, so obsessed with technology that they are completely unaware that studies carried out in respected universities show computer networks are nothing at all like brains (not even remotely approaching even a lower mammal's brain in complexity) and probably never will be.

And WTF are you blethering on about digital music for, when you obviously don't have a clue how digital information is transformed into an analogue sound wave, or for that matter a picture, or an image from a medical scanner? I did this kind of stuff for a living for years. You will find that at the basis of all these processes is an ALGORITHM, the Fourier Transform, which converts a signal between its waveform and the frequencies that make it up. The source might be the light reflected by an image during scanning, electrical impulses sent along a telephone line, digital music read from a storage device and passed through a digital-to-analogue converter that produces waves in the audible frequency range, or an MRI scanner bombarding a target object with electromagnetic radiation to excite the atoms and capture the emitted energy, with each frequency assigned a numerical value that is then linked to a colour in the display.
It is all very clever stuff, but the 'clever' is designed and built into these machines by humans; they are not learning or thinking, they are doing what they are designed to do. There is no 'intelligence' (artificial or otherwise) in them. When the power is turned off they are just inert matter, devoid of life.
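The transform Ian names can be written out in a few lines to make his point concrete: the computation is entirely mechanical. (A sketch using only the standard library; real systems use the much faster FFT, but the underlying idea is the same.)

```python
# A minimal discrete Fourier transform (DFT): it maps a list of sampled
# amplitudes to per-frequency values with no understanding anywhere in
# the process.
import cmath
import math

def dft(samples):
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# 8 samples of a pure tone completing exactly one cycle over the window:
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
spectrum = dft(samples)

# Energy shows up only in bins 1 and 7 (the tone and its mirror image):
print([round(abs(x), 6) for x in spectrum])
```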
 
My advice to you is stay away from nerd websites, where you will see many crazy claims about sex robots that are indistinguishable from real lovers, autonomous cars that can navigate in heavy rain, fog or snow, or a computer that can pass a Turing test without the judges having to be told they are talking to an East European kid whose English isn't great and who is not well informed on popular culture (see above).
Bill H. Added Feb 18, 2018 - 1:29pm
 
I have never found digital music of any type to be pleasing at all. It seems to be getting even worse as time goes on. CDs were too "edgy" or "crisp". SACDs were better, but still lacked depth. MP3s are terrible. I have a large vinyl collection, but also have some of my favorite albums on CD, SACD, and MP3 as well as vinyl. Nothing comes close to the vinyl, and vinyl is the only medium that seems able to present depth.
It almost seems that we are being "trained" to accept poorer-quality audio as time goes on. Cellular carriers seem to thrive on this fact, holding off on providing the extra bandwidth that would improve, or at least retain, good voice quality. It has been noted by some that people who spend a lot of time on cell phones and using digitized speech devices tend to alter their speech patterns in a more "digitized" fashion, as in the familiar "croaking" or "vocal fry", along with pronouncing certain words with more of a synthesized speech pattern. Words such as "shouldn't", "wouldn't", "couldn't", and "student" are good examples. One may notice that many people have altered the "unt" to a more emphasized "ent", which I found common on text-to-speech converters and vehicle navigation systems.
 
Robert Wendell Added Feb 18, 2018 - 2:30pm
"...studies carried out in respected universities show that computer networks are nothing at all like brains, (not even remotely approaching even a lower mammal's brain in complexity,) and probably never will be." - Ian
 
Never said current neural networks approximate the human brain. Never said they ever will, since that remains a very open question, with doubt being the smart bet, at least for now. It's just that the essence of AI is the attempt to get better at approximating how brains work, no matter whether it's just an insect brain. You don't do that with programmed algorithms. You guys are completely off point talking about AI as if it were centrally focused on human programming.
 
On the stuff about digital music, I was referring only to simulating analog with digital technology. For you to assume I know nothing about that is the height of arrogance. Having programmed whatever you've programmed in that regard is irrelevant. I worked in audio electronics for ten years. I know what digital-to-analog technology and vice versa is and the principles underlying it. As usual, you completely missed the point, as you have every other point I've posted here.
 
"A man convinced against his will is of the same opinion still."
Eileen de Bruin Added Feb 18, 2018 - 2:46pm
Ian, yes, I know Oracle and entity relationship modelling, having a BSc (Hons) in Business Information Systems as well as much practical experience. Experiential learning and defining entities is very important, but let us be clear that artificial intelligence is artificial, be it ever so logical and perhaps inclined to fewer prejudicial definitions.
 
And Mark Henry Smith is right, what the hell has intelligence got to do with balance, anyway?
 
 
Ian Thorpe Added Feb 18, 2018 - 3:08pm
Bill H, vinyl is making a comeback, I keep hearing. I'm sure that is not just down to nostalgia among people of our generation. MP3s sold commercially are lousy because they are compressed too much. It is probably, as you say, to enable more to be pushed down the line.
MP3s encoded at 256k are reasonable but lack depth, and digital music as a whole does not seem able to deliver the same experience as analogue. Nothing of course beats hearing music played live in a hall with good acoustics.
Thanks for the info about changing speech patterns, I'd wondered why millennials seem to speak strangely. I'll do a bit more digging on that.
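The trade-off behind the comments above is easy to sketch: digitising a signal means quantising it to a fixed number of levels, and fewer bits leave a larger error between the digital copy and the "analogue" original. A minimal Python illustration, using arbitrary example values (a 1 kHz sine at the 44.1 kHz CD rate):

```python
import math

def quantize(samples, bits):
    """Round each sample to the nearest of 2**(bits-1) levels in [-1, 1]."""
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in samples]

def error_rms(a, b):
    """Root-mean-square difference between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# One period of a 1 kHz sine wave sampled at 44.1 kHz (the CD rate).
rate = 44100
samples = [math.sin(2 * math.pi * 1000 * n / rate) for n in range(rate // 1000)]

# Coarser quantisation gives a larger error; each extra bit roughly halves it.
err8 = error_rms(samples, quantize(samples, 8))
err16 = error_rms(samples, quantize(samples, 16))
print(err8 > err16)  # True: the 16-bit copy is far closer to the original
```

This is only a sketch of quantisation error, not of the perceptual compression used by MP3, which discards content an encoder judges inaudible.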
Ian Thorpe Added Feb 18, 2018 - 3:15pm
Eileen, my question about AI has always been, "If it's artificial, can it really be called intelligence?"
Have you ever played around with Wolfram Alpha? Its designers describe it as a 'knowledge engine', and as such it is very good. It is easily confused by vagueness and ambiguity, however.
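The confusion such systems show when faced with vagueness has a simple root: lookup-style systems match strings, not meanings. A toy sketch (a few hypothetical hand-entered facts, nothing like the scale or sophistication of the real engine) shows the failure mode:

```python
# A toy "knowledge engine": exact-match lookup over hand-entered facts.
# (Hypothetical example; real knowledge engines are far richer, but the
# failure on vague phrasing is similar in spirit.)
facts = {
    "speed of light": "299,792,458 m/s",
    "boiling point of water": "100 \u00b0C at 1 atm",
}

def answer(query):
    # Retrieval, not understanding: the engine matches strings,
    # with no idea what any of the words mean.
    return facts.get(query.lower().strip(), "I don't understand the question.")

print(answer("Speed of light"))          # found by exact match
print(answer("how fast is light, ish?")) # vague phrasing defeats the lookup
```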
Robert Wendell Added Feb 18, 2018 - 3:17pm
Yes, Bill H., the high-end audiophiles still like vinyl. I've listened to the same master both ways. The digital-to-analogy conversion was incredibly better than any computer or typical consumer players can do. It was a specially designed system that focused on unwanted digital artifacts that typical distortion meters, etc. designed for analogy systems fail to detect.
 
Bottom line: It sounded closer to analog than any digital system I've ever heard, but still lost to exactly the same master distributed as the vinyl release. There was some subtle warmth in the vinyl reproduction that even the ultra-high-tech, cleaned-up, digital-to-analog converter couldn't duplicate.
Robert Wendell Added Feb 18, 2018 - 3:38pm
The spell checker here seems to keep changing "analog" to "analogy".
Robert Wendell Added Feb 18, 2018 - 3:39pm
Take a look at this article!
Ian Thorpe Added Feb 18, 2018 - 3:42pm
Robert, you now tell me you never said neural networks emulate our brains. So what am I missing in this statement from the top of the thread?
"Machines can now learn. They don't do this by inventing new algorithms, but by methods that crudely imitate the way our brains work. As brain research and neurology in general advance, the adverb "crudely" will become less and less applicable."

Let's see if I can disabuse you of a few fallacious notions and help you gain a broader view of the topic:


Luke Hewitt, a doctoral student at the MIT Department of Brain and Cognitive Sciences, is particularly concerned about the "unreasonable reputation" of neural networks. In a post at MIT's Thinking Machines blog, he argues that there are good reasons to be more skeptical.
 
Hewitt's central point is that by becoming proficient in a single task, it's very easy for a machine to seem generally intelligent, when that's not really the case.
"The ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world," Hewitt writes.


And what he calls 'human-level understanding of the world' is surely a rather verbose way of saying intelligence. In other words, Artificial Intelligence is not intelligence at all; it is task-oriented programming.
 
Elsewhere in his article Hewitt notes that machines, when learning to recognise shapes, need to be shown many times more images than a human would ever need. This backs up my contention that AI does not actually 'learn'; it is just a big storage and retrieval system.
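The 'storage and retrieval' view can be made concrete with the simplest learning method there is, nearest-neighbour classification: it stores every training example verbatim and, at query time, retrieves the label of the closest one. A toy Python sketch with made-up points and labels:

```python
# Nearest-neighbour "learning": store all examples, retrieve the closest.
# Toy 2-D points with made-up labels, purely for illustration.
training = [((1.0, 1.0), "circle"), ((1.2, 0.9), "circle"),
            ((5.0, 5.0), "square"), ((4.8, 5.2), "square")]

def classify(point):
    def dist2(a):
        # Squared Euclidean distance from a stored example to the query.
        return (a[0] - point[0]) ** 2 + (a[1] - point[1]) ** 2
    # No abstraction, no concept of "shape": just the nearest stored example.
    return min(training, key=lambda ex: dist2(ex[0]))[1]

print(classify((1.1, 1.1)))  # "circle" - nearest the stored circle examples
print(classify((5.1, 4.9)))  # "square"
```

Modern neural networks do more than this (they compress examples into weights), but the sketch shows why sample-hungry pattern matching differs from the handful of examples a human needs.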
 
 
Robert Wendell Added Feb 18, 2018 - 3:49pm
Congratulations on the subtlety of your verbal comprehension, Ian! What is it about the word "crudely" that you fail to understand?
Robert Wendell Added Feb 18, 2018 - 3:56pm
Are you being honest with yourself, Ian, when you try so hard to poke holes in any opinion that disagrees with yours? Do such weak and lethally stretched interpretations really make any sense to you? Feckless attempts at rebuttals based on such radically spun arguments come off as deliberate obfuscation rather than anything remotely like dialectic discourse that sincerely seeks truth. This is clearly a waste of time. Bye!
mark henry smith Added Feb 18, 2018 - 4:15pm
Thank you, Ian, for that excellent explanation of how analogue is transformed into digital. I took away that energy pulses corresponding closely to a known phenomenon are given arbitrary designations, and those designations are used to mimic the known uses of the natural phenomenon in an array of impulses, perhaps even moving beyond to impulses of a sort that do not occur naturally. A similar activity might be making new elements based on our understanding of chemistry. The reason analogue vinyl is making a comeback, in my opinion, is that some older listeners like the less chopped, rounder sound of analogue because it is what they were used to growing up. Modern kids, to a large extent, will have different tastes, since what we like can only be based on what we know.
 
Eileen is correct that balance is vital in creating a sustainable system, and if we place too much faith in computers, which shave slight bits off anything they try to mimic in nature, the bits might add up to something significant that we missed. We often forget that the natural systems we're trying to sustain developed over unfathomable time horizons, and the interactions happen in subtle ways that we might not yet have the instrumentation to measure. All computers can do, as stated above, is make calculations based on what we already know, or suspect. They cannot deduce why the information they are collecting might be important; that's for us to decide, or not. But a car can be driven effectively by a machine that registers where it is at all times, along with all objects it encounters within a certain perimeter. Potholes might be a huge problem for self-driving cars.
 
The reality of computer thought is that it is not thinking abstractly. It is refining the thoughts it is given at incredible speeds, up to the speed of light, and beyond if the expectations of quantum computers are realized. But those computers will only be able to measure and assess what we have been able to assess, though within the streams of data they produce might lie avenues to new assessments, such as the influence of dark matter. If it's here, it's there in the data somewhere.
Ian Thorpe Added Feb 19, 2018 - 3:29pm
MHS, actually Mark, the return to vinyl is, according to people in the music business I know, being led by younger music consumers. DJs, recording studios and such have always used analog recording and reproduction equipment and the retail consumers started wondering why their home systems don't sound as good.
The CD is 35 years old so most people listening to music today were not even born when digital music was first launched. I was working for Philips when the first production line was set up at a factory in Blackburn Lancashire, where BTW we know a lot about potholes according to The Beatles song "A Day In The Life."
Eileen de Bruin Added Feb 19, 2018 - 3:51pm
Wolfram Alpha...
Ian, no, I never worked with this, but on superficial evaluation it does use knowledge-based systems. As in, what we already know; so nothing new, but the access to all or many facts and facets can make it valuable in considerations. Considerations, ah, here is a thought.
 
To consider the unknown.
Eileen de Bruin Added Feb 19, 2018 - 3:54pm
Mark Henry, indeed calculations on what we already know.
Aha, herein lies the issue, as you already have eloquently pointed out.
 
So where do we go from here?
 
By realising that we do not, nor simply cannot, know everything...?
 
John Minehan Added Feb 19, 2018 - 4:26pm
". . . set up at a factory in Blackburn Lancashire, where BTW we know a lot about potholes according to The Beatles song 'target="_blank">A Day In The Life.'"
 
First thing I thought of too . . . . 
Ian Thorpe Added Feb 20, 2018 - 10:16am
Robert, there is nothing about the adverb "crudely" that I do not understand. My argument is that neural networks do not even crudely emulate a living brain. The next part of my argument is that the thread is not about neural networks and what they may be in 10, 20, or 50 years, or sometime in the far future, but about where artificial intelligence is right now.
And when talking about neural networks, which by definition are attempts to simulate the brain (the clue is in the first part of the name), you ought to be using the subjunctive mood: machines that think like people are something wished for or imagined. But as Katherine Otto says back up the thread, the gulf between the two is unbridgeable with current technology.
Ian Thorpe Added Feb 20, 2018 - 10:25am
Eileen, indeed, we cannot know everything. I would say that one of the insurmountable problems for those trying to build computer networks that learn like humans is the difficulty of digitizing the senses of taste and smell. We learn from everything we experience, and often it is the connections we make between information from those senses that influence decisions.
And there are other senses too: how would one even start to program a computer system to have a sense of humour?
Ian Thorpe Added Feb 20, 2018 - 10:26am
John M, can't say I ever counted them myself, but a couple of the deeper ones trashed tyres on my cars.
mark henry smith Added Feb 20, 2018 - 12:02pm
It appears from what I'm hearing from my followers around the world, that roads everywhere are becoming holy sites.
 
With the streaming of music now, it's easy for kids to return to analogue and hear the difference in sound quality, since the music I'm hearing on my smart phone may or may not be digitised, I presume.
 
Eileen, where we go from here is forward and backward at the same time. That's what the next decade will make clear: this headlong rush towards new technological solutions to our problems will only produce more intractable problems, as the automobile, the truck, and the internal combustion engine did. The concepts of ride-sharing, less private car ownership, and electric vehicles are great, in that pollution becomes concentrated at the source rather than diffuse, but they don't solve the problem of road building, the entire infrastructure needed to keep cars and trucks running smoothly to where we want them to go. Bikes, with battery assist, will become much more common. Drones that don't need roads to get materials to consumers will become common. But the system we have in place, trains, trucking, stores, will remain dominant.
 
The big step in medicine will be replacing drug therapy, which messes up all of the natural balance in the body, with electrical stimulation to shut off and excite areas of the brain guided by computer imaging. In fighting cancer, cells from the person's body will be programmed to attack cancer cells by computers in very quick processes. We will learn how to make new bones grow, to regrow lost limbs, eyes, any body part. We will be able to have computers reform our faces to make us look exactly how we want to look. Computers will allow every machine in the world to be self-guiding if the parameters of the activity are known.
 
Quantum computers will allow us to map weather patterns so accurately that we will know down to the minute when precipitation will start, when an earthquake will happen, when a volcano will explode. Systems for purifying water, air, anything, will be refined to allow us to travel from Earth to Mars, taking basic building blocks that can be put together by computers to begin the development of a Martian atmosphere, and in the meantime create an environment where life can thrive on that planet, slowly expanding the sealed vessel outward. The trips will be one-way at first, bringing more and more people, animals and supplies, and resources will be found on Mars to speed development. All possible because of computing power. Dark energy, spooky forces, more and more secrets of nature will be unveiled. If we make it that far.
