
17 February 2011

Comments

I'm troubled by this post, Bill. I've been a subscriber to your newsletter for several years and I read it regularly, mainly for the wise insights I expect to find there.

I was shocked to read this entry, though. (Perhaps you might consider auditioning for Ricky Gervais' show?)

First though, full disclosure. I am one of those rather universally disparaged geeks; I actively make my living writing software for clients who value my services enough to support me at a reasonable standard of living. And I'm a veteran (in many ways): I've been doing this software gig for over forty years, retreading myself at an increasing rate.

I have had to make it my business to at least understand, if not master, the foundations on which my application software must rest.

I have to tell you unequivocally, Bill, that you have missed the point of the entire Jeopardy demonstration.

If the show that you saw was simply about encyclopedic recall, then I would be the first to agree with you that what we saw was an exercise in excess and futility. In fact, when I first learned of the proposed show last week on Nova, that very thought was the first thing that came into my mind.

But the salient point is this: the software that the IBM researchers built demonstrated not intelligence, but a form of semantic understanding. We have never before seen anything made of silicon and steel that could show such a capability. This product shows that software alone can be built that will enable a system with such encyclopedic recall to _make sense_ of human language. This is the only significance of the exercise. But it is truly a profound one. It likely will, in our own lifetimes, do away with the decades-old complaint that 'the computer did what I told it to do, not what I wanted it to do.' To quote the Vice President, 'This is a big f****** deal!'

BTW, I concur (and have been agreeing with your point for thirty years in manufacturing companies) about how management is swayed so easily by things that they can count. But that's just accounting. It's not management.

Doug,

My understanding of IBM’s “accomplishment” with Watson is like this:

Existing applications are very good at:

Ask: What is fact X?
Computer: Go get fact X and give it to user

Watson, however, can do:

Ask: What is answer to question giving only clues?
Computer: Follow clues to get Fact A, leading to Fact B, leading to Fact C, leading to Fact n … ultimately to Fact X … give Fact X to user as answer to question.
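The chained retrieval Bill describes above can be sketched in a few lines. This is a toy illustration of the distinction (direct lookup versus following clues), not IBM's actual architecture; the fact store and its contents are invented:

```python
# Hypothetical illustration of chained retrieval: each clue points to
# another fact until a concrete answer (Fact X) is reached. Toy sketch
# only -- not Watson's actual architecture.

facts = {
    "clue": "fact_a",
    "fact_a": "fact_b",
    "fact_b": "fact_x",
    "fact_x": "42",  # the answer must already exist in the store
}

def direct_lookup(key):
    """Traditional query: ask for Fact X by name."""
    return facts.get(key)

def follow_clues(start, max_hops=10):
    """Chained lookup: follow clue -> Fact A -> Fact B -> ... -> Fact X."""
    key = start
    for _ in range(max_hops):
        value = facts.get(key)
        if value is None or value not in facts:
            return value  # terminal value: the answer, or nothing at all
        key = value
    return None

print(follow_clues("clue"))  # prints "42"
```

Note that `follow_clues` only ever returns something that was already in the store, which is exactly the limitation Bill raises next.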

That is very good. However, the point of my post is that Fact X must be known if the computer is to provide answers. Watson is no more capable of determining the answer when Fact X has not previously been defined than the first abacus was. In far too many cases Fact X is not only unknown, it is unknowable. IBM and the enterprise software community seem to be under the illusion that every aspect of business can ultimately be quantified, that the task of management can eventually be boiled down to logic and data, which is an absurd proposition. The effect is that great weight is given to inputs to decisions where Fact X is known, and little weight is given to inputs that cannot be quantified. Watson is dangerous to the extent it convinces management that it will inherently improve their decision making, when it offers nothing to give weight to that which has not already been defined.

Bill

Bill, so you don't see a potential future application as a diagnostic assistant in the medical field (for example)? AI has failed at this for some time, but here is an approach that might have a chance. Doctors make errors all the time and are constrained in their answers by their particular expertise and breadth of experience. A patient might begin by stating symptoms. The diagnostician then must ask questions and use the answers to differentially narrow the path to a more limited set of ailments. At some point tests will need to be performed to help with the differential diagnosis, but which are the critical ones? The AI assistant could be kept current with medical advances more readily than a human. Watson "learned" answers by "listening" to answers in the setup games. In the competition it was required to identify the most important parts of the clues in order to focus on potential answers. Sorry, but I see this as a small but important step forward in applying computing power to real, important problem solving. I'm with Doug on this one.
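The narrowing process described above (ask a question, use the answer to eliminate candidates) can be sketched as a simple filter. The ailments and symptoms here are invented for illustration; real diagnostic systems are vastly more nuanced:

```python
# Toy differential-diagnosis filter: narrow candidate ailments as the
# patient answers yes/no symptom questions. All data is invented.

candidates = {
    "flu":   {"fever", "cough", "aches"},
    "cold":  {"cough", "congestion"},
    "strep": {"fever", "sore_throat"},
}

def narrow(candidates, symptom, present):
    """Keep only ailments consistent with one yes/no symptom answer."""
    return {name: syms for name, syms in candidates.items()
            if (symptom in syms) == present}

remaining = narrow(candidates, "fever", present=True)   # flu, strep remain
remaining = narrow(remaining, "cough", present=False)   # strep remains
print(sorted(remaining))  # prints ['strep']
```

Which test to order next is just whichever symptom best splits the remaining candidates, which is where an assistant with broader recall than any one clinician could help.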

Bill,

Watson appears to be nothing more than a super-advanced search algorithm (actually multiple algorithms) that runs at lightning speed. There's no deductive reasoning... it simply helps find information faster and more intelligently. Same stuff Google engineers work on day in and day out.

I think if you carefully read the article, you will notice they use the word "help" several times when describing Watson's potential applications. It won't "replace" anything, especially a human's ability to think and reason. It'll simply help humans to make rational decisions more consistently by presenting more information almost instantaneously.

I have to agree with Doug...you are missing the point on this one.


Fellas,

I don't presume to know whether this will have some application somewhere. It may well facilitate medical analysis or be the super search engine driver. I specifically wrote about the use of this as an enterprise tool. The quote was from the IBM executive in charge of such applications. I take issue with this as a way of turbo-charging ERP, which is already relied on far too much and leads to poor enterprise management.

All this talk is making me very nervous! We went through a Lean Kaizen event last month and discovered that the root cause of our problem is that we have no control over capacity planning for our machine shop. I suggested using simple forecasting software where you input your current capacity and the shipping schedule; then you can see when you need to move shipment dates, increase capacity, or purchase parts from outside vendors. Our current ERP/MRP system is effectively running on manual: we do all of our planning by hand, and we do no capacity planning at all.
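The simple check Jim describes (compare the shipping schedule against available capacity, week by week) could be sketched like this. The numbers and field names are made up for illustration:

```python
# Toy capacity check: flag weeks where scheduled machine-shop hours
# exceed available capacity. All numbers are illustrative.

weekly_capacity_hours = 400  # assumed shop capacity per week

# shipping schedule: week number -> hours of machine time required
schedule = {1: 350, 2: 450, 3: 390, 4: 520}

def find_overloads(schedule, capacity):
    """Return weeks where demand exceeds capacity, with the shortfall."""
    return {week: hours - capacity
            for week, hours in sorted(schedule.items())
            if hours > capacity}

for week, shortfall in find_overloads(schedule, weekly_capacity_hours).items():
    print(f"Week {week}: short {shortfall} hours "
          f"(move shipments, add capacity, or outsource)")
```

Even a spreadsheet version of this beats no capacity planning at all, and it keeps the logic visible instead of buried in an ERP module.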

Next week we have some software people coming in to show us how to fully utilize our current ERP/MRP system. I'm scared about the prospect of letting a computer tell us how to run the company. My experience with "most" software programs is that there are constraints on how they operate. And "most" software people do not listen to what is actually needed; they are always trying to figure out how to fit what you need into the existing framework of the program. As evidenced by Doug's comment: "I have had to make it my business to at least understand, if not master, the foundations on which my application software must rest."


Good quote Jim. We too are implementing a total ERP solution. I'm worried it will just cause more layers over bad processes, but it isn't my call so I'm along for the ride!

People are missing the point on Watson. The real thing to take away is not that it could retrieve an answer. Rather it was that it could decode standard language (not computer speak) and understand what was being asked. The fact that it did it quickly enough to find the answer and beat the Jeopardy contestants to the buzzer is basically just showing off.

Plain language communication with computers will be as revolutionary as the GUI was 30 years ago.

People should really see the Nova program about Watson for more insight.

Full disclosure, I am an IBMer but a hardware not a software guy and not at all involved with this project.

Jim,

Thanks for weighing in on this. I am old enough to remember the pre-GUI days, and if Watson represents a comparable improvement in usability and user productivity, it will certainly have earned its keep.

Bill,

Generally a big fan, but I must echo others that you're a bit off on the point here.

Trivia recall is what makes the game show hard for humans. Humans are bad at recalling things.

Computers are excellent at recalling things. Knowledge of this sort of trivia would be... trivial to program into a computer. Hook it up to Wikipedia and take an early lunch.

Language processing is what makes the show hard for computers. Humans are great at language processing. It's pretty much our single greatest cognitive achievement actually. Computers suck at it. Irregular constructions, puns, metaphors and all that are hard.

Computers that can better process language are actually a pretty big deal. Computers are still pretty hard for humans to deal with. IBM (and Google) are making big progress there and I think it's safe to assume that business will benefit in the end from developments in this area.
