Artificial Intelligence (Books): items 61-80 of 100

61. Empirical Methods for Artificial -- $51.94
62. Artificial Intelligence and Literary -- $32.35
63. Aaron's Code: Meta-Art, Artificial -- $44.05
64. Mind, Machine, and Metaphor: An
65. Safe and Sound: Artificial Intelligence -- $33.49
66. Collective Intelligence in Action -- $28.21
67. The Philosophy of Artificial Intelligence -- $17.98
68. Computational Intelligence: Principles, -- $79.26
69. Readings in Distributed Artificial
70. Artificial Dreams: The Quest for -- $54.28
71. Universal Artificial Intelligence: -- $72.71
72. Artificial General Intelligence -- $72.71
73. Artificial Intelligence: A Beginner's -- $6.97
74. Artificial Intelligence and Simulation. -- $104.84
75. Law, Computer Science, and Artificial -- $12.95
76. Adaptation in Natural and Artificial -- $25.65
77. Game Development Essentials: Game -- $37.00
78. Artificial Intelligence (SIE): -- $27.76
79. Adaptive Business Intelligence -- $48.31
80. Autonomy Oriented Computing: From -- $99.00

61. Empirical Methods for Artificial Intelligence (Bradford Books)
by Paul R. Cohen
Hardcover: 421 Pages (1995-08-03)
list price: US$85.00 -- used & new: US$51.94
(price subject to change: see help)
Asin: 0262032252
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description
Computer science and artificial intelligence in particular have no curriculum in research methods, as other sciences do. This book presents empirical methods for studying complex computer programs: exploratory tools to help find patterns in data, experiment designs and hypothesis-testing tools to help data speak convincingly, and modeling tools to help explain data. Although many of these techniques are statistical, the book discusses statistics in the context of the broader empirical enterprise. The first three chapters introduce empirical questions, exploratory data analysis, and experiment design. The blunt interrogation of statistical hypothesis testing is postponed until chapters 4 and 5, which present classical parametric methods and computer-intensive (Monte Carlo) resampling methods, respectively. This is one of few books to present these new, flexible resampling techniques in an accurate, accessible manner.
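
The computer-intensive resampling idea mentioned above can be sketched in a few lines. The following is a minimal, hypothetical bootstrap comparison of two programs' run times, written in Python with only the standard library; it is not code from the book (whose CLASP package is Common Lisp), and the timing numbers are invented.

    # Minimal bootstrap sketch: estimate a confidence interval for the
    # difference in mean runtime between two programs A and B.
    import random

    # Hypothetical measurements (seconds); in a real experiment these come from trials.
    times_a = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7]
    times_b = [3.6, 3.4, 3.9, 3.1, 3.8, 3.5, 3.7, 3.3]

    def mean(xs):
        return sum(xs) / len(xs)

    observed_diff = mean(times_b) - mean(times_a)

    def bootstrap_diffs(a, b, n_resamples=10000, seed=0):
        """Resample each group with replacement and recompute the mean difference."""
        rng = random.Random(seed)
        diffs = []
        for _ in range(n_resamples):
            ra = [rng.choice(a) for _ in a]
            rb = [rng.choice(b) for _ in b]
            diffs.append(mean(rb) - mean(ra))
        return sorted(diffs)

    diffs = bootstrap_diffs(times_a, times_b)
    lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
    print(f"observed difference: {observed_diff:.3f}s, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")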

Much of the book is devoted to research strategies and tactics, introducing new methods in the context of case studies. Chapter 6 covers performance assessment, chapter 7 shows how to identify interactions and dependencies among several factors that explain performance, and chapter 8 discusses predictive models of programs, including causal models. The final chapter asks what counts as a theory in AI, and how empirical methods -- which deal with specific systems -- can foster general theories.

Mathematical details are confined to appendixes and no prior knowledge of statistics or probability theory is assumed. All of the examples can be analyzed by hand or with commercially available statistics packages. The Common Lisp Analytical Statistics Package (CLASP), developed in the author's laboratory for Unix and Macintosh computers, is available from The MIT Press.


Customer Reviews (1)

5.0 out of 5 stars Excellent introduction to experimental science
The title of the book could easily have been "Empirical Methods for Computer Science" or even "Empirical Methods for Science."

The goal of the book is to give a gentle but solid introduction to empirical research, experimental science, and the interpretation of data.

The first four chapters are really a must-read for anyone who is interested in empirical methods. In the first chapter, "Empirical Research", the author lays the foundations. Chapter two, "Exploratory Data Analysis", starts with the fundamentals of statistics of one variable and introduces time series and execution traces. I really loved the "Fitting functions to Data in Scatterplots" subchapter. The introduction continues in the third chapter, "Basic Issues in Experimental Design", where we learn about control, spurious effects, sampling bias, dependent variables and pilot experiments. The author gives some nice advice here. The fourth chapter, "Hypothesis Testing and Estimation", concludes the introductory part.

Chapters 5-9 are a little more advanced and somewhat biased towards Computer Science and Artificial Intelligence, but could be an interesting and refreshing read for anyone who wants to get a solid foundation in experiment design, execution, data collection and interpretation.

The author uses experimental data generated by a system called "Phoenix" (which he codeveloped) as the main example in the book.


62. Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, A Storytelling Machine
by Selmer Bringsjord, David Ferrucci
Paperback: 264 Pages (1999-09-01)
list price: US$36.00 -- used & new: US$32.35
(price subject to change: see help)
Asin: 0805819878
Average Customer Review: 3.5 out of 5 stars
Editorial Review

Product Description
Is human creativity a wall that AI can never scale? Many people are happy to admit that experts in many domains can be matched by either knowledge-based or sub-symbolic systems, but even some AI researchers harbor the hope that when it comes to feats of sheer brilliance, mind over machine is an unalterable fact. In this book, the authors push AI toward a time when machines can autonomously write not just humdrum stories of the sort seen for years in AI, but first-rate fiction thought to be the province of human genius. It reports on five years of effort devoted to building a story generator--the BRUTUS.1 system.

This book was written for three general reasons. The first theoretical reason for investing time, money, and talent in the quest for a truly creative machine is to work toward an answer to the question of whether we ourselves are machines. The second theoretical reason is to silence those who believe that logic is forever closed off from the emotional world of creativity. The practical rationale for this endeavor, and the third reason, is that machines able to work alongside humans in arenas calling for creativity will have incalculable worth.

Customer Reviews (7)

2.0 out of 5 stars A few useful ideas, lots of hype
This book discusses the issue of computer programs that can generate stories, with particular emphasis on a program which the authors claim can do so.

The first part of the book discusses philosophical issues regarding artificial intelligence, in an attempt to answer the question: can a computer generate stories which are indistinguishable from human-written stories?

One can see why the authors make modest claims here: if one examines carefully the algorithm presented in the second half of the book, one notices that at certain strategic points the program needs "help," i.e. human intervention. So, humans still have to do the hard part; without this, the program fails. The program can only do the "easy" parts.

Notwithstanding this and the hypey technobabble that permeates it, the book does present useful research and references on the parts of storytelling that can be automated at the present time, which are significant.

From the back cover: "Computers can play superlative chess, diagnose disease, guide spacecraft, power robots that can deliver mail and (soon) clean houses, etcetera. But can computers 'originate' anything? Can computers be genuinely creative? This is the toughest question that those sanguine about AI face. This book reports on a multi-year attempt to engineer a blueprint (BRUTUS) for a computer system that can hold its own against literarily creative humans, and on the first incarnation of that blueprint (BRUTUS.1)."

5.0 out of 5 stars A prelude to automated novel writing.
Machines that can summarize documents are commonplace, as well as machines that can extract words and lines from paragraphs and rearrange them to possibly form something useful or interesting. But can a machine write a short story, or even a full-fledged novel with complex characters and themes? That such ability is not only possible for machines but is actually present in some of them is the subject of this book, and if one ignores the philosophical rhetoric on the "strong AI" problem, the authors give a fine overview of their project to create a "story-telling machine", which they have designated as BRUTUS.

The authors claim that their book "marks the marriage of logic and creativity", a claim that will raise the eyebrows of many a philosopher, literary critic, or novelist. But the intuitive dissonance that many in these professions may have regarding the reduction of the free-play of the imagination to the rigors and organization of logic should not dissuade others from believing that such a reduction is not only possible, but has actually been accomplished. Ironically, the authors early in the book assert that there are no examples of machine creativity in the world. Of course, this assertion depends on one's notion of what creativity is, and to what degree this creativity may have depended on the assistance of machines. Machines that create new mathematics, scientific theories, music, or novels do not yet exist, the authors claim, but they do take pains to express their optimism regarding future developments in "machine creativity".

The authors are incorrect in their belief that there are no machines that can currently develop new and interesting results in a wide variety of different domains. In addition, their notion of intelligence is too anthropomorphic, too tied to what human intelligence is, or is not (and one could argue that machine intelligence is even better understood than human intelligence). The authors though have written a book that gives the reader much insight into what is involved in building creative, thinking machines. Most refreshingly, the authors do not want to settle the question of machine creativity from the comfort of their armchairs, but instead from the laboratory by actually building artificial authors. Philosophical speculation is for the most part eschewed, and is replaced by the rigors and sometimes frustrations of laboratory experiments.

According to the authors, BRUTUS exhibits "weak" creativity rather than "strong", with the latter being compared to creation ex nihilo, examples of this being non-Euclidean geometry and the Cantor diagonalization method from mathematics. Weak creativity, on the other hand, is a more practical notion, and according to the authors is rooted in the "operational" one developed by psychologists. In the development of BRUTUS, the authors wanted to create an automated story generator that satisfied seven requirements: 1. The machine must be competitive with the requirements of strong creativity. 2. The machine must be able to generate imagery in the mind of the reader. 3. The machine must produce stories in a "landscape of consciousness." 4. The machine must be capable of formalizing the concepts at the core of "belletristic" fiction, with the example of "betrayal" being emphasized the most by the authors. 5. The machine must be able to generate stories that a human would find interesting. 6. The machine must be in command of story structures that will give it "immediate standing" in the human audience. 7. The prose developed by the machine must be rich and compelling, not "mechanical". BRUTUS, they say, meets all of these requirements, but no doubt some critics will think otherwise. The authors do make a sound case for their assertion that it does, and it is the belief of this reviewer that they have, and that BRUTUS is one of the first automated story generators. With optimism toward the future developments of BRUTUS and artificial intelligence in general, they state that "a machine able to write a full, formidable novel, or compose a feature-length film, or create and manage the unfolding story in an online game, would be, we suspect, pure gold."

They are right.

1.0 out of 5 stars Selmer Bringsjord tells tall tales in the guise of logic
Unfortunately, Selmer Bringsjord is very able with the form of logic but not with its substance -- he "proves" false statements and "disproves" true ones. He applies his sophistry vigorously in the service of his anti-computational agenda. But it isn't just a matter of bad faith promotion of an ideology -- true incompetence is involved. Bringsjord is famous for denying a statement that followed from a statement he claimed to be agnostic about and yet not abandoning his agnosticism. When the contradiction was pointed out to him, he wrote a paper in which he "argued" that the claim of a contradiction was fallacious by offering a bogus "inference rule" that supposedly was required, and then showing that the "inference rule" that he himself offered was fallacious. Of course, that one should not hold that not Q and at the same time be agnostic about P, when it is known that P implies Q, is not something that any competent thinker would deny, let alone publish such a paper against, a paper that could be considered the defining example of a straw man argument.

5.0 out of 5 stars cuts across disciplines
Here at Ohio State you are just as likely to find this book in the hands of a philosopher as a computer scientist. It covers the "big" questions (How smart can computers get? Can they ever be truly creative? etc.), covers the logical and mathematical issues in computational story generation, and also, of course, talks about how the Brutus system was engineered. In sum, I guess the book exemplifies cognitive science. One caveat, though: the authors aggressively take a logic-based approach to AI, and pan non-symbolic (e.g., neural net-based) approaches. If you're not a fan of logic, then you'll probably want to read this book because it's the best challenge going to your point of view. If you're a logic lover, this will be your cup of tea.

5.0 out of 5 stars I'll still have my job!
I expected to find a book that predicts creative writers will soon be replaced by machines, but what I found was -- thank goodness -- the exact opposite. The authors argue that literary fiction cannot possibly be reduced to computation -- but that formulaic fiction may be another story. Brutus is a machine master of formula. Let's just hope that I'm right that my own novels (which are mentioned in this book) *are* belletristic!


63. Aaron's Code: Meta-Art, Artificial Intelligence and the Work of Harold Cohen
by Pamela McCorduck
 Hardcover: 225 Pages (1990-10)
list price: US$25.95 -- used & new: US$44.05
(price subject to change: see help)
Asin: 0716721732
Average Customer Review: 4.0 out of 5 stars
Editorial Review

Product Description
"Aaron's Code" tells the story of the first profound connection between art and computer technology. Here is the work of Harold Cohen - the renowned abstract painter who, at the height of a celebrated career in the late 1960's, abandoned the international scene of museums and galleries and sequestered himself with the most powerful computers he could get his hands on. What emerged from his long years of solitary struggle is an elaborate computer program that makes drawings autonomously, without human intervention - an electronic apprentice and alter ego called Aaron. ... Read more

Customer Reviews (1)

4.0 out of 5 stars Excellent source of information about Aaron
This is a very good book. McCorduck is an excellent writer (I also recommend her "Machines Who Think") who tackles the fascinating area of AI art. Information about the art-generating program Aaron is hard to come by and this book is the best source I've encountered so far. The history of Harold Cohen is a fascinating read.

I would have liked to have seen coverage of more AI art makers, but perhaps Aaron is the only one worth considering. For more on AI art in general, see Boden's "The Creative Mind."

Way to go Pamela!


64. Mind, Machine, and Metaphor: An Essay on Artificial Intelligence and Legal Reasoning (New Perspectives on Law, Culture, and Society)
by Alexander E. Silverman
 Hardcover: 145 Pages (1993-04)
list price: US$66.00
Isbn: 081338575X
Editorial Review

Product Description
An essay on artificial intelligence and legal reasoning.


65. Safe and Sound: Artificial Intelligence in Hazardous Applications
by John Fox, Subrata Das
Hardcover: 325 Pages (2000-07-07)
list price: US$48.00 -- used & new: US$33.49
(price subject to change: see help)
Asin: 0262062119
Editorial Review

Product Description
Computer science and artificial intelligence are increasingly used in the hazardous and uncertain realms of medical decision making, where small faults or errors can spell human catastrophe. This book describes, from both practical and theoretical perspectives, an AI technology for supporting sound clinical decision making and safe patient management. The technology combines techniques from conventional software engineering with a systematic method for building intelligent agents. Although the focus is on medicine, many of the ideas can be applied to AI systems in other hazardous settings. The book also covers a number of general AI problems, including knowledge representation and expertise modeling, reasoning and decision making under uncertainty, planning and scheduling, and the design and implementation of intelligent agents.

The book, written in an informal style, begins with the medical background and motivations, technical challenges, and proposed solutions. It then turns to a wide-ranging discussion of intelligent and autonomous agents, with particular reference to safety and hazard management. The final section provides a detailed discussion of the knowledge representation and other aspects of the agent model developed in the book, along with a formal logical semantics for the language.


66. Collective Intelligence in Action
by Satnam Alag
Paperback: 425 Pages (2008-10-17)
list price: US$44.99 -- used & new: US$28.21
(price subject to change: see help)
Asin: 1933988312
Average Customer Review: 4.5 out of 5 stars
Editorial Review

Product Description

There's a great deal of wisdom in a crowd, but how do you listen to a thousand people talking at once? Identifying the wants, needs, and knowledge of internet users can be like listening to a mob.

In the Web 2.0 era, leveraging the collective power of user contributions, interactions, and feedback is the key to market dominance. A new category of powerful programming techniques lets you discover the patterns, inter-relationships, and individual profiles--the collective intelligence--locked in the data people leave behind as they surf websites, post blogs, and interact with other users.

Collective Intelligence in Action is a hands-on guidebook for implementing collective intelligence concepts using Java. It is the first Java-based book to emphasize the underlying algorithms and technical implementation of vital data gathering and mining techniques like analyzing trends, discovering relationships, and making predictions. It provides a pragmatic approach to personalization by combining content-based analysis with collaborative approaches.

This book is for Java developers implementing Collective Intelligence in real, high-use applications. Following a running example in which you harvest and use information from blogs, you learn to develop software that you can embed in your own applications. The code examples are immediately reusable and give the Java developer a working collective intelligence toolkit.

Along the way, you work with a number of APIs and open-source toolkits including text analysis and search using Lucene, web-crawling using Nutch, and applying machine learning algorithms using WEKA and the Java Data Mining (JDM) standard.


Customer Reviews (21)

4.0 out of 5 stars Good introductory book
This book is a great introduction to CI for people who are starting out. The coverage is broad rather than deep - along with the expected theoretical background on Term-Document Vectors, similarity computations using Cosine similarity and Pearson correlation, etc, it also introduces software that CI programmers are likely to find useful, for example, Nutch for web-crawling, Lucene for text tokenization and search/indexing, Weka for data mining, etc, although you would have to spend some time with these tools by yourself if you don't already know them.

For people who have been working on CI for a while, this book provides great insights on how to apply the various concepts to areas such as clustering, classification and building predictive models, and recipes to translate item and user data into meaningful information. Depending on your previous experience, though, you may find certain sections of the book redundant.

If I have a complaint, it would be the rather verbose Java code that accompanies the various recipes. The code is written with best practices in mind, so while it is probably directly copy-pasteable into your own code, it is harder to read and takes a bit more time to understand than similar pseudo-code (or code written with readability as its primary objective).

Overall, a very practical and informative book that I think would be useful to both new and experienced CI programmers alike.

2.0 out of 5 stars A lot of ideas, but neither theoretical enough nor practical enough
This book contains a lot of ideas and as such is a good starting point for further reading. But it's not a one-stop resource for actually implementing the algorithms it mentions, as a lot of them are described only at a very high level and in an incomplete way. For example, in the discussion of model-based recommendation engines in sections 12.3.3-12.3.5, the author gives a very short description of latent semantic indexing (LSI) and some Java code that shows how to use the Weka implementation. But firstly, the description is too short to give the user a real understanding of what is going on theoretically. And secondly, the implementation description doesn't go nearly far enough: it shows that reconstructing the original matrix from the top N dimensions of the singular value decomposition gives a close approximation to the original, but then it just stops there; it doesn't explain how to actually use the decomposition in a recommendation engine. And the section on LSI is verbose compared to the "section" on Bayesian belief networks, which at a single paragraph of text is completely inadequate for either practical or theoretical purposes. And so on throughout the book.
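
To make the step this reviewer finds missing concrete, here is a small hypothetical sketch of using a truncated singular value decomposition of a user-item rating matrix to rank unrated items. It uses NumPy rather than the book's Weka-based Java code, and the ratings are invented.

    # Hypothetical latent-semantic-style recommendation sketch: factor the rating
    # matrix, keep the top-k dimensions, and use the reconstructed scores to rank
    # items the user has not rated.
    import numpy as np

    # Rows = users, columns = items; 0 means "not rated".
    ratings = np.array([
        [5, 4, 0, 1, 0],
        [4, 5, 1, 0, 0],
        [1, 0, 5, 4, 4],
        [0, 1, 4, 5, 3],
    ], dtype=float)

    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    k = 2                                   # keep the top-k latent dimensions
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    user = 0
    unrated = np.where(ratings[user] == 0)[0]
    # Rank the user's unrated items by their reconstructed (predicted) scores.
    recommendations = sorted(unrated, key=lambda i: approx[user, i], reverse=True)
    print("predicted scores:", np.round(approx[user], 2))
    print("recommend items (best first):", recommendations)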

5.0 out of 5 stars It's a must-read
I'm a start-up CEO, who's had 3 of my engineers review this book. Unanimously, they came back raving about how much they picked up from the book and hence how much time they saved. If you manage any technical resources and are interested in this area, buy a copy for each of your developers, it will save you and your team a lot of time and effort.

5.0 out of 5 stars Fascinating book about how Web 2.0 sites work.
To really understand this book one would probably have to be a Java programmer, which I'm not, but I was able to follow the argumentation. I do have some background with data mining using SAS and SQL and the mathematics described are fairly easy to understand for someone with even a 1st year engineering or applied math background. I also have an interest in linguistics which kept me going.

The basic idea is that one can catalog documents by removing irrelevant words (adjectives, abstract pronouns, conjunctives) and "stemming" the remaining words (i.e., reducing "sews", "sewing", "resew", "sewer" to a root "sew"), creating a vector containing each root word and its frequency, and then normalizing it. One simple result is the ability to produce "word clouds". Similarity between documents is measured by taking the dot product of the two vectors. Any document compared to itself would have a dot product of 1. Two documents with no common stem words would have a dot product of zero. Similar docs would have a high value close to 1, say .8. Dissimilar docs would have a low coefficient, say .15. Even mistaking "sewer" (a conduit for waste) and sewer (one who uses a needle and thread) is taken into account because both docs would only be similar on a couple of keywords, and dissimilar on most others.
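
A tiny sketch of the pipeline described above (stop-word removal, crude stemming, normalized term-frequency vectors, dot-product/cosine similarity); the stop-word list and suffix-stripping stemmer are toy stand-ins, not the book's Lucene-based code.

    # Toy document-similarity sketch: stem terms, build normalized term-frequency
    # vectors, and compare documents with a dot product (cosine similarity).
    import math
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "and", "of", "to", "is"}   # toy stop-word list

    def stem(word):
        # Crude suffix-stripping stand-in for a real stemmer (e.g. Porter).
        for suffix in ("ing", "er", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def vectorize(text):
        terms = [stem(w) for w in text.lower().split() if w not in STOP_WORDS]
        counts = Counter(terms)
        norm = math.sqrt(sum(c * c for c in counts.values()))
        return {t: c / norm for t, c in counts.items()}

    def cosine(v1, v2):
        return sum(w * v2.get(t, 0.0) for t, w in v1.items())

    doc1 = vectorize("sewing and resewing with needle and thread")
    doc2 = vectorize("the sewer sews with a needle")
    doc3 = vectorize("the sewer carries waste water")
    print(round(cosine(doc1, doc2), 2), round(cosine(doc1, doc3), 2))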

What's really neat is how this information gets collected and can be applied. Social networking sites, including the one you are reading right now, Amazon.com, collect data on us through our choices. Browse for a book while logged on and that registers as something you are interested in. Approve a review and the words in the review, the summary of the book and the title count towards your interests. Disapprove and that counts against your interests. Write a review and the words you write become part of your cumulative profile as well, reduced to a vector or vectors of keywords and frequencies.

Here's how it gets applied: one of Amazon's marketing tools is its "recommendation engine". (The book talks about the Netflix recommendation engine and business model.) By matching your vector against those of other people who have bought/viewed what you have bought, a prediction can be made as to the likelihood of you being interested in something that they have bought, or not interested in items that they rejected or disliked. The more Amazon caters to what you are interested in, and doesn't bother you with irrelevancies, the happier you may be.
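
A minimal sketch of that matching step, with invented ratings (this is not Amazon's or the book's code): predict a user's interest in an unseen item as the similarity-weighted average of ratings from users with overlapping histories.

    # Toy user-based recommendation sketch: cosine similarity between users'
    # rating vectors, then a similarity-weighted average to predict a rating.
    import math

    ratings = {                                # user -> {item: rating}
        "alice": {"book_a": 5, "book_b": 4, "book_c": 1},
        "bob":   {"book_a": 4, "book_b": 5, "book_d": 4},
        "carol": {"book_c": 5, "book_d": 2, "book_b": 1},
    }

    def similarity(u, v):
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        dot = sum(ratings[u][i] * ratings[v][i] for i in common)
        nu = math.sqrt(sum(r * r for r in ratings[u].values()))
        nv = math.sqrt(sum(r * r for r in ratings[v].values()))
        return dot / (nu * nv)

    def predict(user, item):
        """Similarity-weighted average of other users' ratings for the item."""
        num = den = 0.0
        for other, their in ratings.items():
            if other == user or item not in their:
                continue
            sim = similarity(user, other)
            num += sim * their[item]
            den += abs(sim)
        return num / den if den else None

    print("alice's predicted rating for book_d:", round(predict("alice", "book_d"), 2))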

Other applications discussed include the automatic creation of folksonomies (taxonomies based on popular usage) using cluster analysis and categorization using Bayes theorem.

In addition to recommendation engines, Alag points out the usefulness of these techniques to search and lists several search engines that apply this approach (as does Google), tools that search out and provide news based on your preferences, or suggest "friends" (i.e., Facebook or eHarmony might use these ideas), search for similar material to identify copyright infringement, email filters that keep out spam for rolex watches or viagra (unless you are interested in rolex watches or viagra), construct a virus detection engine based on code phrases, or early detection of epidemics or adverse reactions to medication through similarities in medical reports. Alag himself appears to be working at a biotech firm, NextBio, that matches public medical and genome related data to data held by private companies.

Some of the basic tools discussed are Lucene, a free version of what Google will sell you for a search engine; Nutch, a free web crawler (both of which require coding); and WEKA, a free open source data mining package that looks usable by the rest of us.

Loved the book and the author's organization of the material. Some of the social implications are scary, especially for privacy concerns, but so is the implication of not leveraging the information that one holds within your organization to provide the best possible service. For example the World Bank has the capability (not necessarily using these methods) to match similar projects around the world so that experience gained in one area can be found and applied elsewhere. This is a key fast moving tech that one needs to understand in order to see where we are going as a society. C.I. in Action is merely the opening salvo - the methods and techniques described are the basics but there is much room for refinement and elaboration and this topic could be the start of a whole new field. The book also recommends and has sparked my interest in the site [...] which is probably more accessible to someone without a math or tech background.

Finally, a note to SF fans, especially of Spider Robinson's Callahan's Crosstime Saloon series: this may be the point at which the Web starts to appear to be intelligent. :-)

3.0 out of 5 stars Collective Intelligence in Action
This book is more deserving of the "Collective Intelligence" title than O'Reilly's "Programming Collective Intelligence" as it's not just about algorithms, but discusses blogs, wikis etc, and shows how to do basic implementations of features such as tag clouds or finding related content in that context. Instead of explaining specific algorithms in detail, existing Java libraries are used, e.g. WEKA for data mining and Lucene for search.

There are lots of diagrams, and (somewhat verbose) Java code. The examples in this book are good starting points for further exploration; this book is more about showing what can be done and getting you started in the right direction than providing you with an understanding of the algorithms (as does the O'Reilly book) and libraries that are used.


67. The Philosophy of Artificial Intelligence (Oxford Readings in Philosophy)
Paperback: 464 Pages (1990-07-12)
list price: US$50.00 -- used & new: US$17.98
(price subject to change: see help)
Asin: 0198248547
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description
This interdisciplinary collection of classical and contemporary readings provides a clear and comprehensive guide to the many hotly-debated philosophical issues at the heart of artificial intelligence.

Amazon.com Review
A collection of classic articles from the field of artificial intelligence (AI), The Philosophy of Artificial Intelligence would be a good complement to an introductory textbook on AI fundamentals. The back cover of the book states that the material is intended for the university student or general reader, but don't be fooled. Unless you are a student in a supportive class setting or a general reader who happens to have a degree in engineering, you are likely to find the content difficult. The first chapter, for example, assumes knowledge of calculus. However, if you have the right preparation, you'll be treated to fifteen important papers in AI--including Alan Turing's Computing Machinery and Intelligence article, which proposed the now well-known Turing test for determining whether a machine is intelligent.

Customer Reviews (1)

5.0 out of 5 stars Programmers should start here.
This book is a very good collection of classic papers that you should read before you even think of posting to comp.ai or any of the other AI related groups. If you're interested in AI (or anything for that matter), you need to look to the field's recognized philosophy before you do anything else.


68. Computational Intelligence: Principles, Techniques and Applications
by Amit Konar
Hardcover: 708 Pages (2005-05-31)
list price: US$149.00 -- used & new: US$79.26
(price subject to change: see help)
Asin: 3540208984
Average Customer Review: 5.0 out of 5 stars
Editorial Review

Product Description

Computational Intelligence: Principles, Techniques and Applications presents both theories and applications of computational intelligence in a clear, precise and highly comprehensive style. The textbook addresses the fundamental aspects of fuzzy sets and logic, neural networks, evolutionary computing and belief networks. The application areas include fuzzy databases, fuzzy control, image understanding, expert systems, object recognition, criminal investigation, telecommunication networks, and intelligent robots. The book contains many numerical examples and homework problems with sufficient hints so that the students can solve them on their own. A CD-ROM containing the simulations is supplied with the book, to enable interested readers to develop their own application programs with the supplied C/C++ toolbox.

... Read more

Customer Reviews (1)

5.0 out of 5 stars Computational Intelligence: Principles, Techniques and Applications
Excellent book. I highly recommend it. Although Computational Intelligence could include almost any subject, this book is a comprehensive review of the most agreed-upon paradigms in CI. I haven't checked out the CD that comes with the book, so I cannot comment on it.


69. Readings in Distributed Artificial Intelligence
by Alan H. Bond
 Paperback: 649 Pages (1988-08)
list price: US$58.00
Isbn: 093461363X
Editorial Review

Product Description
Over 50 key research articles are reprinted with a sixty-page historical and conceptual survey of the field and section overviews by the editors. Annotation copyright Book News, Inc., Portland, OR.


70. Artificial Dreams: The Quest for Non-Biological Intelligence
by H. R. Ekbia
Hardcover: 416 Pages (2008-04-28)
list price: US$86.99 -- used & new: US$54.28
(price subject to change: see help)
Asin: 0521878675
Average Customer Review: 3.5 out of 5 stars
Editorial Review

Product Description
This book is a critique of Artificial Intelligence (AI) from the perspective of cognitive science - it seeks to examine what we have learned about human cognition from AI successes and failures. The book's goal is to separate those "AI dreams" that either have been or could be realized from those that are constructed through discourse and are unrealizable. AI research has advanced many areas that are intellectually compelling and holds great promise for advances in science, engineering, and practical systems. After the 1980s, however, the field has often struggled to deliver widely on these promises. This book breaks new ground by analyzing how some of the driving dreams of people practicing AI research become valued contributions, while others devolve into unrealized and unrealizable projects.

Customer Reviews (2)

4.0 out of 5 stars AI is Not What it Seems
Well, at least, that's what Ekbia's position seems to be. He focuses on the fine differences between "true" and "artificial" (if at all) intelligence. This book is not a technical tome and makes for relatively easy reading. However, it would help if the reader was already somewhat familiar with basic AI approaches (in which the book's appendices help). Ekbia discusses computer chess (e.g. Deep Blue), case-based reasoning (e.g. Coach), artificial commonsense (e.g. Cyc), "emotional" robots (e.g. Kismet) and a selection of other examples which provide a good overview of the different perspectives both the public and researchers have about AI accomplishments. Some of Ekbia's arguments are difficult to argue against. For example, it is true that some researchers overstate their case in terms of just how "intelligent" or significant a particular approach is. Computers, after all, work in a mechanical fashion and have no real conception about the things they are working with. In many cases, this is obvious once you look "under the hood"; but in others, it is possibly just a matter of perspective. Take chess, for example. While the brute-force approach seems to work well and is purely mechanical, we cannot overlook the significance of the heuristic evaluation functions which are equally important. These are usually specifically designed by humans. Not to mention that this combination has resulted in programs running on desktop machines that can today outplay even grandmasters. In fairness, Ekbia does not trivialize this type of "success" in AI but suggests, rightfully, that we have perhaps just found a different approach to "thinking" in chess, and chess alone. But this is how it is in AI. The approach used in chess is neither required nor expected (at least these days) to be directly applicable to other areas with equal effectiveness. This, however, does not mean that at least some of the work done in that domain has not been useful in other domains or fields of research.
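
The combination the reviewer describes (mechanical search plus a human-designed heuristic evaluation function) can be illustrated with a generic, depth-limited minimax sketch over an invented toy game; this is not Deep Blue's algorithm, which relied on far deeper alpha-beta search, specialized hardware, and hand-tuned chess evaluation.

    # Generic depth-limited minimax sketch with a heuristic evaluation function,
    # illustrating the "mechanical search + human-designed heuristic" combination.
    # The game, moves, and evaluation below are all invented.
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state), None          # heuristic value of the position
        best_move = None
        best_value = float("-inf") if maximizing else float("inf")
        for move in options:
            value, _ = minimax(apply_move(state, move), depth - 1,
                               not maximizing, moves, apply_move, evaluate)
            if (maximizing and value > best_value) or (not maximizing and value < best_value):
                best_value, best_move = value, move
        return best_value, best_move

    # Tiny invented game: the state is a number, a move adds or subtracts 1 or 2,
    # and the heuristic prefers states close to 10.
    moves = lambda s: [-2, -1, 1, 2]
    apply_move = lambda s, m: s + m
    evaluate = lambda s: -abs(10 - s)

    print(minimax(0, depth=4, maximizing=True, moves=moves,
                  apply_move=apply_move, evaluate=evaluate))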

Most of the book, including some gems in the footnotes at the back, hovers around the point that we are somehow missing something in AI that would put us on the "right path", and that we are, at least, approaching this path slowly, perhaps without even realizing it. With a rich and colorful history behind AI, its future is unlikely to suffer from exactly the same mistakes despite the necessary evil or growing temptation faced by researchers to somewhat mislead industry-related benefactors into thinking they are financing something truly significant. I found myself generally sobering up to Ekbia's insights into AI and learning of happenings in the field that I was previously unaware of. Many books on AI will likely come off as highly technical and complicated (a lot of math is usually involved) but this one takes a "higher level" or philosophical approach which, I now think, should not be neglected even in undergraduate study of the field. One should, however, be careful not to give undue reverence to the idea of simply "being human" just because of the current shortcomings in AI. I am nevertheless certainly glad I made it a point to read the book while waiting for my viva voce.

3.0 out of 5 stars Falls short in its criticism
If one referred to program lists or algorithm steps as a "reasoning pattern" and the program itself as a "cognitive structure", the author of this book would be aghast, and might take this as another example of what he refers to as the "generalized Eliza effect" throughout the book. The latter refers to a program, called by its creator "Eliza", that was designed to emulate the psychotherapeutic skills of a professional psychologist. Developed in the 1960's by Joseph Weizenbaum and tested on a collection of unsuspecting "patients", it apparently was able to fool some of these into believing its advice was genuine, or expert psychological counseling. The "patients'" questions were recast in such a way as to create the illusion that the program had genuine understanding of psychotherapy and could offer them therapeutic assistance. But the Eliza responses were merely canned phrases, as was revealed by the patient "patient" who took the time to question it for several minutes.

The author uses this program as a paradigm for his main case against the reality of machine intelligence, viewing the program as an excellent example of the false imputation of intelligence to a machine. He gives many other examples throughout the book, all of them being quite familiar to those readers who follow the field of artificial intelligence (AI) or who are active participants in research thereof. As a whole the book is interesting, mostly due to the detail that the author brings to the history of AI and the discussions of some of the attempts to bring about machine intelligence.

However the author's case against AI is incredibly weak, being non-constructive in its strategy and actually being one of many critiques of AI that fall victim to what this reviewer has dubbed the "Michie-McCorduck-Boden effect." This effect, kind of an inverse of the Eliza effect, summarizes the peculiarities and crises of confidence that have dogged research in AI since its inception in the early 1950's. The following quotation from the writer Brian R. Gaines encapsulates it beautifully:

"From the earliest days of AI pioneers such as Donald Michie have noted that an intrinsic feature of the field is that problems are posed such that all those involved accept that any solution must involve `artificial intelligence' but, when the solution is developed and the basis for it is clear, the resultant technology is assimilated into standard information processing and no longer regarded as `intelligent' in any deep sense. When the magician shows you how the trick was done the `magic' vanishes. Much of what has been developed through AI research has diffused in this way into routine information technology: the Michie effect."

Other authors, AI historians, and researchers have made similar commentary as to the nature and progress in the field of AI. In particular the AI historian Pamela McCorduck and the cognitive scientist Margaret Boden have discussed this phenomenon at length. It could thus be referred to as the Michie-McCorduck-Boden effect, and it has huge consequences for general acceptance of machine intelligence, especially for specific views of whether or not a machine is exhibiting intelligence.

The Michie-McCorduck-Boden effect can be considered an inverse "Eliza" effect (as the author describes the latter) in that those who fall under its spell are quick to impute non-intelligence to machines as soon as they uncover "the method behind the magic." The author does this several times in this book: in his criticism of connectionism, the Cyc project, and Deep Blue. Once he discovers the processes or algorithms that each of these "programs" uses, he points out their shortcomings in semantics and a notion of "meaning" that he never really explains to the reader. But he still wants to describe human cognition as "intelligent", which he refers to as a "complex, multifaceted, and multilevel phenomenon."

But is human cognition the way he describes it? And once it is "unraveled", as he puts it, will it in turn be relegated to a trivial collection of processes in the same way as chess programs and natural language processors (e.g. Cyc) have been? If historical trends are to be followed with respect to the science of human cognition as they were in research in AI, there is every reason to believe that once the "method behind the magic" of human cognition is discovered it will be trivialized in just the way that machine processes are. Will the notion of "intelligence" then fade from scientific discourse, both in machines and humans? Maybe.

The book is thus full of examples of projects that fall short if judged against true intelligence or "meaningful" knowledge as the author discusses it (and he does so with the admission that the dividing line between information and knowledge is too "fuzzy"). But what is so deeply troubling is not the vagueness with which the author addresses these issues, but rather the insistence that AI projects such as Cyc and case-based reasoning must be complete or all-encompassing before we can regard them as intelligent. If Cyc gets bogged down in a question-answer session for a particular domain it must be rejected, the author seems to argue. He forgets that even human experts in particular domains, like physics for example, typically make mistakes in their scientific narratives, and we certainly don't want to reject their expertise outright because of a few blunders on their part. A reasonable outlook on the projects discussed in the book would consist of estimating the risk that the user takes on when using machines that deploy Cyc, case-based reasoning, or connectionism. Such machines will not be "fool-proof and incapable of error" and their solutions to problems or answers to questions may seem foolish or incomplete at times. But such is the nature of intelligence, and insisting otherwise puts unreasonable expectations on machines (or humans for that matter).

This reviewer therefore disagrees strongly with the author's conclusions, but does agree that one must emphasize both the practical applications of AI as well as the theories and formal constructions behind these applications. One must also step beyond the media and advertising hype, the overindulgences of Hollywood movies and rapid-fire press releases, and give an honest and objective assessment of the status of AI as it exists at the present time. Therefore "futurism", unjustified optimism, and wishful thinking should be carefully guarded against. On the other hand great care should be taken to distinguish skepticism from cynicism, and there should be no hesitation in expressing emotion when contemplating genuine discoveries in machine intelligence. We must not be guarded in our enthusiasm in this regard.

The field of artificial intelligence is a healthy one, and delivers practical technology, with its influence rapidly increasing in the twenty-first century. For better or worse, and in spite of the tremendous social changes that AI might cause, from a rigorous and careful study of the evidence, we can turn Hollywood on its head and use one of its movie titles with pleasure: we can say with confidence that we are entering a world of the silicon geniuses; a world of the avatars. We are witnessing the rise of the machines.


71. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (Texts in Theoretical Computer Science. An EATCS Series)
by Marcus Hutter
Paperback: 280 Pages (2010-11-02)
list price: US$109.00 -- used & new: US$72.71
(price subject to change: see help)
Asin: 3642060528
Average Customer Review: 4.0 out of 5 stars
Editorial Review

Product Description

This book presents sequential decision theory from a novel algorithmic information theory perspective. While the former is suited for active agents in known environments, the latter is suited for passive prediction in unknown environments. The book introduces these two different ideas and removes the limitations by unifying them into one parameter-free theory of an optimal reinforcement learning agent embedded in an unknown environment. Most AI problems can easily be formulated within this theory, reducing the conceptual problems to pure computational ones. Considered problem classes include sequence prediction, strategic games, function minimization, reinforcement and supervised learning. The discussion includes formal definitions of intelligence order relations, the horizon problem and relations to other approaches. One intention of this book is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.

... Read more

Customer Reviews (6)

4.0 out of 5 stars A gem under a pile of unnecessary mathematical obfuscation
This is probably the most rigorous attempt to formalize AI. The book
succeeds in presenting the state-of-the-art AI theory from a technical
point of view, but neglects intuition, and is difficult to read for the novice and thus inaccessible to a wider audience. I will try to explain the ideas in a less technical manner in this review.

The main idea of the book is to combine classical control theory
concepts with Bayesian inference and algorithmic information theory.
The author avoids struggling with anthropocentric aspects of
intelligence (which are subject to a fierce debate that is sometimes
of a metaphysical nature) by defining intelligent agents as
utility-maximizing systems. The core ideas are, in a nutshell:

1) Goal: Build a system with an I/O stream interfaced with an
environment, where inputs are observations and outputs are actions,
that optimizes some cumulative reward function over the observations.
Two ingredients are necessary: model the a priori unknown environment
and solve for the reward-maximizing actions.

2) Model: This is a probability distribution over future observations
conditioned on the past (actions and observations). Instead of using
any particular domain-specific model, he uses a weighted average of
"all" models. By "all models", the set of all mechanically calculable
models is meant, i.e. the set of all algorithmically approximable
probability models.

3) Policy: Given the model, all possible futures can be simulated (up
to a predefined horizon) by trying out all possible interaction paths.
Essentially, a huge decision tree is constructed. Having this
information, it is easy to solve for the best policy. Just pick at
each step the action that promises the highest expected future reward.
This is done using Bellman's optimality equations.

Why does this theoretically work? If the environment is equal to one
of the sub-models in the mixture, then the combined model's posterior
estimation converges to the environment. The model is updated step by
step using Bayes' rule. Since the model becomes more accurate, the
policy based on it converges to the optimum. This is certainly very
impressive. Algorithmic information theory is the main tool to derive
the theory. Most convergence results depend on the complexities of the
models.
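
As a rough illustration of points 2) and 3) above, here is a toy sketch in Python: a Bayesian mixture over a tiny, hand-picked class of bandit-like environments, updated with Bayes' rule and planned over with a small expectimax tree. It is emphatically not AIXI itself (whose model class is all computable environments and which is incomputable); every environment, weight and reward below is invented.

    ACTIONS = [0, 1]
    OBS = [0, 1]

    class Env:
        """A candidate environment: gives P(observation | action); reward = observation."""
        def __init__(self, bias):
            self.bias = bias  # probability that action 1 yields observation/reward 1
        def prob(self, action, obs):
            p1 = self.bias if action == 1 else 1 - self.bias
            return p1 if obs == 1 else 1 - p1

    MODELS = [Env(0.2), Env(0.5), Env(0.8)]   # tiny hand-picked "model class"
    PRIOR = [1/3, 1/3, 1/3]                   # prior weights (2^-K(model) in AIXI)

    def mixture_prob(weights, action, obs):
        # Probability assigned by the Bayesian mixture (point 2 above).
        return sum(w * m.prob(action, obs) for w, m in zip(weights, MODELS))

    def posterior(weights, action, obs):
        # Bayes' rule: re-weight each sub-model by how well it predicted (action, obs).
        post = [w * m.prob(action, obs) for w, m in zip(weights, MODELS)]
        z = sum(post)
        return [p / z for p in post]

    def expectimax(weights, horizon):
        # Point 3 above: enumerate action/observation paths up to the horizon and
        # pick, at each step, the action with the highest expected future reward.
        if horizon == 0:
            return 0.0, None
        best_value, best_action = float("-inf"), None
        for a in ACTIONS:
            value = 0.0
            for o in OBS:
                p = mixture_prob(weights, a, o)
                future, _ = expectimax(posterior(weights, a, o), horizon - 1)
                value += p * (o + future)          # reward equals observation here
            if value > best_value:
                best_value, best_action = value, a
        return best_value, best_action

    value, first_action = expectimax(PRIOR, horizon=3)
    print("expected cumulative reward:", round(value, 3), "first action:", first_action)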

Does it work in practice? Unfortunately, the presented solution cannot
be implemented in practice since it is incomputable. Even worse, there
is at the moment no principled way to downscale his approach (and make
it practical), since we don't know how to simplify (a) the mixture
model and (b) the computation of the policy. The author makes these
points very clear in his book. I believe that these are the main
challenges for future AI research.

The PROs: This is the first time I see a unified, formal and
mathematically sound presentation of artificial intelligence. The
proposed theoretical solution provides invaluable insight about the
nature of learning and acting -- hidden even in very subtle details in
his approach and in his equations. Whereas you might feel that
classical AI or commonplace Machine Learning theory looks like a
patchwork of interesting concepts and methods, here (almost)
everything fits nicely together into a coherent and elegant solution.
Once you have studied and understood this book (which is taking years
in my case), it is very difficult to go back to the traditional
approaches of AI.

The CONTRAs: I have, however, some critiques of this book. Hutter
is a brilliant mathematician and sharp thinker. Unfortunately his
writing style is very formal and he sometimes neglects intuition. The book introduces difficult notation (although some of it pays off in the
long run) that ends up obfuscating simple ideas. The overly
mathematical style has certainly not helped the spread of
the proposal.

To summarize, this book represents a giant leap in the theory of AI.
If you have advanced mathematical training and enough patience to
study it, then this book is for you. For the more practically-oriented
researcher in AI, I recommend waiting until a more user-friendly version of this book is published.

4.0 out of 5 stars Axiomatic Artificial Intelligence Theories
Hutter's book is the most recent attempt to put artificial intelligence on a firm mathematical footing. (For an earlier effort see, for instance,
Theory of Problem Solving: An Approach to Artificial Intelligence, Ranan Banerji, Elsevier, 1969.) If successful, such a foundation would permit us to elaborate and explore intelligence by applying the formal methods of mathematics (e.g., theorem proving).

Hutter starts from Werbos' definition of intelligence: "a system to handle all of the calculations from crude inputs through to overt actions in an adaptive way so as to maximize some measure of performance over time" which seems reasonable. (P. J. Werbos, IEEE Trans. Systems, Man, and Cybernetics, 1987, pg 7)

Finding all of the proper axioms for such a mathematical theory of intelligence is still an open and difficult problem, however. Hutter places great stock in Occam's razor. But there is experimental evidence that Occam's razor is incorrect. (The Myth of Simplicity, M. Bunge, Prentice-Hall, 1963) See also, Machine Learning, Tom Mitchell, McGraw-Hill, 1997, pg 65-66. Rather than saying that nature IS simple I believe that it is more correct to say that we are forced to approximate nature with simple models because "our" (both human and AI) memory and processing power is limited.

I am also unsure that we should assume a scalar utility. In Theory of Games and Economic Behavior (Princeton U. Press, 1944, pg 19-20) von Neumann and Morgenstern said: "We have conceded that one may doubt whether a person can always decide which of two alternatives...he prefers...It leads to what may be described as a many-dimensional vector concept of utility." Vector utility (value pluralism) has been employed in AI in my Asa H system (Trans. Kansas Academy of Science, vol. 109, # 3/4, pg 159, 2006)

I suppose, then, that I object to the word "Universal" in Hutter's title.
I think that he is exploring only one kind of intelligence and that there are others.

4.0 out of 5 stars Very ambitious project.
This book differs from most books on the theoretical formulations of artificial intelligence in that it attempts to give a more rigorous accounting of machine learning and to rank machines according to their intelligence. To accomplish this ranking, the author introduces a concept called `universal artificial intelligence,' which is constructed in the context of algorithmic information theory. In fact, the book could be considered to be a formulation of artificial intelligence from the standpoint of algorithmic information theory, and is strongly dependent on such notions as Kolmogorov complexity, the Solomonoff universal prior, Martin-Lof random sequences and Occam's razor. These are all straightforward mathematical concepts with which to work, the only issue for researchers being their efficacy in giving a useful notion of machine intelligence.

The author begins the book with a "short tour" of what will be discussed in the book, and this serves as helpful motivation for the reader. The reader is expected to have a background in algorithmic information theory, but the author does give a brief review of it in chapter two. In addition, a background in sequential decision theory and control theory would allow a deeper appreciation of the author's approach. In chapter four, he even gives a dictionary that maps concepts in artificial intelligence to those in control theory. For example, an `agent' in AI is a `controller' in control theory, a `belief state' in AI is an `information state' in control theory, and `temporal difference learning' in AI is `dynamic programming' or `value/policy iteration' in control theory. Most interestingly, this mapping illustrates the idea that notions of learning, exploration, and adaptation that one views as "intelligent" can be given interpretations that one does not normally view as intelligent. The re-interpretation of `intelligent' concepts as `unintelligent' ones is typical in the history of AI and is no doubt responsible for the belief that machine intelligence has not yet been achieved.

The author's formulations are very dependent on the notion of Occam's razor with its emphasis on simple explanations. The measurement of complexity that is used in algorithmic information theory is that of Kolmogorov complexity, which one can use to measure the a priori plausibility of a particular string of symbols. The author though wants to use the `Solomonoff universal prior', which is defined as the probability that the output of a universal Turing machine starts with the string when presented with fair coin tosses on the input tape. As the author points out, this quantity is however not a probability measure, but only a `semimeasure', since it is not normalized to 1, but he shows how to bound it by expressions involving the Kolmogorov complexity.
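
In symbols (a standard textbook formulation, not a quotation from the book), writing U for a universal monotone Turing machine and \ell(p) for the length of a program p, the prior described above is

    M(x) = \sum_{p : U(p) = x*} 2^{-\ell(p)},

where the sum runs over the (minimal) programs p whose output begins with the string x. Because some programs produce no further output, M satisfies M(x0) + M(x1) <= M(x) rather than equality, which is why it is only a semimeasure; and M(x) is at least 2^{-K(x)} up to a multiplicative constant, which is the kind of Kolmogorov-complexity bound the reviewer mentions.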

The author also makes use of the agent model, but where now the agent is assumed to be acting in a probabilistic environment, with which it is undergoing a series of cycles. In the k-th cycle, the agent performs an action, which then results in a perception, and the (k+1)-th cycle then begins. The goal of the agent is to maximize future rewards, which are provided by the environment. The author then studies the case where the probability distribution of the environment is known, in order to motivate the notion of a `universal algorithmic agent (AIXI).' This type of agent does not attempt to learn the true probability distribution of the environment, but instead replaces it by a generalized universal prior that converges to it. This prior is a generalization of the Solomonoff universal prior and involves taking a weighted sum over all environments (programs) that give a certain output given the history of a particular sequence presented to it. The AIXI system is uniquely defined by the universal prior and the relation specifying its outputs. The author is careful to point out that the output relation is dependent on the lifespan or initial horizon of the agent. Other than this dependence the AIXI machine is a system that does not have any adjustable parameters.

The author's approach is very ambitious, for he attempts to define when an agent or machine could be considered to be `universally optimal.' Such a machine would be able to find the solution to any problem (with the assumption that it is indeed solvable) and be able to learn any task (with the assumption that it is learnable). The process or program by which the machine does this is `optimal' in the sense that no other program can solve or learn significantly faster than it can. The machine is `universal' in that it is independent of the true environment, and thus can function in any domain. This means that a universal optimal machine could perform financial time series prediction as well as discover and prove new results in mathematics, and do so better than any other machine. The notion of a universally optimal machine is useful in the author's view since it allows the construction of an `intelligence order relation' on the "policies" of a machine. A policy is thought of as a program that takes information and delivers it to the environment. A policy p is `more intelligent' than a policy p' if p delivers a higher expected reward than p'.

The author is aware that his constructions need justification from current practices in AI if they are to be useful. He therefore gives several examples dealing with game playing, sequence prediction, function minimization, and reinforcement and supervised learning as evidence of the power of his approach. These examples are all interesting in the abstract, but if his approach is to be fruitful in practice it is imperative that he give explicit recommendations on how to construct a policy that would allow a machine to be as universal and optimal (realistically) as he defines it (formally) in the book. Even more problematic though would be the awesome task of checking (proving) whether a policy is indeed universally optimal. This might be even more difficult than the actual construction of the policy itself.

5-0 out of 5 stars The State of the Art as it Exists Today
Artificial Intelligence has proven to be one of those elusive holy grails of computing. Playing chess (very, very well) has proven possible, while driving a car or surviving in the wilderness is a long, long way from possible. Even the definition of these problems has proven impossible.

This book first makes the assumption that unlimited computational resources are available, and then proceeds to develop a universal theory of decision making to derive a rational reinforcement learning agent.

Even this approach is incomputable and impossible to implement. Chapter 7 presents a modified approach that will reduce the computational requirements, although they remain huge.

Chapter 8 summarizes the assumptions, problems, limitations, and performance of this approach, and concludes with some less technical remarks on various philosophical issues.

This is a highly theoretical book that describes the state of the art in AI approaches. Each chapter concludes with a series of problems which vary from "Very Easy, solvable from the top of your head" to "If you can solve this you should publish it in the professional literature."

This is the state of the art as it exists today.

4-0 out of 5 stars Theoretical universal AI
Solomonoff's famous inference model solves the inductive learning problem in a universal and provably very powerful way. Many methods from statistics (maximum likelihood, maximum entropy, minimum description length...) can be shown to be special cases of the model described by Solomonoff. However, Solomonoff Induction has two significant shortcomings: firstly, it is not computable, and secondly, it only deals with passive environments. Although many problems can be formulated in terms of sequence prediction (for example categorisation), in AI in general an agent must be able to deal with an active environment where the agent's decisions affect the future state of the environment.

In essence, the AIXI model, the main topic of this book, is an extension of Solomonoff Induction to this much more general space of active environments. Although the model itself is very simple (it is really just Solomonoff's model with an expectimax tree added to examine the potential consequences of the agent's actions), the resulting analysis is now more difficult than in the passive case. While optimality can be shown in certain senses, the powerful convergence bounds that Solomonoff induction enjoys now appear to be difficult to establish.
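As a rough illustration of the expectimax idea (a toy sketch over a hypothetical, explicitly enumerated environment model, not AIXI's incomputable mixture over all programs), the recursion alternates a max over the agent's actions with an expectation over the environment's responses:

# Toy expectimax: max over actions, expectation over percepts, to a fixed horizon.
# `model(history, action)` is a hypothetical stand-in for an environment model;
# it returns a list of (probability, percept, reward) triples.
def expectimax(model, history, actions, horizon):
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for a in actions:
        value = 0.0
        for prob, percept, reward in model(history, a):
            value += prob * (reward +
                             expectimax(model, history + [(a, percept)], actions, horizon - 1))
        best = max(best, value)
    return best

# Example use with a trivial coin-guessing environment (purely illustrative):
coin = lambda h, a: [(0.5, "heads", 1.0 if a == "guess_heads" else 0.0),
                     (0.5, "tails", 1.0 if a == "guess_tails" else 0.0)]
print(expectimax(coin, [], ["guess_heads", "guess_tails"], 2))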

Like Solomonoff induction, AIXI also suffers from computability problems. In one of the final sections a modified version of AIXI is presented which is shown to be computable and optimal in some sense. Practically this algorithm would be much too slow, but it is a clear step away from purely abstract models toward one which can in theory be implemented.

For anybody interested in universal theories of artificial intelligence this book is a must. The presentation is quite technical in places and thus the reader should have some understanding of theoretical computer science, statistics and Kolmogorov complexity. ... Read more


72. Artificial General Intelligence (Cognitive Technologies)
Paperback: 509 Pages (2009-12-15)
list price: US$109.00 -- used & new: US$72.71
(price subject to change: see help)
Asin: 3642062679
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

This is the first book on current research on artificial general intelligence (AGI), work explicitly focused on engineering general intelligence – autonomous, self-reflective, self-improving, commonsensical intelligence. Each author explains a specific aspect of AGI in detail in each chapter, while also investigating the common themes in the work of diverse groups, and posing the big, open questions in this vital area.

This book will be of interest to researchers and students who require a coherent treatment of AGI and the relationships between AI and related fields such as physics, philosophy, neuroscience, linguistics, psychology, biology, sociology, anthropology and engineering.

... Read more

Customer Reviews (2)

3-0 out of 5 stars A review of Artificial General Intelligence
If you are interested in human-level artificial intelligence you probably should own this book. I liked reading the book and am glad I own it, but there are criticisms. Most of the book is too qualitative. Even where prototype software has been deployed, algorithms are not given, even in pseudocode. Too much of the book is speculation. I also think that too little attention has been paid to the control of complexity.

5-0 out of 5 stars Introduction to the most ambitious projects ever undertaken in the history of technology
In the past five decades, the field of artificial intelligence has made significant progress, some of which can be characterized as a radical departure from the past, while some is steady progress built on preconceived ideas. In general, progress in any field of endeavor is recognized by the participants and by the observers thereof, but this has not been the case in artificial intelligence. With few exceptions, any time an advance is made in this field it is at first greeted with a great deal of enthusiasm, and the algorithms it deploys are viewed as "intelligent." After some time, however (and this time is relatively short), the algorithms are "understood" and are then designated as mere "programs" that certainly cannot be considered intelligent. The "advance" finds its place in history as "trivial," and certainly not to be given any further consideration as "intelligent." Consequently, intelligent machines are always considered to be just beyond the horizon, a goal to be achieved when better technology and algorithms are available.

But again, progress has been made in artificial intelligence: there are intelligent machines and they are used quite extensively in business and industry. But these machines are limited if one judges them from the standpoint of what is possible using human intelligence. The algorithms, or reasoning patterns, that they deploy are limited to working in a specific domain, such as finance, radiology, or network engineering. Human intelligence, on the contrary, can function in many different domains: a good chess player can also be a good musician or a good architect. Of course one can easily place in a single machine a collection of algorithms, each of which has expertise in a particular domain, but they cannot cross over from one domain to another without considerable alteration by the designer or specialist. And any change in one domain-specific algorithm or reasoning pattern will not affect the efficacy of another algorithm or reasoning pattern with expertise in a different domain. To make an analogy with what is often discussed in the field of cognitive science, the machines of today thus have "modularized" intelligence: the modules or "programs" or "software" are designed to "think" in a certain domain or perform tasks restricted to certain domains.

There are a few in the artificial intelligence community who believe that genuine machine intelligence must at least be domain-independent, along with exhibiting curiosity and an ability to adapt to radically new situations. Such intelligence, in analogy with the human case, must be general enough to deal with situations, challenges, and contexts that are not tied to one domain. This has been called 'artificial general intelligence' (AGI) and is the subject of this book. It is a collection of articles by some of the individuals who have been actively involved in AGI and are working hard to bring it to fruition. The challenges in doing this are enormous, due in part to the paucity of funding for such endeavors, but due mostly to the conceptual difficulties involved in constructing reasoning patterns that can operate in many different domains without the assistance of the human engineer/designer. Suffice it to say that the goals discussed in this book represent the most ambitious projects ever attempted in the history of technology.

To assess or monitor the progress in AGI requires that one have at least a working definition of intelligence, and in the article by Pei Wang entitled "The Logic of Intelligence" this requirement is articulated clearly, albeit in a more general context. Wang asks whether there is an "essence of intelligence" that distinguishes intelligent entities from non-intelligent ones. His question is an interesting one since answering it will be necessary if one, again, is to gauge the progress in AGI. If the boundary between non-intelligent and intelligent systems is ill-defined then making claims regarding the status of AGI would be unfounded. But the definition of intelligence must also be one that is fruitful in a practical sense, since if AGI is to be successful it must have wide application in business, industry, and education. Wang settles on a "working" definition of intelligence, which he regards as a definition that is realistic enough to allow researchers to work directly with it. Such a definition will be robust in the sense that it is simple, has a close proximity to the concept to be defined, and allows a certain degree of progress to be made. His working definition of intelligence can be categorized as an adaptive one, in that it asserts that an intelligent machine is one that can adapt to its environment while having only insufficient knowledge and resources. The machine is therefore able to take the initiative to change its knowledge base or reasoning patterns as it confronts novel situations in the environment. He is careful to note what an unintelligent machine would be like, namely one that has been designed with the explicit assumption that the problems it attempts to solve are exclusively those that it has the knowledge and resources for, i.e. such a machine would be "programmed" to tackle certain problems of interest to the user, and would be given only those snippets of knowledge or expertise deemed relevant by this user. If the user were to give an intelligent machine this same collection of problems, it may not be able to find the solution more efficiently than the unintelligent one (or even find the "correct" solution), as the time scales needed for adaptation may be too long relative to the time needed for the unintelligent machine to solve the problems. The author recognizes this possible degradation in performance when using an intelligent machine, and such an issue will be very important when decisions are being made to deploy intelligent machines in time-critical situations or in situations where human or animal health is at stake.

Wang calls his version of AGI the 'Non-Axiomatic Reasoning System' (NARS), which deploys 'experience-grounded semantics', the latter of which is to be distinguished from the 'model-theoretic' semantics that is used in ordinary computing machines and is the foundation of much of theoretical computer science. In NARS, truth is dependent on the amount of evidence that is available, as, essentially, is meaning. Wang also discusses in detail the need for `categorical logic' for knowledge representation, again since the machine is expected to operate with insufficient knowledge and resources, where `evidence' plays the key role in deciding the truth of statements (and not mere assignments of `T' or `F'). The NARS system will arrive at a solution that is `reasonable,' i.e. an optimal solution based on the knowledge it has at the time. Mistakes of course can be made, and in fact should be made, since otherwise the machine cannot learn from experience (trial-and-error learning falls within the author's boundaries of what he considers intelligent). Therefore, an intelligent machine of the NARS type will not be "fool proof and incapable of error," to quote a line from a popular Hollywood movie. It will however constantly update its knowledge base, a feature that the author calls `self-revisable'. He does not really say whether such a machine could exhibit curiosity, i.e. do the problems it attempts to solve have to be instigated by the user, or does it take the initiative to explore new knowledge bases or domains? If so, then such a machine might cause problems in deployment, since it can wander in conceptual space and not focus on the problems it was put in place to solve. However, he does allow for autonomous behavior and creativity in the machine, even to such a degree that it completely loses track of the input tasks, i.e. the input tasks become `alienated', to use his words. In this regard, a NARS machine is somewhat like a human philosopher, for it can explore large conceptual spaces on its own and possibly get lost in them. Or, more positively, it can find new knowledge that it did not possess before and construct concepts novel to itself (i.e. express `local creativity').
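As a toy illustration of evidence-grounded truth (this reviewer's own sketch, not necessarily Wang's exact formulas), a statement's truth value can be summarized by a frequency and a confidence computed from the positive and total evidence observed so far, with confidence growing toward 1 as evidence accumulates:

# Toy evidence-based truth value: frequency and confidence from evidence counts.
# k is a constant controlling how fast confidence grows (assumed, purely illustrative).
def truth_value(positive, total, k=1.0):
    frequency = positive / total if total else 0.5   # proportion of supporting evidence
    confidence = total / (total + k)                 # approaches 1 as evidence accumulates
    return frequency, confidence

print(truth_value(3, 4))    # some evidence: moderately confident
print(truth_value(30, 40))  # ten times the evidence: same frequency, higher confidence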

There are many other interesting discussions throughout the book, with each author outlining his/her notion of what it means for a machine to be intelligent and various strategies for constructing intelligent machines. One of these, called the Novamente project, has been widely discussed online and is probably the oldest attempt to bring about AGI of those discussed in the book (at least from the standpoint of its origins). Particularly interesting in the Novamente project is its connection with dynamical systems, specifically in the role of attractors. Even though they do not mention it, the property of `shadowing' in the theory of dynamical systems may be a fruitful one for them to consider, especially in their use of `terminal attractors'. The shadowing property, if possessed by the `mind' of Novamente, would guarantee that an arbitrary dynamic pattern may not be a true `concept map' (as the authors define concept map), but it would be an approximation to some concept map. The shadowing property would guarantee that the reasoning patterns would be domain-independent, since any concept map acting on a particular domain could be represented or approximated by some reasoning pattern. This reviewer does not know if the shadowing property has been applied to artificial intelligence, or even to neural networks, but if the dynamical systems paradigm holds in the latter, it does seem like an idea that may hold some promise, however small, for the development of domain-independent artificial intelligence. ... Read more


73. Artificial Intelligence: A Beginner's Guide (Beginner's Guides)
by Blay Whitby
Paperback: 160 Pages (2008-09-25)
list price: US$14.95 -- used & new: US$6.97
(price subject to change: see help)
Asin: 185168607X
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Tomorrow begins right here as we embark on an enthralling and jargon-free journey into the world of computers and the inner recesses of the human mind. Readers encounter everything from the nanotechnology used to make insect-like robots, to the computers that perform surgery and, reminiscent of films like Terminator, computers that can learn by teaching themselves. ... Read more

Customer Reviews (1)

4-0 out of 5 stars A nice and clever approach for beginners.
I've just read the Brazilian edition. I'm a lawyer specializing in computer law and a researcher on robot civil law. The book offers a clever analysis of several topics regarding A.I. The author gives an impartial presentation of trends and research in progress, and sometimes shares his own opinion. Every single topic is followed by suggestions for complementary reading. The Brazilian translation has some minor flaws, but is OK. In the final chapter the author seems not to believe that a truly intelligent machine is coming soon. It's very clear Whitby is an expert on A.I. I recommend it as a first approach to A.I. It's a great book in a small package. ... Read more


74. Artificial Intelligence and Simulation.
Paperback: 711 Pages (2005-03-24)
list price: US$119.00 -- used & new: US$104.84
(price subject to change: see help)
Asin: 354024476X
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

This book constitutes the refereed post-proceedings of the 13th International Conference on AI, Simulation, and Planning in High Autonomy Systems, AIS 2004, held in Jeju Island, Korea in October 2004.

The 74 revised full papers presented together with 2 invited keynote papers were carefully reviewed and selected from 170 submissions; after the conference, the papers went through another round of revision. The papers are organized in topical sections on modeling and simulation methodologies, intelligent control, computer and network security, HLA and simulator interoperation, manufacturing, agent-based modeling, DEVS modeling and simulation, parallel and distributed modeling and simulation, mobile computer networks, Web-based simulation and natural systems, modeling and simulation environments, AI and simulation, component-based modeling, watermarking and semantics, graphics, visualization and animation, and business modeling.

... Read more

75. Law, Computer Science, and Artificial Intelligence
by Ajit Narayanan, Mervyn Bennun
 Paperback: 288 Pages (1998)
list price: US$24.95 -- used & new: US$12.95
(price subject to change: see help)
Asin: 1871516595
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
This text examines the interaction between the disciplines of law, computer science and artificial intelligence. The authors' hypothesis is that in years to come law will have a severe impact on computer science (via data protection and copyright); that computers will have an effect on law (via legal databases and electronic presentation of evidence); that law will impact on AI (via liability of intelligent software writers and codes of conduct); and that AI will have an impact on law (via models of legal reasoning and implementations of various statutes). The chapters are grouped into theory, implications and applications sections, in an attempt to identify separate, but interrelated methodological stances. ... Read more

Customer Reviews (1)

4-0 out of 5 stars a good collection of papers about AI and law
This book covers many perspectives on artificial intelligence and law. It's easy to read and follow even if you stay within your own field. ... Read more


76. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence
by John H. Holland
Paperback: 228 Pages (1992-04-29)
list price: US$29.00 -- used & new: US$25.65
(price subject to change: see help)
Asin: 0262581116
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Genetic algorithms are playing an increasingly important role in studies of complex adaptive systems, ranging from adaptive agents in economic theory to the use of machine learning techniques in the design of complex devices such as aircraft turbines and integrated circuits. Adaptation in Natural and Artificial Systems is the book that initiated this field of study, presenting the theoretical foundations and exploring applications. In its most familiar form, adaptation is a biological process, whereby organisms evolve by rearranging genetic material to survive in environments confronting them. In this now classic work, Holland presents a mathematical model that allows for the nonlinearity of such complex interactions. He demonstrates the model's universality by applying it to economics, physiological psychology, game theory, and artificial intelligence and then outlines the way in which this approach modifies the traditional views of mathematical genetics. Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occurring processes, concentrating on systems having multiple factors that interact in nonlinear ways. Along the way he accounts for major effects of coadaptation and coevolution: the emergence of building blocks, or schemata, that are recombined and passed on to succeeding generations to provide innovations and improvements. John H. Holland is Professor of Psychology and Professor of Electrical Engineering and Computer Science at the University of Michigan. He is also Maxwell Professor at the Santa Fe Institute and is Director of the University of Michigan/Santa Fe Institute Advanced Research Program.

Amazon.com Review
John Holland's Adaptation in Natural and Artificial Systems is one of the classics in the field of complex adaptive systems. Holland is known as the father of genetic algorithms and classifier systems, and in this tome he describes the theory behind these algorithms. Drawing on ideas from the fields of biology and economics, he shows how computer programs can evolve. The book contains mathematical proofs that are accessible only to those with strong backgrounds in engineering or science. ... Read more

Customer Reviews (4)

4-0 out of 5 stars Heavily mathematical
Good, however, the Amazon.com listing did not say that this text was geared for Ph.D.'s in Mathematics.

5-0 out of 5 stars The founder's words
This is a wonderful time. We can read about information theory in Shannon's own words, fuzzy logic in Zadeh's, relativity in Einstein's, and genetic algorithms in Holland's. He created evolutionary algorithms, and shares his thoughts in this brief work.

1975, when he first published this work, was a long time ago. Since then, computing has advanced, computing demands have advanced, and biology has advanced. Biology, because it functions at all the levels from atoms to worlds, has bottomless potential for insight. Because the atoms, the worlds, and everything between are all unfriendly, biology has many problems to solve. It doesn't matter whether you are an oak tree, a virus, or a whale, the solution (at the species level) is the same: evolve. Holland was the first to harness that incredible problem-solving power to computational use.

A huge literature has built up from Holland's founding thoughts. Those thoughts are here, in their original and purest form. It is hardly surprising that Holland anticipated so many elaborations of his work. One, in particular, struck me: the idea of 'hot spots' for genetic crossover. Or rather the opposite: 'cold spots' where crossover is inhibited. As a computer scientist, Holland's first thoughts were written in binary. When you allow points where crossover can not occur, you allow coherent multibit values - maybe even floating point. It's easy to laugh at Holland's initial naivete now, but he was talking about the foundations, not the structure built up from it.
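To illustrate the idea in miniature (a hypothetical sketch, not Holland's own formulation), one can bias one-point crossover away from "cold spots" so that certain multi-bit fields are rarely or never split:

import random

# One-point crossover on bit strings, with per-position crossover weights.
# A weight of 0 makes a position a "cold spot" where crossover never occurs,
# keeping the bits on either side of it together (e.g. a multi-bit field).
def crossover(parent_a, parent_b, weights):
    point = random.choices(range(1, len(parent_a)), weights=weights, k=1)[0]
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

a, b = "11110000", "00001111"
weights = [1, 1, 1, 0, 0, 1, 1]   # crossover points 4 and 5 are cold spots (illustrative)
print(crossover(a, b, weights))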

If you have ever programmed genetic algorithms, you have been stunned by their effectiveness in creating good solutions. 'Good' doesn't mean precisely optimal, but pretty damn good anyway.

If you were a hard core creationist to start with, you still are. But now you know that evolutionary problem solving is powerful, broad, subtle, and effective - so much, that it's hard to believe it could ever have arisen by chance.

//wiredweird

4-0 out of 5 stars Not an Introductory book
I am learning the topic of Genetic Algorithms (GA) by myself for my PhD dissertation. Even though this book is written by John H. Holland, considered the father of Genetic Algorithms, it is not a basic or easy-reading book. The book does not contain any source code, and even though it contains some kind of pseudocode, it will not give you a clear idea about how to implement a GA. If you want an introductory book, maybe you should look for Melanie Mitchell's book "An Introduction to Genetic Algorithms", Fogel's book "Evolutionary Computation vol. 1", or Chambers' book "The Practical Handbook of Genetic Algorithms".
The way the author approaches the development of the framework is sometimes overwhelming, because he does not concentrate on one specific case or concept but mentions all the different possibilities almost at the same time. I think it is worthwhile to buy the book and keep it for an advanced understanding of the concepts involved in the study of Complex Adaptive Systems. My approach to learning GA will be to read the above-mentioned books and then study this book slowly and in detail to digest the huge amount of concepts and information it provides.
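For readers who, like this reviewer, want to see the bare mechanics before tackling Holland's framework, here is a minimal genetic-algorithm sketch (this reviewer's own illustration, not taken from the book): bit-string individuals, fitness-proportional selection, one-point crossover, and bit-flip mutation, applied to the toy problem of maximizing the number of ones.

import random

# Minimal genetic algorithm on the "count the ones" toy problem (illustrative only).
def run_ga(length=20, pop_size=30, generations=50, p_mut=0.01):
    fitness = lambda ind: sum(ind)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1e-9 for ind in pop]          # fitness-proportional selection
        new_pop = []
        while len(new_pop) < pop_size:
            pa, pb = random.choices(pop, weights=weights, k=2)
            point = random.randrange(1, length)                  # one-point crossover
            child = pa[:point] + pb[point:]
            child = [1 - bit if random.random() < p_mut else bit for bit in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(run_ga())   # typically close to the all-ones string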

4-0 out of 5 stars Genetic Algorithms Classic for Engineering
This book presents an inspirational synthesis from mathematics, computer science and systems theory addressing genetic algorithms and their role in intelligent engineering/business systems.

Topics include: background, a formal framework, illustrations (genetics, economics, game playing, searches, pattern recognition and statistical inference, control and function optimization, and the central nervous system), schemata, the optimal allocation of trials, reproductive plans and genetic operators, the robustness of genetic plans, adaptation of coding and representations, and an overview, interim and prospectus.

Inclusion of a disk of spreadsheet-based examples would have increased user-friendliness to the sometimes moderately-complex mathematics. Otherwise, this book is a well presented and useful classic for researchers and software vendors seeking to develop more innovative intelligent products. ... Read more


77. Game Development Essentials: Game Artificial Intelligence
by John B. Ahlquist, Jr., Jeannie Novak
Paperback: 320 Pages (2007-07-09)
list price: US$68.95 -- used & new: US$37.00
(price subject to change: see help)
Asin: 1418038571
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Written by experts with years of gaming industry experience developing today's most popular games, Game Development Essentials: Game Artificial Intelligence provides an engaging introduction to "real world" game artificial intelligence techniques. With a clear, step-by-step approach, the book begins by covering artificial intelligence techniques that are relevant to the work of today's developers. This technical detail is then expanded through descriptions of how these techniques are actually used in games, as well as the specific issues that arise when using them.With a straightforward writing style, this book offers a guide to game artificial intelligence that is clear, relevant, and updated to reflect the most current technology and trends in the industry. ... Read more

Customer Reviews (1)

4-0 out of 5 stars Interesting Read
This is one of three books that I purchased of the series. It starts off with an introduction of AI in games and is very well written. The book is structured very well and has nice images thrown in of various games that are being referred to.
As a bonus, the accompanying CD contains some of the code from the book. ... Read more


78. Artificial Intelligence (SIE): 3/e
by Dr. Elaine Rich
Paperback: 588 Pages (2010-01-13)
list price: US$35.00 -- used & new: US$27.76
(price subject to change: see help)
Asin: 0070678162
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
This book presents both the theoretical foundations of AI and an indication of the ways that current techniques can be used in application programs. With the revision, most of the content has been preserved as it is, and an effort has been made to add new topics that are in sync with recent developments in this field. ... Read more


79. Adaptive Business Intelligence
by Zbigniew Michalewicz, Martin Schmidt, Matthew Michalewicz, Constantin Chiriac
Paperback: 246 Pages (2010-11-30)
list price: US$59.95 -- used & new: US$48.31
(price subject to change: see help)
Asin: 3642069487
Average Customer Review: 4.5 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

Adaptive business intelligence systems combine prediction and optimization techniques to assist decision makers in complex, rapidly changing environments. These systems address fundamental questions: What is likely to happen in the future? What is the best course of action? Adaptive Business Intelligence explores elements of data mining, predictive modeling, forecasting, optimization, and adaptability. The book explains the application of numerous prediction and optimization techniques, and shows how these concepts can be used to develop adaptive systems. Coverage includes linear regression, time-series forecasting, decision trees and tables, artificial neural networks, genetic programming, fuzzy systems, genetic algorithms, simulated annealing, tabu search, ant systems, and agent-based modeling.

... Read more

Customer Reviews (2)

4-0 out of 5 stars Good introduction
Just right for someone with some technical knowledge who is new to this area. Very readable, simple but not dumbed-down.

5-0 out of 5 stars Great book for busy managers and students
This is a great book about a new topic called "Adaptive Business Intelligence". It is basically about modern Artificial Intelligence (AI) that is used to solve tough real-life optimization and prediction problems. The authors really know what they are talking about and have many years of business and teaching experience. They are also running a software company specializing in Adaptive Business Intelligence.

I was skeptical when I picked up this book and thought to myself that they can't possibly explain all those modern AI techniques without formulas. However, the book is very easy to read and really does cover modern AI techniques without getting into a lot of nitty gritty details (i.e., no formulas). They also have a detailed real-life example that they frequently refer to. The example is very good and I quickly saw how I could use Adaptive Business Intelligence.

The authors explain everything in a down-to-earth way, and on top of that the book is fun to read and reads well. I also liked that they include references to other books in case you want to dive into a particular topic.

All in all it's a great book that makes modern science easy to understand for busy people. Even non-technical managers like myself will have no problems understanding the different techniques and how they apply to their own business. I highly recommend this book. It's definitely worth reading!

PS: I lent the book to my sister, who studies IT in California, and she told me that it's a great introduction to modern AI. She actually never returned the book, since she kept it as a reference for one of her classes. ... Read more


80. Autonomy Oriented Computing: From Problem Solving to Complex Systems Modeling (Multiagent Systems, Artificial Societies, and Simulated Organizations)
by Jiming Liu, XiaoLong Jin, Kwok Ching Tsui
Paperback: 216 Pages (2010-11-02)
list price: US$99.00 -- used & new: US$99.00
(price subject to change: see help)
Asin: 1441954805
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

Autonomy Oriented Computing is a comprehensive reference for scientists, engineers, and other professionals concerned with this promising development in computer science. It can also be used as a text in graduate/undergraduate programs in a broad range of computer-related disciplines, including Robotics and Automation, Amorphous Computing, Image Processing, Programming Paradigms, Computational Biology, etc.

Part One describes the basic concepts and characteristics of an AOC system and enumerates the critical design and engineering issues faced in AOC system development. Part Two gives detailed analyses of methodologies and case studies to evaluate AOC used in problem solving and complex system modeling. The final chapter outlines possibilities for future research and development.

Numerous illustrative examples, experimental case studies, and exercises at the end of each chapter of Autonomy Oriented Computing help particularize and consolidate the methodologies and theories presented.

... Read more
