Diploma thesis (Diplomarbeit), 2002, 73 pages
To mum and dad
List of Tables
Table 1: Real-world agents and their application in transaction processes
Table 2: SWOT Analysis: Strengths and their internal impact
Table 3: SWOT Analysis: Weaknesses
Table 4: SWOT Analysis: Opportunities
Table 5: SWOT Analysis: Threats
List of Figures
Figure 1: Agent Interaction Scenario
Figure 2: Real-world agents and their position in the agent interaction scenarios
List of Abbreviations
ACL  Agent Communication Language
ACM  Automated Collaborative Filtering
AI  Artificial Intelligence
B2B  Business-to-Business
B2C  Business-to-Consumer
C2C  Consumer-to-Consumer
CAS  Computer Aided Selling
CBB  Consumer Buyer Behaviour
CRM  Customer Relationship Management
ECCMA  Electronic Commerce Code Management Association
FAQ  Frequently Asked Questions
FIPA  Foundation for Intelligent Physical Agents
IT  Information Technology
KIF  Knowledge Interchange Format
KQML  Knowledge Query and Manipulation Language
ODB  Online Dynamic Bidding
OPS  Open Profiling Standard
SFA  Sales Force Automation
SRM  Supplier Relationship Management
SWOT  Strengths, Weaknesses, Opportunities & Threats
UCEC  Universal Content Extended Classification
UN/SPSC  Universal Standard Products and Services Classification
W3C  World Wide Web Consortium
WWW  World Wide Web
Artificial intelligence has been applied to many areas since its official birth in 1956, but most of these applications ended in great disappointment because the benefits they yielded were very low (Andriole & Hall, 2000, p. 17). For this reason the initially vast interest in applying this relatively young technology to business cooled down in the late seventies, when scientists recognized that the intelligent systems of the day were not yet plug-and-play solutions and hence not mature enough to fully meet the business needs and requirements of that time.
However, the limited commercial applicability of artificial intelligence in the past has to be reconsidered today: significant progress in artificial intelligence research and the growth of electronic commerce conducted over the World Wide Web have opened up new opportunities for business applications of artificial intelligence. Nowadays horizontal and vertical electronic commerce is significantly driven by intelligent applications. Their employment in electronic businesses “may well generate huge returns on investments, providing a technology-based response to increasing competition, the volatility of business models, and the pace of technology change” (Andriole & Hall, 2000, p. 18). Despite the widespread assumption that artificial intelligence will have a major impact on Internet-related businesses today and especially in the years to come, it is uncertain to what extent it actually performs and will perform that way.
The purpose of this thesis is to analyse, assess and evaluate the potential of commercial applications of artificial intelligence in electronic businesses. The main research question of this paper is therefore whether artificial intelligence is reasonably applicable in Internet-related businesses, first in terms of effectiveness and second in terms of efficiency. In the assessment, the application of artificial intelligence in electronic businesses is represented by the employment of intelligent agents.
In line with the main research question stated above, the paper provides a thorough discussion of the economic impact of the most common and relevant application types of intelligent agents on electronic commerce environments. In addition, the underlying technologies driving intelligent agents are analysed with respect to artificial intelligence techniques and methods as well as current standardisation efforts.
The assessment itself consists of theoretical and practical instruments that measure the commercial applicability of artificial intelligence in electronic businesses. First, the effectiveness of employing intelligent agents is measured with a cost-benefit analysis, to establish whether it is the right thing for an electronic business to do. Second, the efficiency of such an application is assessed with a detailed SWOT analysis, in order to determine whether the employed agents do their job right. Finally, the results of a series of expert interviews with leading developers in the field of intelligent systems technology are integrated into the assessment. Expert interviews are an appropriate research method for this assessment as they investigate the phenomenon within its real-life context. Furthermore, they extend experience and add strength to what is already known from previous research.
In a final summary and evaluation the answer to the research question is provided. By the end of the thesis, the reader should have gained a solid comprehension of the linkages between electronic commerce and intelligent agents and should understand the resulting implications of the technology's application for electronic businesses.
The thesis is divided into four core parts. First, the need for and relevance of intelligent solutions in electronic marketplaces is established in section two. This includes identifying today's challenges for mediators in electronic commerce environments.

The need for intelligent solutions in Internet commerce can be met by solutions that stem from artificial intelligence research, such as intelligent agents. The third section therefore provides thorough background knowledge about the meaning of intelligence and the way it is reproduced in computer programs. Part of this section is also the introduction of intelligent agents: software programs that embed various levels of artificial intelligence to solve problems in commercial environments on the Internet. Their role as mediators in distinct business scenarios on the Web is presented by highlighting important prototypical and real-world examples in these arenas.

Having identified the need for intelligent systems and the potential of employing intelligent agents in electronic businesses in the second and third sections, the fourth and fifth sections attempt to answer the question whether these intelligent agents are really able to cope effectively and efficiently with the identified problems and business needs. The fourth section analyses the five most common application areas of intelligent agents in electronic commerce environments and examines how they technologically cope with the challenges in these areas. The fifth section attempts to establish whether or not it is profitable for electronic businesses to employ intelligent agents. First, a cost-benefit analysis evaluates the effectiveness of commercial applications of intelligent agents. Then a detailed SWOT analysis assesses the efficiency of intelligent agent technology in electronic businesses. The assessment is complemented by a series of interviews conducted with experts in the field of intelligent agents.

Finally, the thesis closes with a concise summary in which the inferences drawn throughout the entire paper are aggregated and the main research question is answered.
As Schulten puts it, “E-commerce is about electronically doing business - that is, putting the business process flows of companies on web-enabled technology” (Fensel, Ding et al., 2001, p. 12). However, web-enabled technology alone will not be able to cope with the emerging challenges in electronic commerce environments.
This introductory section discusses the need for and the relevance of intelligent solutions in commercial electronic environments. The development of the market and business characteristics in these environments is highlighted, pointing out the potential areas where and how intelligent solutions might be applied.
Our society has moved into an information age in which information is considered a useful tool for solving many problems. In recent years the amount of information, coming from an ever-increasing number of sources all over the world, has multiplied several times over. But these masses of information are largely unstructured, which makes it more and more difficult for human beings to get a clear picture of all information suppliers and to find and take into account the relevant information they need. Hermans states that the Internet has changed from a supplier-driven to a demand-driven information market, in which the supply of information has become less important than the demand for it. On the one hand, the gigantic pool of information might seem to meet the demand for information sufficiently; on the other hand, the amount of information is so vast and so unstructured that users sometimes cannot find relevant information at all. In this context Hermans speaks of “information overkill” (Hermans, 1996, pp. 4-5).
Conventional search methods attempt to reduce this problem. The most common solutions are search engines: huge databases with indexes that allow users to check whether certain information is contained in the database. If matches are found, the database tells users where the requested information can be found on the Internet. But these methods fail for a variety of reasons. First of all, the absence of any central supervision of the Internet's growth and development leads to poor information transparency, giving search engines a hard time; information that existed in the past can suddenly disappear (and vice versa) or move to a different, unknown location on the Web. Another weakness is that information on the Internet is rather heterogeneous: it appears in different formats, making it even harder to automatically search for and retrieve relevant information (Hermans, 1996, p. 5).
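The index lookup just described can be made concrete with a minimal sketch. The documents, URLs and query below are invented for illustration; real search engines add crawling, ranking and vastly larger vocabularies:

    # Minimal sketch of a search-engine index: an inverted index maps
    # each term to the locations (here: URLs) where it occurs.
    from collections import defaultdict

    documents = {
        "http://example.org/a": "intelligent agents in electronic commerce",
        "http://example.org/b": "agents search and retrieve information",
    }

    index = defaultdict(set)
    for url, text in documents.items():
        for term in text.split():
            index[term].add(url)

    def lookup(query):
        # Return only the locations that contain every query term.
        hits = [index.get(term, set()) for term in query.split()]
        return set.intersection(*hits) if hits else set()

    print(lookup("agents information"))  # {'http://example.org/b'}

The sketch also shows the weaknesses discussed above: the index knows nothing about synonyms or relevance, and a document that moves to a new URL silently disappears from the results.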
Today a solution is required that is capable of more intelligent information search and retrieval, perhaps through the use of thesauri or similar concepts, in order to minimize the retrieval of irrelevant and noisy information. Such a solution should have some sort of knowledge base in which information about the multiple information sources on the Internet is stored and updated frequently, and some problem-solving abilities to perform complex tasks more efficiently and faster. Users should only need to consider what information they seek; the questions of where and how this particular information can be retrieved should not be the users' concern. As information search and retrieval can be quite a time-consuming job, a solution should be capable of working in conjunction with other information systems and software applications to obtain the information requested by the user in a time- and cost-efficient manner. Not only context sensitivity, which is required to distinguish between relevant and irrelevant information on the Internet, but also adaptation to and consideration of the user's individual needs and wishes becomes highly significant when looking for relevant information. Furthermore, a solution should be intelligent enough to learn from the experience gained from performed tasks and to react upon results (Hermans, 1996, pp. 6-8). These requirements cannot be met by regular search engines or comparable alternatives, as they all lack the intelligent abilities necessary to cope with the new challenges of today's information world. A solution is required quickly; otherwise the consequence is a loss of control over the Internet's growth and development, resulting in the Internet's limited usefulness.

At this stage Hermans points out that scientists intend to draw up a new structure for the Internet that makes it more convenient and easier to use. A very promising idea is a three-layer concept involving users, suppliers and intermediaries (in contrast to the two-layer structures that currently dominate the Internet). In this concept intermediaries gain major importance, as they are supposed to address and even resolve the mentioned weaknesses of the Internet with the use of intelligent technologies (Hermans, 1996, pp. 8-9). The role of these middle layers in electronic markets therefore has to be analysed; that is the subject of the following section.
In order to understand the principles of intermediaries it is important to compare their different roles in traditional and electronic markets.
Basically, a market is a place where buyers and sellers cross paths. In its original conception, a market is a place where suppliers and demanders come together at a particular location and a certain time in order to trade (Achleitner & Thommen, 1998, p. 148). According to the neoclassical paradigm, the price is the factor that governs the economic allocation of limited resources, thus establishing an equilibrium between supply and demand (Felderer & Homburg, 1999, p. 51). In this paradigm the market is characterised by complete market transparency, the homogeneity of goods, immediate reaction to changes in market conditions, the principle of maximising one's utility, and the absence of market entry barriers and user preferences (Achleitner & Thommen, 1998, pp. 239-240). This is an idealistic assumption that is far removed from reality. In contrast to these perfect markets, imperfect markets are characterised by the presence of transaction costs, which are incurred at several stages of the trading process: the search for suitable products (product brokering), the search for appropriate suppliers (merchant brokering), the settlement of the terms and conditions of the trade (negotiation), and the execution of contractual agreements, which incurs additional costs for monitoring and controlling efforts (Jüttner, 1996).
Compared with conventional markets in the real world, electronic markets come closer to the ideal of perfect markets, because the employment of innovative Internet and communication technologies allows market participants to reduce these transaction costs (Pauk, 1997). Schmid therefore defines electronic markets as markets supported by technological mechanisms that facilitate the processes involved in all transaction phases (Schmid, 1993, pp. 465-480). The reduction of transaction costs leads to significant advantages for both buyer and seller in electronic markets (Malone, Yates & Benjamin, 1987, p. 488). The buyer benefits from reduced coordination costs, a higher degree of buyer-seller interaction due to more cost-efficient ways to communicate, an increase in product and service quality due to a wider selection of potential vendors, the increased customer focus of businesses due to greater competition, a wider variety of products and services, and lower prices (owing to the smaller profit margins that follow from increased competition). On the seller side, businesses can use the Internet as an additional distribution channel and establish direct relationships with customers without requiring intermediaries in their traditional role any more. Furthermore, businesses in electronic marketplaces are able to segment their customers more efficiently, to increase their product and service diversity, to satisfy the information needs of their customers at a higher level, and to bypass price competition by specialising in market niches. The communication costs for businesses are significantly reduced as well. Since market entry barriers for electronic markets are rather low, entering these markets is comparatively cheap and easy. As stated above, electronic markets are highly attractive due to the absence of dependencies on space and time, which allows market participants to react faster to market changes (Jüttner, 1996; Pauk, 1997; Schlueter, 1997).

In traditional markets the transaction phases mentioned above can be supported or even entirely executed by third parties; in that case these parties are called intermediaries. Their functions comprise the aggregation of supply or demand (for example, wholesalers who pool the demand of many customers to achieve better price conditions) and protection against opportunistic market participants (since opportunistic participants also want to benefit from the advantages of intermediaries in the future, they renounce their opportunistic behaviour) as well as against difficulties that can occur during business transactions (intermediaries are flexible and can provide alternative products, services or even suppliers on the fly). Further functions of intermediaries include the facilitation of communication between buyer and seller (facilitation of the product and information brokering phases, as intermediaries know more about the particular local market than the individual end-customer) and the matching of buyers and sellers (the collaboration between intermediaries and various buyers and sellers in marketplaces enables intermediaries to facilitate merchant brokering). However, while traditional intermediaries reduce transaction costs on the one hand, they increase the costs of products and services for end-customers on the other (because of the value they add for end-customers). As a result, many selling businesses have attempted to approach end-customers directly with their products and services, thus bypassing intermediaries. Profitable applications of such direct-channel concepts, however, depend largely on the particular type of products, services and customers. Most often the contact with and knowledge about the end-customers is lacking, and renouncing intermediaries is not considered wise (Pauk, 1997).
The characteristics of electronic markets, as stated above, enable electronic businesses to establish such direct connections to end-customers easily. According to Schoder and Müller, it therefore becomes more and more cost-efficient for businesses to internalise formerly outsourced services, perform vertical integration and thus eliminate intermediate stages in business transactions, which enables them to offer their products directly and more cheaply to end-customers (Sarkar, Butler & Steinfield, 1995; Schoder & Müller, 1999, pp. 2-4). In contrast to this so-called disintermediation hypothesis, which is based on the declining importance of intermediaries in electronic markets, the intermediation hypothesis asserts the continuing need for intermediaries in electronic marketplaces (Schoder & Müller, 1999, p. 4; Wigand & Benjamin, 1995). This hypothesis puts more weight on the need to deal with the “information overload” on the Internet and the need for a stronger Taylorism and coordination, which are provided by the services of intermediaries (Schoder & Müller, 1999, pp. 3-4). Indeed, the Internet is characterised by the “unavailability of structured and relevant information […]” (Terpsidis, Moukas, Pergioudakis, Doukidis & Maes, 1997, p. 2) about products, suppliers, etc., which increases a marketplace's complexity and decreases market transparency. While low transaction costs in electronic marketplaces make the intermediaries' traditional role as aggregators of supply and demand less important, they gain new roles in these marketplaces, for example as providers of greater market transparency for customers (in the product or information brokering stages). Communication among trading partners and customers has become self-evident in electronic marketplaces, eliminating a further traditional role of intermediaries. However, Pauk (1997) argues that because Internet-based marketplaces transfer larger quantities of information among their actors, further innovative roles for intermediaries arise. Even in merchant brokering there are new application areas for intermediaries in providing additional relevant information about potential trading partners (Pauk, 1997).
Bailey and Bakos conclude that although traditional intermediaries may disappear in electronic markets, new roles are still emerging for them (Bailey & Bakos, 1997, p. 13). But these roles are rather difficult to perform and require intelligent technologies “that can sense, understand and act in the specific context in which e-business actors work” (Akkermans, 2001, p. 8). As the demand for information gains more and more significance, electronic business today has to be viewed from a different perspective: the roles of intermediaries change in electronic marketplaces, and so do business transactions. Generally, business transactions are based on a series of subsequent stages of buyer-seller interaction. These interaction stages, currently burdened with inefficiencies as well as transaction and opportunity costs, are the places where intelligent solutions are needed. The following section therefore analyses the different stages of business transactions in electronic marketplaces from the buyer's and the seller's perspectives.
One of the trends arising from the rise of the Internet is the migration from mass production to mass customization, and hence one-to-one marketing. Customers' demand for information, products or services, and their active involvement in transaction processes, is becoming more and more important for the success of an electronic business. The buyer's significance in transaction processes has been analysed since the sixties and gave rise to the marketing-based Consumer Buyer Behaviour (CBB) model, which describes the actions and decisions involved in the process of buying goods or services. To suit the characteristics of electronic marketplaces it has been augmented accordingly. However, the adapted model suffers from a couple of limitations, such as its sole focus on retail business (B2C), excluding concepts such as B2B or C2C that emerged with the Internet. In addition, the model does not cover all possible buyer behaviours and has difficulties taking all issues related to electronic business into account. Nonetheless, Maes, Guttman and Moukas believe that it is still powerful enough to support the understanding of the basic stages of business transactions in electronic commerce (Maes, Guttman & Moukas, 1998, p. 22).
Concerning the traditional Consumer Buyer Behaviour model, a variety of theories and models have been proposed in the past. The common ideas of these models, however, have been extracted and put into a framework that suits the characteristics of today's Internet commerce (Terpsidis, Moukas, Pergioudakis, Doukidis & Maes, 1997, pp. 3-4; Maes, Guttman & Moukas, 1998, p. 22; Maes, Guttman & Moukas, 1999, pp. 81-91).
This framework consists of six stages, described in the following paragraphs:

1. The first stage is the need identification or problem recognition phase, in which the consumer becomes aware of an unmet need. This awareness can be stimulated by the seller through product information.

2. The second stage comprises the information brokering or search process, in which the consumer retrieves information to determine which product to buy. The gathered product alternatives are evaluated against the consumer's criteria, and an evoked set of products results from the evaluation.

3. In the third stage this set is combined with merchant-specific information to determine from which seller to buy (merchant brokering). The merchant is selected according to consumer-specific criteria such as price, delivery time, warranty, reputation and many more. These criteria are essentially based on the four Ps of marketing (product, place, price and promotion).

4. The purchase decision is made in the following stage, in which the terms of the transaction are negotiated with the potential seller. In many markets negotiation is an integral part of the product and merchant brokering process.

5. The actual purchase and delivery follow from the negotiation. However, even once the purchase decision has been made, the actual purchase may take place only after some time.

6. The final stage of the CBB model consists of customer and product service, including the evaluation of the consumer's overall satisfaction or dissatisfaction with the buying process and the soundness of his or her purchase decision.

Even though these stages may overlap and are only approximations of the real-world complexity of buyer behaviour, the CBB model is still strong enough to be applied in various fields of business (Terpsidis, Moukas, Pergioudakis, Doukidis & Maes, 1997, pp. 3-4; Maes, Guttman & Moukas, 1998, p. 22).
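Read purely as a process model, the six stages can also be encoded as a simple data structure. The following sketch is only an illustration of the sequence and its ordering, not part of the cited framework; the identifier names are chosen freely:

    from enum import Enum, auto

    # The six CBB stages, in the order summarised above.
    class CBBStage(Enum):
        NEED_IDENTIFICATION = auto()
        PRODUCT_BROKERING = auto()
        MERCHANT_BROKERING = auto()
        NEGOTIATION = auto()
        PURCHASE_AND_DELIVERY = auto()
        SERVICE_AND_EVALUATION = auto()

    # Walking one transaction through the stages in sequence.
    for stage in CBBStage:
        print(stage.name)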
The first three introductory sections have pointed out that mediators in electronic marketplaces differ significantly from intermediaries in traditional markets. They face different challenges and must therefore perform distinct tasks. One of the most crucial challenges in this context is the increasing quantity of information. Where conventional solutions fail to cope with these challenges efficiently, the need for more intelligent solutions emerges. In other words, intermediaries require some sort of intelligence to ensure effective and efficient business transactions and to shield actors in electronic marketplaces from the Web's complexity.
As stated in the previous sections, the new roles of mediators in electronic commerce require intelligent mechanisms to cope successfully with the challenges of today's electronic marketplaces. Solutions for reproducing intelligence in electronic environments stem from artificial intelligence research. This part of the thesis provides a compact introduction to the concept of artificial intelligence, emphasising its meaning in context, its characteristics and the aims it pursues. The most common application of artificial intelligence in electronic commerce, the intelligent agent, is introduced afterwards.
Artificial Intelligence (AI) is a comprehensive and multi-disciplinary field. Its roots clearly lie in computer science, but it also has close relationships to psychology, philosophy, logic, linguistics, engineering and even neurophysiology (IBM Research, 2001). There are a variety of definitions of and explanatory approaches to artificial intelligence, but there is still no common agreement on what exactly the term comprises. The reason is that there is also no real consensus about the meaning of 'intelligence' itself (Moursund, 1999; Neisser et al., 1996). The most famous definition stems from Minsky (1968), who stated: “[...] artificial intelligence is the science of making machines do things that would require intelligence if done by men” (Decker & Hirshfield, 1998, p. 299, cited after Minsky, 1968, p. 23).
As already stated above, the debates about artificial intelligence stem from the attempt to define the term intelligence. Many theories and approaches have been proposed by renowned researchers such as Gardner, Perkins and Sternberg (Moursund, 1999; Neisser et al., 1996).
Howard Gardner's theory of multiple intelligences was proposed in his book 'Frames of Mind'. He hypothesised eight components of intelligence that are aligned with different academic disciplines: interpersonal, intrapersonal, naturalistic, musical, bodily-kinaesthetic, linguistic, logical-mathematical, and spatial (Moursund, 1999; Neisser et al., 1996).
Similar to Gardner's theory is that of Robert Sternberg. In contrast to Gardner, he does not relate the components of intelligence to academic disciplines; instead, he breaks intelligence down into three fundamental factors: practical intelligence, experiential intelligence and componential intelligence. Practical intelligence concerns the ability to adapt to and shape one's environment, whereas experiential intelligence addresses one's ability to deal with novel situations, to automate ways of dealing with novel situations so that they can be handled more easily in the future, and to think in novel ways. Componential intelligence comprises the ability to process information effectively. Sternberg believes that human intelligence can be strengthened through study and practice (Moursund, 1999; Neisser et al., 1996).
The theory of David Perkins is similar to Sternberg's, although his research emphasises support for Gardner's theory of multiple intelligences. He claims that human intelligence consists of three major dimensions: neural intelligence, experiential intelligence and reflective intelligence. Neural intelligence concerns the efficiency and precision of one's neurological system; experiential intelligence refers to one's accumulated knowledge and experience (one's expertise); and reflective intelligence is concerned with one's strategies for problem solving, learning and approaching intellectually challenging tasks (Moursund, 1999).
Although these theories contain some plausible and traceable thoughts, none of them has ever gained universal acceptance or answered all open questions; we simply know too little about the human brain. However, it is possible to extract some facts that all theories have in common. Intelligence can then be seen as the ability to learn, to pose problems and to solve them. The ability to learn is achieved by combining experience, education and training. Posing problems includes recognising problem situations and transforming them into more clearly and precisely defined problems, whereas solving problems comprises accomplishing tasks and carrying out complex projects. Intelligence can thus be described as a combination of diverse abilities (Moursund, 1999).
3.1.2 The ‘Artificial’ Intelligence
The science of Artificial Intelligence (AI) can be described as the making of intelligent machines and computer programs that embed processes such as perceiving, learning, reasoning, generalising and discovering meaning in context (IBM Research, 2001). In other words, it attempts to “re-create in computer software the processes humans use to solve problems” (Andriole & Hall, 2000, p. 14). According to John McCarthy, professor at Stanford University and known as the father of artificial intelligence, the strong relationship of the term “intelligence” to our human intelligence makes it hard to define in general which kinds of computational procedures can be called intelligent (McCarthy, 2001). However, artificial intelligence is not limited to the attempt to reproduce or imitate human intelligence; it also comprises the application of seemingly intelligent methods that need not have been observed in human beings before (McCarthy, 2001). One can therefore distinguish between two distinct approaches to recreating human problem-solving processes in computer software: the bottom-up and the top-down approach (McCarthy, 2001; Encyclopædia Britannica, no date). In 1932 Edward Thorndike and in 1949 Donald Hebb suggested that learning capabilities derive from strengthening certain patterns of neural activity by increasing the neuron firing between the associated connections. This was the origin of the connectionist or bottom-up approach: artificial intelligence is achieved from the bottom up by building electronic replicas of the human brain's complex network of neurons. The idea of rebuilding the human brain gave birth to neural network theory, which produced some impressive results in mimicking human thinking processes and recognising letters. The connectionist approach has the advantage of being able to model lower-level human functions such as image recognition, motor control and learning capabilities. Humans, too, learn through a bottom-up approach, since we all start with nothing when we are born; through our own intellect and learning we learn to walk, to speak a language and much more. However, this method is often hard to employ in the field of computing (Encyclopædia Britannica, no date).
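The Hebbian principle underlying the bottom-up approach, namely that connections between units which are repeatedly active together are strengthened, can be stated in a few lines. The following sketch is only a toy illustration; the learning rate and the activity patterns are assumptions chosen for demonstration:

    # Minimal sketch of Hebbian learning: the weight between two units
    # grows in proportion to their joint activity ("units that fire
    # together, wire together").
    def hebbian_update(weights, activities, rate=0.1):
        n = len(activities)
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += rate * activities[i] * activities[j]
        return weights

    n = 3
    weights = [[0.0] * n for _ in range(n)]
    # Units 0 and 1 repeatedly fire together; unit 2 stays silent.
    for _ in range(10):
        weights = hebbian_update(weights, [1.0, 1.0, 0.0])
    print(round(weights[0][1], 2))  # 1.0: the co-active pair is now strongly linked

Nothing here is pre-programmed about which units belong together; the structure emerges from repeated experience, which is exactly the point of the connectionist approach.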
In 1957, two advocates of symbolic artificial intelligence, Allen Newell and Herbert Simon, summed up the top-down approach. They claimed that processing structures of symbols is sufficient to produce artificial intelligence in a computer and, moreover, that human intelligence is the result of the same type of symbolic manipulation. In contrast to the bottom-up approach, symbolic artificial intelligence attempts to mimic the brain's behaviour with computer programs, independently of the brain's biological structure. In this method all necessary knowledge is already present for the program to use, given that it was pre-programmed in advance. This makes the method quite powerful for relatively high-level tasks such as language processing (Generation 5: The Artificial Intelligence Repository, no date; Encyclopædia Britannica, no date).
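The top-down idea, knowledge given in advance as symbols and manipulated by explicit rules, can likewise be illustrated with a toy forward-chaining sketch; the facts and rules below are invented for illustration:

    # Minimal sketch of symbolic AI: all knowledge is pre-programmed as
    # facts and if-then rules over symbols; inference is forward chaining.
    facts = {"customer(anna)", "ordered(anna, book)"}
    rules = [
        ("ordered(anna, book)", "buyer(anna)"),      # ordering makes one a buyer
        ("buyer(anna)", "send_confirmation(anna)"),  # buyers get a confirmation
    ]

    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # the derived symbols now include send_confirmation(anna)

In contrast to the Hebbian sketch above, this program learns nothing: it can only derive what its pre-programmed symbols and rules already entail, which is precisely the strength and the limitation of the top-down approach.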
As demonstrated above, both approaches have their advantages and disadvantages, and each fails where the other excels. The bottom-up approach fails to cope with the complexity of real neuron interactions and has not yet achieved the replication of the nervous system of even the simplest living things. Top-down approaches, in turn, work only in simplified environments and fail when confronted with the real world. Since one can see similarities to the human brain in both approaches, both are pursued today (Generation 5: The Artificial Intelligence Repository, no date; Encyclopædia Britannica, no date).

Andriole and Hall point out that artificial intelligence, if accurately targeted, can be considered one of the strongest methods of problem solving. Many problem types suit the techniques and tools of artificial intelligence, but for other problems AI-based techniques may be the worst approach. The main differences between conventional and AI-based systems are the following: while conventional systems process data within certain boundaries, AI-based systems apply their knowledge to new, unspecified problems within particular domains. Moreover, where conventional systems are rather passive, AI-based systems interact actively with the user. In addition, such systems can draw inferences, implement rules of thumb and solve complex issues, whereas conventional systems cannot infer beyond certain pre-programmed limits. One of the greatest strengths of artificial intelligence is that it can be applied to a variety of areas: taking the example from section 2.1 concerning the masses of information on the Internet, AI-based search routines can structure information as knowledge and apply it to various problem-solving tasks. Moreover, Andriole and Hall emphasise that it is important to distinguish between the tools and techniques of artificial intelligence and the areas it targets. Rules, semantic and inference networks, natural language processing and special-purpose programming languages count among such tools; some of them will be referred to later in this thesis. Target areas are the applications of AI-based tools and techniques to a particular domain, for example expert systems, decision support systems, robotics, intelligent agents, data-mining applications or intelligent content management applications (Andriole & Hall, 2000, pp. 14-16).
Today everybody is excited about the development and application potential of artificial intelligence, but not long ago AI was regarded as a great disappointment. Especially in the late seventies, expectations concerning the short-term potential of artificial intelligence were much too high, leading to immense frustration when only small benefits could be reaped (Andriole & Hall, 2000, p. 17). According to Andriole and Hall, “the real pay-off for AI technologies lies in the extent to which they can link to other technologies and applications” (Andriole & Hall, 2000, p. 18).
Research in artificial intelligence concentrates mainly on implementing the following components of intelligence in computer programs: learning, reasoning and problem solving (Encyclopædia Britannica, no date).

− Learning: Learning techniques are based either on trial-and-error principles, on rote learning (memorising individual items), or on generalisation (applying past experience to new, analogous situations). The first two techniques are easy to realise technically and are already implemented in a wide range of applications, whereas implementing the ability to generalise is a much harder challenge (Encyclopædia Britannica, no date).

− Reasoning: Reasoning means drawing either inductive or deductive inferences. As deductive inferences are common in mathematics and logic, there has already been considerable success in implementing deductive techniques in computers. However, one of the hardest problems of artificial intelligence is to draw the inferences relevant to a particular situation or task (Encyclopædia Britannica, no date).

− Problem solving: The ability to solve problems describes the systematic search through a series of possible actions in order to reach a predefined goal or solution. Methods in this field are divided into special-purpose and general-purpose methods. A special-purpose method is made solely for one problem, whereas general-purpose methods are applicable to many types of problems. An example of a general-purpose method is 'means-end analysis', the incremental reduction of the difference between the current state and the final goal: the program selects its actions from a list of means until the goal is reached (a minimal sketch of this idea appears at the end of this subsection). Such methods have helped greatly in devising mathematical proofs and even in finding winning sequences of moves in board games (Encyclopædia Britannica, no date).

There have been a series of discussions and disputes about the still open questions of artificial intelligence, such as what intelligence constitutes or when a program may be called intelligent. Dreyfus raised a famous sceptical objection, claiming that computers can never become intelligent unless they fully comprehend common sense; common sense rests mainly on know-how and can only be learned through experience. By analogy, the knowledge required to learn to ride a bike cannot be gained from a book; therefore, he argued, computers can never become intelligent. It is hard to contradict his opinion, because in some points he is right: artificial intelligence really needs common sense, and common sense needs know-how. Most defenders of artificial intelligence agree with Dreyfus to the extent that creating intelligent computers will not be impossible but will be difficult (Navega, 1998). These arguments demonstrate that artificial intelligence is fraught with many philosophical questions, as even much of our own intelligence is still unknown. However, even if computers are one day shown to possess real intelligence, they will arguably never have consciousness.
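The means-end analysis described under problem solving above can be sketched in a few lines: at every step the program chooses, from its list of means, the action that most reduces the difference between the current state and the goal. The numeric state and the two operators below are assumptions made purely for illustration, and this greedy, hill-climbing selection can of course get stuck on harder problems:

    # Minimal sketch of means-end analysis: repeatedly apply the mean
    # that brings the current state closest to the goal.
    def means_end(state, goal, means):
        plan = []
        while state != goal:
            # Pick the mean whose result is closest to the goal.
            name, op = min(means, key=lambda m: abs(goal - m[1](state)))
            if abs(goal - op(state)) >= abs(goal - state):
                break  # no mean reduces the difference: give up
            state = op(state)
            plan.append(name)
        return plan, state

    means = [("add5", lambda s: s + 5), ("sub1", lambda s: s - 1)]
    print(means_end(0, 13, means))
    # (['add5', 'add5', 'add5', 'sub1', 'sub1'], 13)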
Artificial intelligence has been applied to electronic commerce in multiple ways. Software agents, decision support systems, expert system applications, fuzzy logic, neural, semantic and Bayesian networks, linguistic ontologies, evolutionary programming, natural language interfaces, adaptive user interfaces, data mining and virtual reality are already embedded, or about to be embedded, in electronic commerce environments (Finin & Grosof, 1999, pp. 1-133). However, most of these technologies still have a rather prototypical status, and only little experience and knowledge has been gained about their applicability in this field. This thesis therefore addresses only the most common and, to date, most successful application of artificial intelligence in electronic commerce: the software agent.
A software agent is a piece of software that acts autonomously on behalf of a principal (Caglayan & Harrison, 1998, p. 9). There is a broad range of definitions of what software agents are; despite many controversies, however, there are some commonly agreed opinions about the characteristics of agents (Clement & Runte, 1999, pp. 2-4; Franklin & Graesser, 1996, pp. 21-35; Caglayan & Harrison, 1998, p. 10; Papazoglou, 2001, pp. 72-73):

− Autonomy: Software agents must be autonomous, which means that they can perform their tasks without the intervention of the principal or other actors in the agent's environment. Software agents must therefore be capable of providing their services to their principals without any explicit indication of the required procedures.

− Goal orientation: Software agents eagerly pursue their given or self-defined aims. They devise their own approaches to finding solutions.

− Communication: Software agents require interfaces in order to be able to communicate. An input interface to the principal (either a human or another agent) is needed for receiving data, parameters and specifications of the task to be performed; an output interface is needed to transfer the results to the principal. A common communication protocol is, for instance, KQML (Knowledge Query and Manipulation Language, see 4.2.2); an illustrative message is sketched below.

− Learning aptitude: As described further below in section 3.2.2, the ability of software agents to learn comprises the ability to draw inferences (Bean & Segev, 1997, pp. 7-9). Section 3.1.2 discussed two approaches (top-down and bottom-up) that attempt to achieve learning capabilities in machines.

− Flexibility: The flexibility of software agents requires that their range of actions is not hard-coded and pre-programmed. Flexibility also includes the ability to react to changes in the agent's local environment.

− Pro-activeness: Software agents must be capable of initiating actions on their own.

− Rationality: Agents follow strict rules. This is often an advantage, as agents do not have emotions that could interfere with their actions.

− Coordination: Software agents can coordinate interdependent activities.

− Cooperation: Software agents are capable of collaborative behaviour. They can interact with each other if it helps them complete their tasks. Cooperation is often helpful in reducing the complexity of problems, for instance by sharing resources.

− Mobility: Mobile software agents copy themselves onto computers within a network in order to perform their tasks in a local environment. They can thus reduce network utilisation (see 3.2.3).

The characteristics of software agents presented above are linked to their purpose. The next section therefore describes how software agents actually work.
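To make the communication characteristic above concrete before moving on: a KQML message consists of a performative (such as ask-one) with parameters like :sender, :receiver, :language, :ontology and :content. The sketch below merely formats such a message as a string; the agent names, the ontology label and the query content are invented for illustration:

    # Illustrative KQML-style message, such as one agent might send
    # another during merchant brokering.
    def kqml(performative, **params):
        fields = " ".join(f":{key.replace('_', '-')} {value}"
                          for key, value in params.items())
        return f"({performative} {fields})"

    msg = kqml("ask-one",
               sender="buyer-agent",
               receiver="seller-agent",
               language="KIF",
               ontology="catalogue",
               content="(price widget-42 ?p)")
    print(msg)
    # (ask-one :sender buyer-agent :receiver seller-agent
    #  :language KIF :ontology catalogue :content (price widget-42 ?p))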
Although each software agent has its own unique way of functioning, Caglayan and Harrison composed a model that explains how agents generally work, emphasising the different dimensions of intelligence embedded in these systems: knowledge, thinking/reasoning and learning (Caglayan & Harrison, 1998, p. 156; Feldman & Yu, 1999, pp. 45-57).

− Knowledge: Knowledge in software agents is represented by information about a domain, such as product descriptions or user preferences, and by rules that can be either 'if-then' rules or complex neural networks. Knowledge is stored in a knowledge base that comprises not only static knowledge that has been pre-programmed but also dynamic knowledge acquired through learning or through interaction with the agent's environment (Feldman & Yu, 1999, pp. 45-57).

− Thinking: The utilisation of knowledge requires the ability to 'think'. In order to think, agents must be able to perceive their environment with sensors and to combine the perceived events with the knowledge they possess. Establishing relationships and linkages between perceived events and knowledge enables the agent to draw inferences, from which actions can be initiated autonomously (Feldman & Yu, 1999, pp. 45-57).

− Learning: Software agents augment their knowledge base through learning in order to increase their level of intelligence: with more information and rules available, better inferences can be drawn, resulting in better decisions and results. Learning can be considered a change in behaviour resulting from experience. Software agents can therefore learn from the experience gained through the linkages created between perceived events and knowledge and the resulting inferences. Furthermore, their learning is driven by interaction with their environment, where agents learn by adding or changing rules or information (Feldman & Yu, 1999, pp. 45-57).
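A minimal sketch can tie these three dimensions together, under the simplifying assumption that the knowledge base consists of plain if-then rules mapping perceived events to actions; the class, event and action names are invented for illustration:

    # Minimal sketch of the knowledge/thinking/learning model described
    # above, with if-then rules as the knowledge representation.
    class SimpleAgent:
        def __init__(self, rules):
            self.rules = dict(rules)      # knowledge base: event -> action

        def think(self, event):
            # Thinking: link a perceived event to stored knowledge
            # and infer an action.
            return self.rules.get(event, "ignore")

        def learn(self, event, action):
            # Learning: change future behaviour by adding or revising a rule.
            self.rules[event] = action

    agent = SimpleAgent({"price_drop": "notify_user"})
    print(agent.think("price_drop"))   # notify_user
    print(agent.think("new_vendor"))   # ignore (no knowledge yet)
    agent.learn("new_vendor", "evaluate_offer")
    print(agent.think("new_vendor"))   # evaluate_offer: behaviour has changed

The static rules passed to the constructor correspond to pre-programmed knowledge, while rules added via learn correspond to the dynamic knowledge acquired through interaction with the environment.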