By Jack Krupansky
May 4, 2006 (previous version: March 20, 2006)
Jump directly to PowerPoint presentation
Note: This concept paper is now frozen since I have accepted a full-time employment offer with Microsoft (unrelated to anything discussed in this paper) and will not be doing any more work on this concept as long as I am employed.
Software agent technology has been an active field of research for more than a decade. Although there have been only limited applications of the technology for consumer use, greater success has been achieved in industrial applications. There have been numerous false starts at commercializing agent technology on a widespread basis, including for consumers, but alas the hype has greatly exceeded both the capabilities of off-the-shelf technology and our own abilities. With every passing year we remain on the cusp of finally breaking out and fielding the kind of technological breakthroughs that will finally make the consumer application of software agent technology a reality.
Please note that the intention of this vision of software agent technology is not to turn the computer into a human-like robot, but simply to enable the computer as a competent assistant in the lives of consumers. The goal is not to pursue artificial intelligence per se, but to incorporate those aspects of AI which relate to agency, where the consumer decides what responsibilities to delegate and is the controlling authority for goals to be pursued by their software agents.
The focus of this vision is not to preview the totality of consumer applications that could be constructed, but to establish a base vision upon which consumer applications can then be envisioned. Alternatively, this vision can be considered as the model for a platform upon which consumer applications can be built.
Central to a new wave of consumer-centric computing is support for interactions that are based on higher-level knowledge rather than simply moving information from one location to another. The goal of using software agent technology is to enable knowledge-based computing.
Simple agent-like features have found their way into many consumer applications, but if you think of software agent technology as a range of mountains, efforts to-date have merely probed into the lower foothills surrounding the mountains.
Existing use of agent-like technology is essentially no more than agent-light, or a relatively thin veneer that only modestly approximates the full potential of software agent technology.
Significant resource allocations will be needed to push much higher in those foothills.
Much basic research is needed to enable significantly higher climbs in the mountains of software agent technology.
Some of the ascent can be made without resorting to deep artificial intelligence (strong AI), but at some stage AI will be needed.
As central as software agents are to this vision of computing, agents are simply the messengers, and the heart and soul of the new messages is higher-level knowledge. But agents have a three-fold mission: 1) to move the knowledge around, 2) to facilitate the higher-level processing of knowledge, and 3) to monitor and assure that knowledge is used effectively.
In summary, we need to fund lots of basic research, as well as advanced development labs where the results of research can be tested without all of the associated market risk that goes with traditional product development.
The intention is not to do research for the sake of research, but to lay the foundation for a quantum leap in improvement of consumer-oriented computing capabilities.
For a brief executive-level overview of the Consumer-Centric Knowledge Web, see the Consumer-Centric Knowledge Web PowerPoint presentation.
Something a bit more sophisticated than what would appear in the popular press, but less dense than what would appear in a refereed technical journal. Approximately three to five pages. Plus three or four diagrams.
Emphasize a few scenarios demonstrating benefit to consumers.
Emphasize focus on a platform for knowledge-based applications rather than specific applications.
Existing applications of software agent technology to consumer applications have been quite modest to date, and are more agent-like than truly agent-oriented:
Such agents either perform very simple tasks or require extraordinary effort on the part of the user. There has been little evidence of what can be called intelligence or even deep understanding of user needs.
The basic problem is that attempts at "intelligence" tend to merely mimic human intelligence, and very poorly at that.
Back around 1997, quite a number of rather prominent researchers and entrepreneurs loudly proclaimed that we finally had all the technological elements that could be assembled off the shelf to finally realize the promise of artificial intelligence in the form of software agents or intelligent agents and something called a Knowledge Web. Unfortunately, they were wrong, and very wrong at that.
Take a look at some of the outrageous comments in the announcement for the Agents'97 conference. Some of this stuff will come to pass, eventually, but even eight years later we seem no closer; if anything, the objective seems rather more distant.
Another example of the contemporary thinking back in 1996 is the thesis of Björn Hermans entitled "Intelligent Software Agents on the Internet: an inventory of currently offered functionality in the information society & a prediction of (near-)future developments".
Sure, a number of elements are in fact available, but far short of the kind of critical mass that is needed to really bring software agents into the mainstream.
I was in fact one of the people who fell for this hype back in 1997. My interest back then was mobile software agents or the ability for a running program to relocate itself to a different host machine. I gave up on that metaphor rather quickly once I realized the problems and obstacles, but at least it opened my eyes to the true long-term potential for the agent metaphor even if there was no short-term rainbow to the pot of gold.
Even in 2000, Danny Hillis of Thinking Machines fame claimed that "The knowledge web is an idea whose time has come." Here we are in 2006, and still we don't have even a hint of a working Knowledge Web.
The immodest clamor has since died down, with the focus now on agent technology and multi-agent systems, with much less emphasis on intelligence, except as a pure artificial intelligence research topic, where it belongs, for now.
Although high-end corporate information technology applications may seem like a better place to initially focus the application of software agent technology, my view is that corporate needs are more sophisticated, complicated, and demanding. It would seem far better to focus on deploying a simplified vision of user-oriented intelligent software agents and knowledge processing out in the consumer space and then attempt to beef up the technology to meet more stringent corporate demands. This is the model of how the PC and personal computing software evolved, and it seems like the most obvious success model to emulate.
In truth, the PC and its software did not start from scratch, but simply scaled down what was available with mainframes and minicomputers. Similarly, a lot of research and some preliminary commercial work has been done for software agents and knowledge processing. It's not very usable, even by corporate users, but at least there is a good starting point, analogous to the PC.
The overall model, as with any advanced technology, is to first try to apply the technology to high-end government, military, space, and commercial applications, meet limited success there, push a stripped-down version of the technology down to consumers, beef up the technology to the point of an interesting level of consumer acceptance, and then beef up the technology to finally meet the true needs for high-end government, military, space, and commercial applications.
That is the question. Or, more to the point, how can a computer software program best gain insight into what the user wants and needs?
The artificial intelligence guys have something called the BDI model: Beliefs, Desires, and Intentions. That's essentially the totality of what the user has in their head and what software agents need to know to do even a passable job of satisfying the user.
Yeah, ultimately software agents quite literally need to be able to read the user's mind, but that is still a pipe dream or at least needs to wait for Ray Kurzweil's "singularity".
Users need easy-to-use tools that allow them to build up and maintain a personal knowledgebase of their own beliefs, desires, and intentions. Once such a knowledgebase is in place, software agents can query it to effectively "read" the user's mind.
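To make this concrete, here is a minimal sketch of what such a personal BDI knowledgebase might look like; all of the names (PersonalKB, query, etc.) are hypothetical illustrations, not any real API, and a production design would need far richer representation than keyword lookup.

```python
# Hypothetical sketch: a minimal personal BDI (Beliefs, Desires,
# Intentions) knowledgebase that a software agent could query.

class PersonalKB:
    """Holds a consumer's beliefs, desires, and intentions as simple facts."""

    def __init__(self):
        self.beliefs = {}     # what the consumer holds to be true
        self.desires = []     # outcomes the consumer would like
        self.intentions = []  # goals the consumer has committed to

    def add_belief(self, topic, value):
        self.beliefs[topic] = value

    def add_desire(self, description):
        self.desires.append(description)

    def commit(self, goal):
        """Promote a desire to an intention the agent may act on."""
        if goal in self.desires:
            self.desires.remove(goal)
        self.intentions.append(goal)

    def query(self, keyword):
        """Let an agent 'read the user's mind' by keyword lookup."""
        hits = [g for g in self.desires + self.intentions if keyword in g]
        hits += [f"{t}={v}" for t, v in self.beliefs.items() if keyword in t]
        return hits

kb = PersonalKB()
kb.add_belief("budget.vacation", 2000)
kb.add_desire("book a vacation in Italy")
kb.commit("book a vacation in Italy")
print(kb.query("vacation"))
```

The essential point is the division of labor: the consumer maintains the knowledgebase, and agents consult it rather than interrogating the consumer at every step.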
Specifically, a consumer-centric knowledge web needs to fix the following problems:
Many applications available today are consumer-oriented, meaning that vendors and organizations have designed their software to appeal to consumers. The vision espoused in this paper is for a quantum-leap forward in computer software applications which will be consumer-centric rather than consumer-oriented. The difference is the question of who is in control, the vendor or the consumer. Many vendors have done a passable job of appealing to the needs of consumers, but that is not even close to being far enough to support the vision of consumer control and knowledge-based applications that we think is feasible.
Consumer-oriented approaches are acts of vendors reaching out to and controlling consumers, while consumer-centric approaches focus on consumers being in control of both the game and their own destiny.
Knowledge-based computing focuses on aligning information processing as closely as possible with the level of knowledge at which the consumer works, allowing the consumer to express themselves to the computer as closely as possible to how they would express themselves to other people. Rather than immediately translating the consumer's knowledge into a low-level information format, the goal is to keep the knowledge in a higher-level, knowledge-oriented form as often as possible.
As you are reading some of this you may hear yourself and others asking a very important question: Doesn't the Semantic Web do all of this already? In short: No. If you fully digest the entire vision presented here and compare it to a full digesting of the reality of the Semantic Web (as espoused in the May 2001 article in Scientific American), you will see that the Semantic Web comes up far short. The Semantic Web is a significant leap forward, but it simply is not about knowledge-based computing, consumer or otherwise. The Semantic Web is about information-based computing, and may someday, after significant research, be extended to grasp real and meaningful knowledge. But for today and the next few years the Semantic Web is primarily about representing traditional IT-style information in ways that IT-style computer programs can process, as opposed to the old Web, in which information was displayed as raw text and raw graphics, with no clues to computer programs as to the structured information being presented on an HTML web page.
Put simply, the vast bulk of the information represented in the Semantic Web is hardly more than the level of information that would be stored in an SQL-style IT database. In fact, much of the information on the Semantic Web actually is sourced from SQL-style IT databases.
Much of the so-called knowledge supported by the current Semantic Web is still only a representation of knowledge as an aggregated knowledge artifact (e.g., a block of text in a natural language) rather than drilling down and representing the details of true, human meaning. For example, blogs in the form of XML-based web feeds carry a significant amount of machine-processable information, and that is indeed a significant technological advance, but the title and body of a blog post are still uninterpreted blocks of text in a natural language.
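The distinction can be illustrated with a toy sketch. The field names below are hypothetical, but they mirror the shape of a typical feed entry: the metadata is fully machine-processable, while the actual meaning of the post lives in strings the program cannot interpret.

```python
# Illustrative sketch: a blog post as it might appear in an XML-based
# web feed, reduced to a Python dict. Field names are made up.
entry = {
    "published": "2006-05-04T09:00:00Z",    # machine-processable
    "author": "jack",                       # machine-processable
    "categories": ["agents", "knowledge"],  # machine-processable
    "title": "Why my trip was delayed",     # opaque natural language
    "body": "The flight was cancelled because of the storm...",  # opaque
}

# A program can sort, filter, and route on the structured fields...
machine_usable = {k: v for k, v in entry.items()
                  if k in ("published", "author", "categories")}

# ...but the *meaning* of the post (why the trip was delayed) lives
# only in the title/body strings, which the program cannot interpret.
opaque_text = entry["title"] + " " + entry["body"]
print(sorted(machine_usable))
```

Everything a consumer actually cares about here is in `opaque_text`; the Semantic Web, as it stands, gives programs traction only on the metadata.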
A portion of the Semantic Web relates to services performed on the Internet, and is referred to as Semantic Web Services (SWS). SWS is a significant step forward compared to traditional communications with server-based applications and Web-based applications, but it still works at the level of information, or even structured information of the traditional IT style, and doesn't come close to getting into meaningful knowledge. SWS also has a rather simplistic approach to "agents", and doesn't even begin to put a dent in what it means to be or support an intelligent agent, let alone vast swarms of agents with emergent behavior, and how mere mortal users might convey human knowledge to agents and how agents can convey machine knowledge to humans.
The transformation between human knowledge and machine knowledge is a vast, unresolved research problem. At present, no relatively simple mechanical solution easily implemented with off-the-shelf technology is capable of readily transforming to and from human knowledge. The vision of this paper is that tools and techniques can be developed to facilitate the knowledge transformation process, but that much research is required. And the prospect of vast armies of knowledge engineers standing by to manually encode human knowledge into XML/RDF documents is currently a non-starter. Constructing ontologies for even very simple domains is still quite tedious, very error-prone, and incomprehensible to mere mortals.
Proponents of the Semantic Web pay lip service to the importance of ontology, or how one goes about completely specifying any domain of knowledge. As the Scientific American article refers to ontology, "Artificial-intelligence and Web researchers have co-opted the term for their own jargon, and for them an ontology is a document or file that formally defines the relations among terms. The most typical kind of ontology for the Web has a taxonomy and a set of inference rules." That's hardly sufficient for representing hard-core, meaningful knowledge that humans, users, even consumers can relate to. The article neglected to mention that AI and Web researchers have "co-opted" the term taxonomy as well. In fact, their usage of the term taxonomy belies the truth about so-called ontologies for the Semantic Web: they're hardly more than data declarations and schemas and business process rules in the traditional IT sense, and are essentially discussed as such in the article. To represent meaningful knowledge of the sort relevant to the interests of consumers, we'll need techniques considerably more powerful, more flexible, and easier to use than simple rules, business rules, or even so-called inference rules.
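To see how thin "a taxonomy and a set of inference rules" really is, here is a sketch of such an ontology boiled down to its essence; the domain and class names are invented for illustration.

```python
# Minimal sketch of the kind of ontology the Scientific American
# article describes: a taxonomy (subclass relations) plus one simple
# inference rule (transitivity of subclass-of). Names are made up.

subclass_of = {
    "EspressoMachine": "KitchenAppliance",
    "KitchenAppliance": "Appliance",
    "Appliance": "Product",
}

def is_a(cls, ancestor):
    """Inference rule: subclass-of is transitive."""
    while cls in subclass_of:
        cls = subclass_of[cls]
        if cls == ancestor:
            return True
    return False

print(is_a("EspressoMachine", "Product"))  # True, by chained inference
```

This is genuinely useful machinery, but it is schema-level plumbing: nothing here captures what an espresso machine means to a consumer, why they might want one, or how it fits their life.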
Now, it may turn out that our vision of a Consumer-Centric Knowledge Web can be built on top of the Semantic Web, and it would in fact be wonderful if the effort to achieve our vision were greatly reduced by the existing Semantic Web technologies, but that is not a requirement, nor is it a given, nor is it even a likelihood. Far too much research remains to be done to prejudge the extent to which the Semantic Web will be reusable enough to support a full-blown, meaningful knowledge web.
A more elaborate argument can be made about the differences between the current vision of the Semantic Web and our vision of a Consumer-Centric Knowledge Web, but the main point remains that if you read any of this and think that "all of that is already done in the current Semantic Web", then I would suggest that you go back and read more carefully and challenge your own assumptions.
To summarize, the Semantic Web does indeed have a bright and prosperous future, but as presently envisioned, it won't achieve the goals espoused by the vision presented here for a Consumer-Centric Knowledge Web.
What is a software agent? That question is a matter of great debate, but the essence is that a software agent is a computer program which possesses the characteristic of agency, that it is acting on behalf of another entity (i.e., the consumer) in pursuit of goals specified or controlled by that other entity (the consumer).
The key qualities are that software agents perform tasks and work towards goals for the consumer, without the need for the consumer to be involved in and worried about every step along the way. This implies a degree of knowledge about the consumer and intelligence about how to work on the consumer's behalf. It is necessary but not sufficient to know what consumers in general want; agents must also deeply comprehend what each particular consumer wants.
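The core notion of agency described above can be sketched in a few lines of code. This is a hypothetical illustration, not a real framework: the point is simply that goals flow only from the principal (the consumer), while the agent works toward them autonomously.

```python
# Hypothetical sketch of 'agency': an agent pursues goals that are set
# and controlled entirely by its principal (the consumer).

class ConsumerAgent:
    def __init__(self, principal):
        self.principal = principal  # the controlling authority
        self.goals = []

    def delegate(self, goal):
        """Only the principal adds goals; the agent never invents its own."""
        self.goals.append(goal)

    def revoke(self, goal):
        """The principal can withdraw a goal at any time."""
        self.goals.remove(goal)

    def step(self):
        """Work toward current goals without further involvement from
        the principal; here, just report what would be pursued."""
        return [f"{self.principal}: pursuing '{g}'" for g in self.goals]

agent = ConsumerAgent("alice")
agent.delegate("track airfare prices to Rome")
agent.delegate("renew library books before the due date")
agent.revoke("track airfare prices to Rome")
print(agent.step())
```

Everything hard about real agents lives inside `step`; the surrounding skeleton captures only the control relationship between consumer and agent.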
The long-term goal is that software agents will take on more of the attributes that we associate with intelligence. In the interim, so-called intelligent agents will evolve gradually towards a sense of human-like intelligence, but remain more focused for now on more of a mechanical, drone-like mode of operation that at best mimics human intelligence. Even in the longer run, intelligent agents will converge on what should be called computational intelligence or machine intelligence that will continue to fall short of true human-level intelligence in many ways even as it surpasses human intelligence in other ways.
Researchers in the field of Artificial Intelligence (AI) have long viewed multi-agent systems (MAS) as a very promising model for mimicking both the human mind and communities of autonomous individuals. Traditional multi-agent systems have been closed and quite limited in scope, but gradually they have been becoming more open and flexible. Many of the approaches to the interaction of software agents on the Internet have been based on research in multi-agent systems. Much more research is needed, but at least some of the foundation has already been laid.
The biggest open research topics relate to how to apply MAS concepts to free-willed (and free-wheeling) consumers as opposed to more mechanical and drone-like industrial applications.
Knowledge-based software agent technology blends the deep richness of a knowledgebase and deep semantic meaning with the raw power of software agent technology. It is the combination of both that provides the breadth and depth needed to enable computer software to truly understand and provide support for what the consumer is really trying to do.
To simplify the terminology a little, this overall white paper can be thought of as referring to a consumer agent vision or a consumer knowledge agent vision. The term consumer agent should be good enough, but there is enough ambiguity that we should settle on the term consumer knowledge agent. The latter seems to capture all three essential ingredients of the vision of this paper: consumers, their knowledge, and using software agents to facilitate the growth and use of that knowledge. As a technicality, the full term is consumer knowledge-based software agent, but can also be referred to as a consumer knowledge-based agent.
The two big categories of support that software agents can provide for consumers are coping and facilitating. Consumers either have an idea or goal that they are interested in pursuing and need assistance in facilitating that idea or goal, or they are confronted with a problem or task or issue that they are not particularly interested in pursuing, but they have no real choice, so they need help coping with the problem, task, or issue. Consumers need a lot of support, and software agent technology seems ideally capable of providing a significant amount of it.
To date, nobody has come up with a technology that scales up as well as webs of interconnected software agents. They are more flexible. They can automatically adapt to constant changes in a dynamic networking environment. They can evolve and support applications that are evolving. Hand-coding distributed applications is simply too tedious, too inflexible, and too error-prone for large-scale distributed applications. System administration for such large-scale applications and databases is simply beyond the capabilities of human system administrators. Large scale distributed applications will become too important to entrust to traditional, ad-hoc, error-prone approaches to network design.
Today the rage is about the transition from software to services, particularly Web Services. The vision espoused in this paper concerns a future evolution of the same trend, but where "traditional" Web Service-based applications will give way to applications based on software agent technology. Web Services will continue to play an important role, but the vast bulk of the activity will be based on the interactions of autonomous software agents, with Web Services being background resources available for use by software agents.
A tremendous amount of research has been performed on the topic of artificial intelligence (AI) over the past 50 years. Software agent technology draws on this body of research, but much research remains to be pursued. Although AI classically focuses on the holy grail of human-like intelligence, it is more sensible to work in the direction of computational intelligence or machine intelligence, which aims to mimic human intelligence where feasible, but to go far beyond limited human intelligence in as many areas as possible.
Even when the best AI techniques cannot begin to approach human intelligence, there is great promise to the concept of intelligence augmentation, where the aim is to blend a hybrid of human and machine intelligence, with each side contributing its best efforts. With software agent technology we're looking at leveraging the contribution of the human consumer with the "intelligent" efforts of a potentially very large number of software agents, and coupling that with the efforts of other consumers and their software agents as well.
Artificial artificial intelligence (AAI) refers to intelligence augmentation where human beings volunteer to perform tasks at the behest of computer software, especially in situations where true artificial intelligence simply isn't up to the task. This capability further extends the power of software agent technology, and software agents can be used to facilitate AAI itself. The consumer won't even be aware (in general) that any humans are in the loop.
Consistent with the thinking behind the old adage "two heads are better than one", software agents have the potential to act as intermediaries and facilitators between consumers so that a group of consumers can interact and act as if they had a much larger multi-mind or group mind. The leveraging that software agents can provide could lead to a dramatic boost in productivity and innovation and a host of social benefits.
One of the network effects of consumer collaboration is that collectively a group of consumers can appear to have a level of intelligence greater than any of the individuals of the group. Again, software agent technology fulfills a major role in the collaboration process and facilitates the communication of knowledge among the members of the group. Further, agents can collect and process knowledge on the behalf of the consumers, according to the interests of the consumers in a far more efficient manner than the consumers themselves. By tapping into the shared knowledge of the group, the software agents acting on behalf of the group members can effect a collective intelligence that benefits the group as a whole, and the individual consumers as well.
Collective thought can be a powerful tool both for the members of the community doing the thinking, but also for the community overall. Organizing collective thought in a consumer knowledge web would be a good first step at leveraging all of that collective thought.
Collective thought is actually quite tedious if attempted manually (e.g., exchanging and reading documents), but can be greatly facilitated using software agent technology to do much of the collection, storage, correlation, and more efficient distribution of the knowledge that each member of the group needs to come up to speed with the thinking of the full group.
The noosphere is the composite of all interacting minds. The concepts of multi-mind and group mind would be specific subsets of the overall, global noosphere.
Although the term neurosphere can be treated as synonymous with noosphere in some contexts, it really includes the use of the Internet as enabling the group mind. The term has been popularized by Donald Dulchinos in his book "Neurosphere: The Convergence of Evolution, Group Mind, and the Internet".
The knowledgesphere is analogous to the noosphere, but simply refers to the total knowledge within any particular environment. So, we could speak of the Web knowledgesphere, the total knowledge on the Web, or the knowledgesphere of a particular group of individuals. In the context of this paper, "the" knowledgesphere is the consumer knowledgesphere which is the total knowledge accessible by the software agents which are working on the behalf of consumers.
Even with sophisticated search engines, there is already far too much information out on the web for the average consumer to easily find the information that best meets their needs. Software agent technology coupled with a comprehensive knowledgebase relating to the interests and behavior of the consumer will provide a rich level of context to greatly facilitate navigation through the haystack to quickly find the needles of interest to the consumer.
As real-world situations get more complex, even simple reasoning can become quite difficult. The vast knowledge embodied in the consumer-centric knowledge web, coupled with software agent technology, can render assistance, helping to drill down and reach out to simplify reasoning in even very complex scenarios. Often, the problem is simply that the consumer doesn't have the appropriate knowledge immediately at their fingertips, or doesn't have knowledge of paths or chains of reasoning that can help them or guide them to their goals. Much research is required, but the potential benefits are huge.
There are many situations where the consumer is simply too busy or distressed or finds it inconvenient or uncomfortable to take an action by themselves and may elect to have a proxy act on their behalf. Software agents can be a very appropriate choice for supporting the concept of a proxy, giving the user control without the burden of the actual actions. The important thing is that the software agent must have access to enough knowledge about the consumer and their interests so that the agent can act appropriately without detailed, tedious, and error-prone instruction from the consumer.
As information technology has progressed and evolved, information has gotten more refined, but more fragmented and exceedingly more detailed specializations have emerged. This information fragmentation and information specialization has worked to the detriment of most consumers. Sure, more choices have become available, but navigating and discovering and exploiting those choices has gotten far more difficult. This is a prime reason why we need to make the leap from information to knowledge, and a prime reason why we need to exploit the power of software agent technology.
Calm technology has the ability to make itself available to consumers and work on their behalf without significantly disturbing their sense of calm. A side effect is that more technologies can be exploited by the consumer without dragging them down and making them feel that they are overburdened. This needs to be a key criterion for new technologies to be introduced into the consumer domain. Software agent technology, especially the capability of executing in an autonomous manner without intervention or direct control of the consumer, is almost inherently a calm technology, if designed and deployed properly.
Knowledge-based software agent technology can radically improve the degree of automation of the consumer's personal computer (or other access device). The effect is to radically simplify the computer vocabulary needed by consumers. Much of the jargon can be eliminated from the consumer's vocabulary. No longer will consumers need to fret over install, setup, configure, settings, options, tuning, troubleshooting, tech support, training, etc.
A central requirement for consumer applications is that the consumer is in control, not some vendor or service provider, but the consumer themselves.
Software agents add the twist that since the software agents themselves are technically "in control" at any moment, it is sufficient that the consumer is the controlling authority.
Current online networks tend to be vendor-centric or server-centric or net-centric, but software agent technology enables the consumer to be placed at the center of attention. This consumer-centric approach simultaneously serves the needs of the consumer, and also enables vendors to more effectively interact with consumers.
Much of what a consumer does with any computer software is driven by their interests, suggesting that software agents can help consumers a lot by providing rich support for consumer interests. Whether that means collecting consumer interests, organizing them, searching for them, or matching them with the interests of other consumers, the point is that consumer interests need to be a key aspect of the Consumer-Centric Knowledge Web.
Software agent technology can facilitate how consumers conceptualize, think about, and express their interests. One of the big problems today is that computer software applications have few clues about the real interests of the consumer, and hence can offer rather little assistance.
The important aspect of a software agent is that it is an intermediary, acting on resources and acting with other entities in order to achieve goals that were set by the controlling entity or controlling authority, the principal of the agent or the agent principal.
The entities that a software agent interacts with may be either principals acting on their own behalf or other software agents acting on behalf of their principals.
In any case, the heart and soul of software agency is that users or consumers are in need of services that are available, but they benefit greatly through the use of intermediaries, agents, which facilitate interactions.
Just as important as the software agents themselves are the environments in which the agents operate, analogous to vehicles and roads and highways.
We presume that the Internet and the Web will be the primary environments of interest for consumer software agents. But the consumer's personal computer or access device is itself a full environment. A P2P community is a distinct environment. Any overlay network could be a distinct environment in which software agents can operate.
Mobile phones, Bluetooth-accessible devices, and even freely-roaming robots can also be parts of environments for software agents.
Environments provide resources and services that software agents can utilize in pursuit of goals.
Environments present opportunities for software agents, but they can also present threats in the form of malicious agents.
An overlay network is a dynamic collection of network nodes that act as a subset of the entire collection of nodes in the network. A file-sharing network is an example of an overlay network. Overlay networks are an excellent infrastructure for supporting dynamic online communities, as well as the software agents which support such online communities.
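The overlay idea can be sketched very simply: an overlay is just a dynamic subset of the underlying network's nodes, with its own links laid "over" the base network. The example below is an illustrative toy, not any real overlay protocol.

```python
# Sketch of an overlay network: a dynamic subset of the underlying
# network's nodes, with its own membership and links.

underlying_nodes = {"a", "b", "c", "d", "e", "f"}  # the full network

class Overlay:
    def __init__(self):
        self.members = set()
        self.links = set()  # unordered pairs of members

    def join(self, node):
        """Only nodes of the underlying network can join the overlay."""
        if node in underlying_nodes:
            self.members.add(node)

    def leave(self, node):
        """Membership is dynamic; a departing node's links go with it."""
        self.members.discard(node)
        self.links = {l for l in self.links if node not in l}

    def connect(self, n1, n2):
        if n1 in self.members and n2 in self.members:
            self.links.add(frozenset((n1, n2)))

file_sharing = Overlay()  # e.g., a file-sharing community
for n in ("a", "c", "e"):
    file_sharing.join(n)
file_sharing.connect("a", "c")
file_sharing.leave("c")   # churn: membership and links adjust
print(sorted(file_sharing.members))
```

Real overlays (file-sharing networks being the familiar example) add routing, discovery, and resilience on top of this basic membership model, which is exactly what makes them a good substrate for dynamic communities of software agents.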
Not to be confused with web services, the Semantic Web offers a guiding philosophy of a rich network of semantic data that can be processed in an automated manner by software comparable to software agent technology. Every consumer and every product and service vendor could have richly-hyperlinked semantic, machine-comprehensible information at the level associated with knowledge that can enable software agents to offer services far beyond what any single vendor or tightly-knit collection of vendors might offer.
The semantic web is the ocean and continents through which and across which software agents will navigate in pursuit of satisfying the needs, interests, goals, and ideals of the consumers who control those agents.
A key aspect of the semantic web is that software agents will be able to continuously scan its dynamically varying content and continuously compute patterns that can be used by software agents to offer semantic services to consumers and vendors alike.
A rich semantic web is quite valuable, but very difficult to produce if constructed manually. Rather, we need tools which will implicitly add knowledge to the semantic web as it becomes known by intelligent software agents as those agents perform tasks on behalf of consumers. Each action or choice carried out by a software agent for a consumer makes additional knowledge available to be added to the semantic web. This implicit semantic web can quickly grow to be orders of magnitude larger than any manually constructed semantic network.
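The implicit growth of the semantic web can be sketched concretely: every action an agent performs yields knowledge that is recorded as simple subject-predicate-object triples. This is an illustrative sketch only, with hypothetical names throughout:

```python
# Sketch: as an agent carries out actions for a consumer, each choice is
# recorded as a (subject, predicate, object) triple, implicitly growing
# the semantic web without any manual knowledge engineering.

knowledge = []  # the implicit semantic store, here just a list of triples

def record_action(agent, consumer, action, target):
    # every action performed yields knowledge about the consumer's behavior
    knowledge.append((consumer, action, target))
    knowledge.append((consumer, "used-agent", agent))

record_action("shopping-agent", "alice", "purchased", "birdseed")
print(len(knowledge))  # 2
```

Multiply this by millions of consumers and billions of agent actions, and the implicit semantic web quickly dwarfs anything constructed by hand.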
The implicit semantic web will be filled with structured representations of the knowledge and behavior of the many consumers and vendors who participate in the semantic web.
To be useful, knowledge must be available in both its detailed form and its abstracted form. The implicit semantic web would support both.
By dramatically increasing the size of the available knowledgebase, finer and broader and deeper patterns will become available to the software agents that provide applications to consumers and vendors alike.
Not to be confused with the Semantic Web, the concept of Web Services is a more powerful and open approach for vendors to offer services on the Internet. Enough thought has been given to the design of the technical standards that underpin Web Services so that they are flexible enough to support a global networking of services that has the potential to result in more dramatic network effects and economies of scale. Although software agents will tend to interact and communicate among themselves, Web Services provides a rich and flexible interface that will enable software agents to access more traditional forms of services offered by traditional vendors.
Over time, Web Services themselves will evolve more towards the agent-oriented approach to computing. Either way, software agent technology will shield and insulate consumers from the idiosyncrasies of the underlying technology.
A knowledge web is a portion of the Semantic Web which focuses on knowledge. A knowledge web is far more than a static collection of encoded knowledge. Knowledge is created constantly, including through processes and services that are active at any moment. Software agents will be key participants in both supporting knowledge webs, and the generation of new knowledge. A knowledge web should be thought of as not simply a repository of information, but a platform for knowledge-based applications.
A consumer-centric knowledge web is a knowledge web which focuses on knowledge that is both of interest to consumers and controlled by consumers. There is certainly a substantial gray area between all knowledge and consumer-centric knowledge, but it is the knowledge-oriented processes that are important, including a bias towards the interests of the consumer. A consumer knowledge web is a platform for consumer-centric knowledge-based applications.
One of the ongoing debates is over gathering knowledge through data mining (mined knowledge) versus explicitly-constructed knowledge. Specifically, should we have to wait for everyone to convert to explicit knowledge structures represented as the Semantic Web, or can sufficient knowledge structures be automatically generated as a result of text mining, data mining, and even knowledge mining? A hybrid solution is likely, possibly alternating between mining and hand-tuning to refine the knowledge, but much research and experimentation is needed.
Grid computing has the potential to enable the sharing of computing power on a global basis, but does not provide users with any new functions per se. Still, the availability of vastly greater computing power could very well enable new and advanced functions, particularly related to knowledge management and machine intelligence. How to effectively exploit that computing power remains an open question for research, but software agent technology is a leading candidate for both enabling access to that computing power as well as using it for consumer-level applications.
The semantic grid layers the concepts of the Semantic Web on top of raw grid computing. The massive volumes and vast diversity of computing resources available on a semantic grid literally require software agent technology to find and match the relevant computing resources. Software agent technology also permits the aggregation of semantic grid resources and services to provide higher-level resources and services that enable even higher-level consumer applications.
Wide area networks such as the ARPANET and the Internet evolved from a realization that centralized networks have too many problems to scale up to meet the capacity and reliability needs of large-scale computing communities. Although the Internet and Web as networks are themselves decentralized or distributed, far too many applications and services are far too centralized. Each organization wishing to put up an application on the Internet or Web has to explicitly cope with how to scale up its own computing infrastructure as its audience grows. Redundancy, caching, and mirroring are all techniques that have evolved to cope with the difficulties caused by centralization of network applications. All of this highlights the two most important facts of networking: centralized is bad and decentralized or distributed is good. The application corollary is true as well: centralized applications are bad and decentralized or distributed applications are good. Unfortunately, much of the infrastructure and tools available to us today are focused on developing and deploying small, centralized, or semi-decentralized applications in a tedious, expensive, and error-prone manner. So, by focusing on distributed applications we move to a world that eliminates many of the problems inherent in centralized or manually decentralized applications. Put simply, innovators of new consumer applications should not have to waste any of their time, energy, or resources on the problems of scaling and reliability.
All of the arguments against centralized applications and for distributed applications apply to databases as well, especially since they tend to be the heart of many applications. So, centralized databases are bad and decentralized or distributed databases are good. Unfortunately, management of distributed data can be even harder than distributed code. Actually, that's not really true since both are very difficult to manage and we only imagine that we know how to properly manage distributed code.
The important concept for a distributed database is that the various data elements are not under the dictatorial control of a central database administrator. Instead, intelligent software agents monitor and accommodate differences in approach to data modeling throughout the network or web that comprises any consumer application. Further, data is shared among applications and shared among a potentially very large number of applications. Much research is needed in this area.
The current rage is the push for network-centric applications, but that places too much emphasis on the network infrastructure rather than the knowledge itself. Rather, we need global knowledge-centric applications, where the focus is on the deeper and global semantic knowledge itself.
The network that really matters is not the physical network nodes and connections, or even the logical domain names, but the network of consumer-centric knowledge.
Another current rage is to offer software as a service (SaaS), with a focus on maintaining the core software on centralized servers rather than on the servers of each customer. That may or may not make sense for stodgy information technology (IT) shops, but it has only limited benefits for consumers. Rather, consumers would benefit more greatly from software as agents, where there are no large monolithic applications running on centralized servers; instead, each consumer has any number of software agents which collaborate with other software agents to pursue goals on behalf of the consumer.
Ant colonies exhibit a significant level of problem solving ability despite the limited capabilities of the individual ants. The ant paradigm has great potential as a model for how software agents can be utilized to collaborate on pursuing significant goals on the behalf of consumers.
Software agents as ants can be deployed for individual consumers or jointly to support collaboration among consumers.
Related to the ant paradigm, significant research has focused on modeling the structure of software agent systems on swarms of the types found in the biological world for attacking large, complex, and difficult-to-analyze problems. Even without any centralized control or supervision, swarms frequently exhibit apparently intelligent behavior, called swarm intelligence. The trick is to design the individual agents and their methods of interaction so that desirable swarm behavior occurs. This is too complex for most mere mortals. Once again, software agent technology is ideal for developing, training, deploying, and monitoring swarms of agents that are running on behalf of the interests of consumers. Despite the research that has been done, much more research is needed.
Much of the work on software agent technology has focused on the treatment of agents as if they were animals in an environment. In the biological world we also have plants, forces, and chemical agents. Analogous entities and mechanisms may have great value in the environments populated by software agents. For example, many web services in fact act as if they were plants, producing "crops" which can be "harvested". Forces may simply be constraints in the computational environment. The analogy to chemical agents in a computational environment is not yet clear, but is worth considering. The bottom line is that we want to assure that the computational environments populated by software agents are rich enough and robust enough to support a software agent ecology that is extremely useful from the perspective of users, namely consumers.
The current Web and the envisioned Semantic Web still maintain centralized application servers and vendors as the focal point of the web, with the users outside looking in. The vision espoused here is of a Consumer Knowledge Web where the focal point is the total knowledge base of all consumers and the consumer-oriented software agents which pursue consumer-driven goals. Vendors are essentially "outside" and looking in.
It is not clear what capabilities would be available in the initial version of the envisioned Consumer Knowledge Web, call it Consumer Knowledge Web 1.0, but they would evolve over time. It may take a dozen or a hundred or even more revisions of the supporting infrastructure to achieve the vision of a knowledge web focused on the consumer.
In contrast to the Consumer Web, which is the portion of the Web which focuses on the interests of consumers, the Consumer Knowledge Web would be the portion of the Semantic Web or Knowledge Web which focuses on the interests of consumers. While the Consumer Web is driven by user navigation, the Consumer Knowledge Web is driven by the activity of software agents acting on behalf of the consumer.
The Consumer Web is based on the presentation of information which has little semantic content (e.g., text, numbers, images), whereas the Consumer Knowledge Web is based on semantically-rich knowledge.
Maybe the envisioned web should really be called Consumer/Agent Knowledge Web to highlight the centrality of software agent technology to achieving the vision. It is not simply that software agents are utilized in the implementation, but that each consumer will need to conceptualize the Consumer/Agent Knowledge Web as a partnership in which the software agents working on behalf of the consumer are essentially part of the consumer's mind.
It would require tremendous ingenuity, discipline, and effort to hand-code the type of sophisticated consumer software agents that this paper envisions. Instead, it is envisioned that much of that common effort be factored out of each consumer software agent and be embodied in a wide range of agent-oriented toolkits, application frameworks, middleware subsystems, and other platform-related software that collectively provides a very rich infrastructure that supports powerful consumer software agents.
Once in place, the agent-oriented infrastructure will facilitate the rapid development and deployment of consumer software agents with much less effort, but a much higher probability that the agents will operate as expected.
A big part of the infrastructure is the autonomic monitoring capability which detects and automatically recovers from abnormal behavior by agents, and also automatically initiates the execution of logic needed to support declarative software agent capabilities.
Traditional software has been based on an algorithm-oriented computing model derived from the computer science concepts related to Turing machines. That was fine for relatively discrete and monolithic software, but doesn't provide any theoretical support for highly distributed computing. More recent research has focused on interaction machines, with the emphasis on how the black boxes interact rather than what's in the individual black boxes. Going further, the concept of an agent interaction machine has the promise to support even more highly interactive software systems. More research is needed, and more interaction-based software infrastructure is needed.
Applications based on software agent technology can be designed, implemented, deployed, and evolved in a myriad of ways that are either difficult, tedious, or outright impossible for traditional, monolithic applications. In fact, the evolution of software agent-based applications can best be described as organic. Organic application development is based on very flexible interfaces that are goal-oriented rather than task-oriented.
One example of an organic application development model is the concept of a mashup or web services mash-up which relies very heavily on accessing and composing the services of existing applications and Web services.
Although we routinely speak of software agents as operating autonomously, or being autonomous agents, what we really mean is that the user can use the software agent in a "fire and forget" mode, but the existence of the software agent is known to the user. We can also contemplate software agents which are brought into existence by some entity other than the user and that operate without the user's knowledge. We can refer to this mode of operation as autonomic operation, analogous to the autonomic nervous system in biology. This concept has already taken root to some degree in the form of autonomic computing, although that tends to refer to the underlying operating system and middleware rather than to higher-level applications.
In essence, an autonomic software agent implies indirect agency: if user U initiates software agent S, which in turn initiates software agent T, then T is operating autonomically relative to U. There is still a sense that T is an agent of U, but U may not even be aware of T's existence.
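The chain of indirect agency is easy to make concrete. In this sketch (all names hypothetical), each agent records who initiated it, so the ultimate human principal can always be recovered even when the user is unaware of the intermediate agents:

```python
# Sketch of indirect agency: agents record their initiator, so the
# ultimate principal can be found by walking the chain back.

class Agent:
    def __init__(self, name, initiator):
        self.name = name
        self.initiator = initiator  # who brought this agent into existence

    def spawn(self, name):
        # an agent autonomically initiating a sub-agent
        return Agent(name, self)

def principal(agent):
    # walk the initiator chain back to the ultimate (human) principal
    node = agent
    while isinstance(node, Agent):
        node = node.initiator
    return node

S = Agent("S", "U")   # user U initiates agent S
T = S.spawn("T")      # S autonomically initiates T, unknown to U
print(principal(T))   # prints U: T still acts on U's behalf
```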
The benefit of autonomic agents is leveraging, in that the user can gain the benefit of the operation of far more software agents than their conscious mind can deal with.
While autonomic operation is the desired goal, many consumer goals are greatly facilitated with the much simpler asynchronous operation, which means that the consumer and application software can operate independently for a while without the direct supervision of the consumer, but the consumer remains aware that an asynchronous operation either remains underway or was at least initiated. With autonomic operation, the consumer is not even aware that an operation is being performed on their behalf. Email servers are an example of asynchronous operation, with consumers able to send and receive email without having to synchronize themselves as is needed for a normal telephone conversation. A typical email alert is another form of asynchronous operation.
Even simple asynchronous operation is difficult enough to program. We need better tools, better paradigms, better development languages, and better software infrastructure to support asynchronous operation. Even then, autonomic operation is yet another mountain to be climbed.
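Even the email-server style of asynchronous operation described above takes real machinery to program. A minimal sketch using Python's standard queue and threading facilities (the message and names are hypothetical):

```python
import queue
import threading

# Sketch of asynchronous operation: the consumer enqueues a message and
# continues with other work, while a background worker delivers it later.

outbox = queue.Queue()
delivered = []

def delivery_worker():
    while True:
        msg = outbox.get()
        if msg is None:          # sentinel to shut the worker down
            break
        delivered.append(msg)    # stand-in for actual delivery
        outbox.task_done()

worker = threading.Thread(target=delivery_worker)
worker.start()

outbox.put("hello")  # fire and continue; no synchronization with recipient
outbox.join()        # (demo only: wait until delivery completes)
outbox.put(None)
worker.join()
print(delivered)     # ['hello']
```

Note how much ceremony even this trivial case requires; autonomic operation, where the consumer never sees the queue at all, is a far taller mountain.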
Today, consumers have no choice but to know about and work with monolithic, large programs or applications. Software agent technology and robust and distributed knowledge infrastructure will change all of that. The vast bulk of code will be distributed and shared so that each user-visible function will be very small and atomic. There will be no need for any consumer to think about concepts like program or application. Actually, the term application will still be relevant, but it will refer to what the consumer is trying to do, or the domain that the consumer is working in, rather than how that functionality is implemented. In other words, program and application are implementation artifacts that will no longer be needed by consumers.
A macro software agent is a software agent that works on goals at a level that is of direct interest to a user.
A micro software agent is a software agent that works on a subset of the goals or sub-goals that have been delegated to it by a macro software agent or possibly even by a non-agent computer software application.
Consumers stand to benefit from both forms of software agents. Macro software agents tend to work in terms that the user can comprehend, and can appear to act as assistants for the consumer. Micro software agents enable macro software agents to split the work into pieces that can be delegated in such a way as to take advantage of the inherent parallelism and distributed processing of the Internet, the Web, and the Grid.
It's tempting to think of macro and micro software agents as if they were "big" agents and "little" agents, but size is not the issue. For example, a macro software agent might run within the consumer's handheld device and delegate to micro software agents which are very large computer programs running on servers or desktop computers. In some cases micro software agents will be rather small in size, but that is not a requirement.
One interesting configuration is a network of users, each with macro software agents on their handheld devices which delegate goals to micro software agents which then interact with the micro software agents of other users.
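The macro/micro division of labor can be sketched as a macro agent splitting a consumer-level goal into sub-goals and delegating them to micro agents in parallel. The goal names and the three-way split are hypothetical illustrations:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: a macro agent decomposes a consumer goal into sub-goals and
# delegates each to a micro agent, exploiting inherent parallelism.

def micro_agent(sub_goal):
    # a micro agent works on one delegated sub-goal
    return f"done:{sub_goal}"

def macro_agent(goal):
    # the macro agent splits the goal and farms the pieces out in parallel
    sub_goals = [f"{goal}/part{i}" for i in range(3)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(micro_agent, sub_goals))

print(macro_agent("plan-trip"))
# ['done:plan-trip/part0', 'done:plan-trip/part1', 'done:plan-trip/part2']
```

The consumer only ever sees the macro agent and the goal; the decomposition, delegation, and parallelism are implementation details hidden below.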
A software agent has limited utility by itself, but interacting software agents have much greater utility as the number of interacting software agents rises. This phenomenon is known as network effects. The classic example is a fax machine, whose utility is derived in large part from the population of fax machines with which your fax machine may communicate.
Similarly, a consumer can benefit greatly if their software agents are able to interact with and learn from the software agents of other consumers.
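The fax-machine intuition can be quantified with the standard Metcalfe's-law-style argument: with n interacting agents there are n(n-1)/2 possible pairwise interactions, so potential utility grows roughly quadratically with the population.

```python
# With n interacting agents there are n*(n-1)/2 possible pairwise
# interactions, so utility grows roughly with the square of n.

def pairwise_interactions(n):
    return n * (n - 1) // 2

print(pairwise_interactions(2))    # 1  (two fax machines: one link)
print(pairwise_interactions(100))  # 4950 possible agent interactions
```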
People are already waking up to the potential for new tools to allow consumers to interact in a more "social" manner. Social computing endeavors to provide a social context for our computing activities, centered on users and their interactions. Software agent technology has real potential to help exploit the distributed, massively parallel nature of modern computer networks given the distributed nature of such social interactions.
Human computing focuses on dramatically shifting the balance away from "working with the computer" on its terms, towards the computer working for us on our terms. Software agent technology has the potential of greatly facilitating this shift, primarily by being driven by the evolving knowledgebase that agents will maintain for the consumer. Rather than force the user to deal with the artifacts of traditional computing, software agents will have an increasing ability to comprehend and work with the human artifacts of the consumer. This is about more than simply the user interface; much of the attention is focused on the knowledgebase of the consumer.
Even today, ad-hoc groups form on the Internet and Web, but there is minimal support for them overall. Software agent technology can provide the infrastructure support to enable informal groups, called tribes, to come into existence and flourish. Agents can also assist tribes in codifying and promoting group social values. And all of this is possible without the need for the group to invest resources and effort in building the kind of software infrastructure that traditionally would be required for such intensive social interaction.
A consumer's software agents can dynamically seek out other consumers with whom the consumer might have a common cause, such as taking a position on an issue. The collection of consumers who are likeminded can be thought of as a dynamic coalition.
Polls can be taken, not by explicitly surveying consumers, but by querying the software agents that a consumer may have authorized to disclose various levels of information about the consumer's views.
Dynamic coalitions come into existence and vanish as rapidly as consumers' views evolve.
A consumer can also indirectly join a coalition, by delegating their own position on an issue or whole categories of issues to some other consumer or authority or organization whom they trust. They can take back that delegation at any time. They can also authorize such a categorical delegation with exceptions, such as where they generally agree with the delegatee, but override selected or sub-categorical positions.
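Categorical delegation with exceptions has a simple resolution rule: an explicit override by the consumer wins, otherwise the delegatee's position applies, otherwise the consumer's own stated view. A sketch with entirely hypothetical names and positions:

```python
# Sketch of categorical delegation with exceptions: a consumer delegates
# a whole category of issues to a trusted party, but overrides selected
# issues with their own position.

def position(consumer, issue, category, views, delegations, overrides):
    if issue in overrides.get(consumer, {}):            # explicit exception
        return overrides[consumer][issue]
    delegatee = delegations.get(consumer, {}).get(category)
    if delegatee is not None:                           # categorical delegation
        return views.get(delegatee, {}).get(issue)
    return views.get(consumer, {}).get(issue)           # own stated view

views = {"trusted-org": {"drilling": "oppose", "transit": "support"}}
delegations = {"alice": {"environment": "trusted-org"}}
overrides = {"alice": {"transit": "oppose"}}

print(position("alice", "drilling", "environment", views, delegations, overrides))
# oppose (inherited from the delegatee)
print(position("alice", "transit", "environment", views, delegations, overrides))
# oppose (alice's own override of the delegatee's position)
```

Taking back a delegation is just removing the entry, which is what makes such coalitions dynamic.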
There is no vendor or explicit service needed to initiate a dynamic coalition, but simply the consumer expressing their views and authorizing their software agents to selectively make that knowledge available.
Virtual communities exist today, but usually they are server-based. Similar to dynamic coalitions, software agent technology can facilitate and support the formation and prosperity of virtual communities.
As an example, software agents acting on behalf of the consumer can monitor and filter activity in virtual communities and alert the consumer when specified interests are being referenced. The consumer may also authorize software agents to act on their behalf in designated virtual communities.
There is a natural tendency for groups of agitated individuals to congregate in mobs, potentially resulting in violent or at least disruptive behavior. Further, the advent of personal communications technologies has resulted in the evolution of smart mobs. The Web frequently exhibits similar forms of behavior, especially with blogs or blog mobs. The real challenge is not to eliminate mobs or crowds or even to try to rein them in, but to enable forms of communication and interaction which make it less likely for smart mobs to be vehicles for destructive impact on society, and instead make them an option for constructive contribution to society. One technical problem is that it is difficult to express a large body of knowledge in a simple conversation or short message. Software agent technology can offer a technical solution by enabling the exchange of significant amounts of knowledge between the software agents which represent the individuals in a smart mob. The software agents for each individual can then alert the individual as to specific bits of knowledge that are most relevant to the situation at hand. The concept is simple, but much research is required to make it practical.
The politics of democracy and the political process itself is quite tricky. Still, software agent technology can help to mediate and facilitate various aspects of the political process. Much thinking, research, and difficult decisions are needed before online democracy can become a full-blown reality.
Consumers have a critical need to determine whether to trust information and services, or the extent of their trust. Assessment of reputation is part of that process. Consumers are also a source for information about reputation. Software agent technology has a role to play in monitoring, evaluating, and propagating information related to reputation and trust. This is yet another area where significant research is needed.
Ethics and encouraging acceptable behavior is an important quality of any consumer environment. Deontic logic is an approach to formalizing thinking about "ought" or regulative behavior. Software agent technology has a role to play, whether by playing cop or simply monitoring activities and alerting consumers to suspicious activity. Software agents can also assist groups and communities in formulating and managing their own systems of ethics.
Provenance relates to keeping track of the source and history of knowledge, including facts and assertions. Provenance is useful both for the consumer, whether for curiosity or to assess reliability and trust, and for software agents which may make decisions about knowledge based in part on its provenance.
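One simple way to realize provenance is to have every assertion carry its source plus the chain of assertions it was derived from. A sketch, with all sources and facts hypothetical:

```python
from dataclasses import dataclass, field

# Sketch: each assertion in the knowledge web carries its provenance
# (source and derivation history) so agents and consumers can judge
# its reliability before acting on it.

@dataclass
class Assertion:
    fact: str
    source: str
    history: list = field(default_factory=list)

    def derive(self, new_fact, by):
        # a derived assertion inherits and extends the provenance chain
        return Assertion(new_fact, by, self.history + [(self.source, self.fact)])

a = Assertion("store X sells widgets", source="store-x-catalog")
b = a.derive("widgets are available locally", by="shopping-agent")
print(b.history)  # [('store-x-catalog', 'store X sells widgets')]
```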
The combination of a vast knowledgebase and the activity of intelligent software agents may lead to the need to consider the psychological aspects of the Consumer Knowledge Web. Whether consumers consider the CKW to be intelligent is one thing, but at a minimum it is likely that the CKW will have at least some psychological impact on consumers. We certainly don't want consumers to feel overwhelmed by the knowledge or the software agents within the CKW, but how to minimize any negative psychological consequences remains an open research question.
Consumers have a fair amount of experience dealing with traditional information such as text, numbers, images, and media, but few consumers have had any experience interacting with a computer in terms of knowledge. It is difficult to predict how consumers will initially react, or how their attitudes will evolve towards knowledge as a form of information and as a media. Some consumers will relish the thought of teaching or feeding knowledge into the computer, while others may recoil with horror. Much research is needed.
Consumers have a fair amount of experience interacting with the computer as an information appliance, but since few computer applications exhibit much in the way of intelligent behavior, much needs to be learned about how consumers will feel about interacting with the intelligent computational entities that we call software agents. Some consumers will find it a satisfying experience, some will find it uncomfortable, and some may even find it worrisome, belittling, dehumanizing, or even threatening. Much research is needed. The advent of a true knowledge appliance will be an eye-opening experience for most consumers.
The combination of a vast knowledgebase and intelligent software agents that are constantly operating within that knowledgebase is a prospect that most consumers have never had to consider, so predicting consumer attitudes towards the combination of the two in the Consumer Knowledge Web is an uncertain proposition. Prototyping of interfaces and simulations of the CKW using real humans on the other side of the interfaces may help, but the sheer complexity of the types of potential interactions precludes full simulation in advance of initial deployment.
On demand is one of the popular mantras for services these days, but a more dramatic approach to empowering consumers in the future is the concept of never need to demand, which is enabled using software agent technology that is always anticipating user needs. Yes, we do need to support on-demand knowledge, but never-need-to-demand knowledge is what we really want.
Traditional software is more focused on the destination or end-point of the task and what it takes to get there than on providing richer support for the journey itself. Value-oriented software agents can offer the consumer more satisfying support oriented towards the open-ended direction the consumer is interested in exploring.
One of the great lingering technical problems for consumers is where and how to store their data, and the problem only gets far worse as we lean more on digital technology in the years and decades ahead and seek to store information and knowledge that is of ever-higher value.
Storing consumer knowledge on a hard-drive, flash-drive, CD, DVD, remote server, P2P network, etc. is not the answer. New approaches are needed. The P2P network approach shows some hope, but is far too primitive for robust storage of data whose value may span many decades.
A subset of the problem is that a lot of knowledge will reside within the internal state of the many software agents that pursue the needs and interests of each consumer. Mechanisms are needed to give that knowledge persistence.
Even where vendors offer remote servers, there remain issues of geographical diversity, vendor longevity, and simply the preference of consumers to not be locked into a single vendor.
Plenty of research is needed.
I have sketched out a preliminary proposal for a subset of this problem, called a Distributed Virtual Personal Computer or DVPC, but even that proposal falls far short of the full needs for persistent storage of consumer knowledge.
Some information about a consumer may be stored without their knowledge or awareness. Such indirect personal information is common in traditional information systems, as well as the Web and even Web 2.0 (e.g., web cookies), but the goal should be to eliminate all such information. Instead, information about consumers should only be stored in forms that the consumer has complete control over, including software agents that answer only to the interests of the consumer. Rather than directly controlling a consumer's personal information, the goal is to implicitly provide access to the effects of such information by interacting with the software agents that are under the control of the consumer. And of course the consumer controls who can access even their software agents.
Even today, it is enormously difficult for consumers to keep track of all the information on their computers and other digital devices. The magnitude of the problem will only get worse as devices and applications evolve over the coming years. Software agent technology can address this problem since individual software agents are designed to thrive in complex knowledge webs and manage large volumes of information. The consumer stays focused on setting goals, and the software agents focus on seeking out the knowledge needed to meet those goals.
Once we place software agents in charge of managing knowledge, the consumer no longer needs to waste any energy "shuffling virtual paper" to satisfy their needs and interests.
A simple relational database is insufficient for organizing consumer knowledge. An ontology is a description of all that exists for the domain that it covers. A taxonomy is a hierarchical categorization of the entities of a domain, such as in biology. Tagging is a simple approach by which users themselves associate consumer-defined attribute names with entities that they care about. A tagsonomy or folksonomy is a taxonomy-like organization of entities that is derived from the tagging that is performed by a collection of users. Even modest-size ontologies, taxonomies, tagsonomies, and folksonomies can quickly become far too voluminous and cumbersome for people to comprehend and navigate, let alone use effectively. Software agent technology can use contextual information to provide consumers with personalized views of such categorizations of knowledge. More than simply filtering the data, software agents can interact with the software agents of other consumers and collaboratively work with the structure of consumer knowledge.
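The derivation of a folksonomy from individual taggings is essentially an aggregation: the structure emerges from counting which tags many users independently attach to the same entity. A sketch with hypothetical users, entities, and tags:

```python
from collections import Counter

# Sketch: a folksonomy emerges by aggregating the tags that many users
# independently attach to the same entities.

taggings = [
    ("alice", "photo-123", "sunset"),
    ("bob",   "photo-123", "sunset"),
    ("carol", "photo-123", "beach"),
]

def folksonomy(taggings):
    # entity -> tag frequencies, the raw material of a tagsonomy
    by_entity = {}
    for user, entity, tag in taggings:
        by_entity.setdefault(entity, Counter())[tag] += 1
    return by_entity

print(folksonomy(taggings)["photo-123"].most_common(1))  # [('sunset', 2)]
```

A software agent's personalized view is then a matter of weighting these shared counts by the individual consumer's own context and interests.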
Taxonomies are actually very complex knowledge structures. They may seem simple, and initial implementations of them have been somewhat simple, but they require sophisticated tools and software infrastructure to work well. Implementations such as the Yahoo directory, the Google directory, and the Open Directory Project (ODP) work to some extent, but fail for most uses. The extent of that failure is illustrated by the popularity of text search engines such as Google compared to the Yahoo directory. One of the primary causes of the failure is that there are not sufficient tools, especially at the consumer level, for setting up and working with taxonomies. The ultimate failure is the fact that taxonomies (and related directories) are not 100% automated. Software agent technology is an approach that can be used to mediate and facilitate interactions between consumers and taxonomies. Consumers need the complexity of knowledge embodied in taxonomies, but are ill-equipped to work with taxonomies directly. That consumers need taxonomies is demonstrated by the popularity of tagging.
Given the difficulties encountered when human beings are assigned the tasks of building directories and taxonomies, it makes much more sense to hand the tasks off to intelligent software, in particular software agent technology, which can constantly monitor the knowledgesphere and contribute to taxonomies and directories as new knowledge becomes available. These auto-directory and auto-taxonomy capabilities can add some very necessary structure to the global knowledgesphere, dramatically simplify the tasks of knowledge workers, and more fully empower knowledge consumers.
The MyLifeBits Lifetime Store is a research project spearheaded by Gordon Bell at Microsoft that endeavors to store everything about your life. Although focused on media artifacts, it does offer an interesting adjunct to the activities of software agents operating on the behalf of the consumer. And it does address the issue of storing photos, video, and other consumer media.
Knowledge is not static and evolves over time. There are two tasks here: 1) keeping up with the evolution of knowledge, and 2) participating in the evolution of knowledge. Software agent technology can enable and assist with both.
Consumers themselves can and should be participating in the evolution of knowledge. Software agent technology can both enable and assist the consumer as they evolve knowledge, but software agents can also directly evolve knowledge even without consumer direction.
Knowledge will be generated and modified constantly. Distributing it is a major challenge. One possibility is the concept of a web feed such as is commonly associated with web logging (or weblogging or blogging). Also known as RSS and RSS feeds, but not limited to that specific feed format, web feeds can be used to distribute any type of information, as well as knowledge itself. Specific formats would need to be developed to deeply support knowledge feeds. One problem with the current technology implementation is that the user software must go through the effort of explicitly reading the web feeds of interest, which is fine for a small number of feeds, but clearly unsuitable when the number of knowledge sources rises into the thousands and even millions. Fortunately, there is no shortage of potential solutions to this issue.
The primary intent here is for a mechanism for communication of knowledge between software agents, but there is also significant potential for communication of knowledge to consumers, as well as an input channel to enable consumers to communicate knowledge to their agents.
In addition to the use of web feeds for communicating with consumers, they are also an excellent communication model for software agents themselves. In general, the primary output of any software agent might be a web feed or knowledge feed which represents the results of the efforts of that software agent. The use of software agent feeds could dramatically raise the interaction power of the Consumer/Agent Web.
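To make the idea of an agent publishing its results as a feed concrete, here is a minimal sketch that renders an agent's output as bare-bones RSS 2.0. The agent name, result fields, and content are all invented for illustration; as the text notes, a real knowledge feed would need a much richer, purpose-built format:

```python
import xml.etree.ElementTree as ET

def agent_feed(agent_name, results):
    """Render an agent's latest results as a minimal RSS 2.0 document.

    'results' is assumed to be a list of dicts with 'title' and
    'summary' keys -- a placeholder for a real knowledge-feed schema.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Results from {agent_name}"
    for r in results:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = r["title"]
        ET.SubElement(item, "description").text = r["summary"]
    return ET.tostring(rss, encoding="unicode")

xml = agent_feed("travel-agent",
                 [{"title": "Fare drop", "summary": "NYC-SFO under $200"}])
```

Other agents (or the consumer's feed reader) could then subscribe to this output exactly as they would to any weblog, which is what gives agent feeds their network-effect potential.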
Although software agents operate autonomously most of the time, there is occasionally a need for communication with the consumer. At those times, a social user interface (SUI) is highly desirable, allowing the consumer and agent to communicate in a mode that is convenient, efficient, effective, friendly, and non-intimidating. A SUI would include elements of natural language, speech, gestures, and facial expressions, among other techniques. This remains a research topic.
Needless to say, technology to support large numbers of interacting consumers needs to be scalable. More than simply the capacity to handle the volume and traffic, the concepts supported by the technology need to be capable of transcending scale. Software agents, acting on behalf of their respective consumers, offer capabilities to operate on very large scales. Consumers need scalable categories for concepts so that they can interact with other consumers who might be working at a different but relevant level of conceptual categorization.
Because of the intensity of trust required on the part of consumers to put their faith in software agents, and a desire to foster and stimulate a robust, vibrant, and innovative community, it would be wise for software agent technology to be as transparent as possible, suggesting that open source software be the rule, although there may be exceptions.
As important as it may be for the code of software agent technology to be open source, it is far more important that the data formats used by software agents be open. By adhering to an open data approach, we can greatly facilitate interoperability and network effects.
There are really two distinct elements of open data:
Vast amounts of information are available online on entertainment opportunities. Software agent technology, through its knowledge of the consumer's interests, can mediate and facilitate the exploitation of entertainment opportunities. In some cases, interaction with other consumers can provide additional entertainment opportunities. Software agents can alert consumers to opportunities that they were unaware of or never even imagined.
Software agent technology can also be used to implement online entertainment capabilities.
Traditional computer software applications have either been bundled with hardware or a service, licensed for a fee, or subsidized with advertising. This presents a challenge since the bulk of software agent technology runs autonomously and has no user interface to support advertising. Software agent technology also tends to be very fragmented, distributed over many computer systems, and to access resources across networks, further complicating any attempts to erect "toll booths". Finally, the economic value to the consumer will vary widely, so there is no clear method for assessing consumers for "costs" relative to the value that is delivered.
Deployment of software agent technology on a massive scale would clearly place significant load on existing network and computer system infrastructure. That cost must be shouldered somewhere, by somebody.
One technical issue is that the amount of resource usage needed to satisfy a consumer request will tend to be non-obvious, so simply presenting a bill after the fact could potentially be so shocking as to be a complete non-starter.
If Microsoft Bob had not become a reality and such a commercial flop, people would still be seriously talking about the need for and potential benefits from a Bob-like application with a "social interface". Clearly, Bob had its faults, but maybe not so clearly, Bob also embodied quite a number of valid concepts. We've thrown the baby out with the bath water, but maybe we can recover enough fragments of Bob's DNA to do a thorough analysis of the good, the bad, and the ugly of Bob so that we can develop a set of principles for going forward.
Yes, Bob was a commercial disaster, but we can do better, much better.
TBD: detail the lessons from Microsoft Bob
TBD: modest proposal for "Next Generation Bob"
Ray Kurzweil is certainly a very bright guy, even if there is no reliable metric for judging prognostications about the future. The vision in his new book "The Singularity Is Near: When Humans Transcend Biology" is not incompatible with my thoughts expressed here. Yes, he has a much loftier vision of melding the human brain with artificial intelligence, robotics, nanotechnology, and genetic technology, but none of that would preclude anything I'm suggesting here. His idea of "near" is forty years, and I'm merely hypothesizing about more mundane objectives within the next two to five to ten or maybe twenty years.
However much of a technology advance is required to achieve Kurzweil's Singularity, I would hypothesize that the use of software agent technology for knowledge-based computing as envisioned in this paper may be less than 1/10,000th of 1% of what Kurzweil's vision would require. The bottom line is that even if Kurzweil's vision is wrong or delayed, the vision espoused here is still quite practical.
Although there are many features of modern software which exhibit agent-like characteristics, the sense of agency tends to be constrained by the general form of computing model that is being utilized:
Email is great since it enables asynchronous communications, but it adds negligible intelligence to the communications.
Chat rooms can be fun and offer a social atmosphere, but again offer negligible intelligence to the mix.
Auction systems such as eBay enable a new twist on ancient haggling, but again offer negligible intelligence to the mix.
Shopping "bots" begin to add a little intelligence, but not much.
In all cases the best we're looking at is large databases, distributed computation, and rapid exchange of information. Those capabilities are great, but the sense of agency and intelligence is still missing.
We have technology for users to collaborate, but they are little better than traditional email and telephone exchanges.
We have technology to distribute raw computing power, but little in the way of distributing knowledge and intelligence.
Traditional computer programs are great for automating discrete tasks or sequences of procedural steps. The real promise of software agents is to move a step higher and automate the pursuit of goals, where the idea is known, but the precise path to fulfill the idea is not known in advance. The agent would have the responsibility of taking a goal and decomposing it and recomposing it as implicit tasks to be performed using resources and services available in the computing environment.
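The decomposition idea above can be sketched in a few lines. Everything here is a toy illustration: the rule table, goal names, and task names are invented, and a real agent would select and recompose tasks dynamically based on the resources and services actually available in its environment:

```python
# Illustrative decomposition rules: a goal maps to sub-goals, which may
# in turn decompose further. Unknown names are treated as primitive tasks.
RULES = {
    "plan_trip": ["book_flight", "book_hotel"],
    "book_flight": ["search_fares", "purchase_ticket"],
}

def decompose(goal):
    """Recursively expand a goal into a flat list of primitive tasks."""
    if goal not in RULES:
        return [goal]          # primitive task: execute directly
    tasks = []
    for sub in RULES[goal]:
        tasks.extend(decompose(sub))
    return tasks

plan = decompose("plan_trip")
```

The consumer expresses only the top-level goal ("plan_trip"); the path through "search_fares", "purchase_ticket", and "book_hotel" is derived by the agent rather than scripted in advance, which is the step up from procedural automation that the text describes.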
A giant leap can be made in the ability of software agents to satisfy the needs and desires of consumers once we begin to support a machine-readable form for the values that a consumer has. That will dramatically simplify the consumer's task of expressing goals.
We can gain yet another giant leap in leverage for the consumer by empowering them to express their ideals as well.
An even greater leverage for the consumer will come once we have mechanisms for consumers to express their life goals.
Knowledge of a consumer's goals, values, ideals, and life goals will enable software agents to have a significant level of insight into how a consumer's needs and desires can be optimally satisfied.
The long-term goal is that the computing infrastructure will vanish into transparent ubiquity, meaning that computer hardware and software will be everywhere and operating automatically so that users don't even notice its existence, but that's for the long term. In the interim, the goal is to make computing increasingly more ubiquitous and increasingly more transparent. Software agent technology is a key component of this vision, enabling software to operate on the user's behalf without needing to be visible to the user.
As we progress towards transparent ubiquity, the user interface begins to vanish as a computing artifact and begins to blend in with the objects around us. So, we begin to converge towards the ultimate user interface: life itself. By interacting with objects around us we give the underlying software input. That natural input coupled with the vast knowledgebase transparently and implicitly available to our software agents provides the vast bulk of the information needed for software agents to pursue our goals, values, ideals, and life goals.
As we make progress on causing the computing infrastructure to vanish into transparent ubiquity, users will be able to observe that computing functions will begin to retreat into the background. Initially the user will still know that the computing functions are still there, but over time that knowledge will begin to fall away from the user's consciousness.
With advances in GPS and wireless networks, computer software within handheld devices can now tailor their behavior to the specific geographic location.
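A small sketch of the kind of location-tailored behavior meant here: given the device's GPS fix, filter candidate points of interest down to those within reach. The haversine formula is standard; the point-of-interest data is invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby(items, here, radius_km=5):
    """Keep only the items within radius_km of the device's position."""
    return [i for i in items
            if haversine_km(here[0], here[1], i["lat"], i["lon"]) <= radius_km]

pois = [{"name": "cafe", "lat": 40.713, "lon": -74.006},
        {"name": "museum", "lat": 40.780, "lon": -73.960}]
close = nearby(pois, (40.7128, -74.0060), radius_km=2)
```

An agent on the handheld would apply exactly this kind of filter before ever consulting the consumer, so that only locally relevant options surface.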
As computing devices become smaller and cheaper and easier to connect, they will become pervasive and embedded in virtually everything around us. This is known as ubiquitous computing. Once the hardware infrastructure for ubiquitous computing is in place, software agent technology can be distributed on that infrastructure and begin to offer services in support of users in such environments. This is known as ambient intelligence, intelligence that is everywhere around us without the need to communicate with computers using old-fashioned user interfaces.
Software agents running in such an environment can tap into both the users in the environment and the knowledgebases for those users, to the extent that each user enables such access.
Although software agents tend not to have user interfaces and operate "under the covers", users still need to have some conception of how they view the system and what it is doing on their behalf. Even when we finally do get to the point of transparent ubiquity, the user will still have some conception that objects around them are behaving in somewhat predictable ways. So, even as we seek to further reduce the plethora of conscious computing artifacts, we need to be cognizant of the fact that users are always going to need a user model of software agents. Maybe it is as simple as "my agents" or "the system" or "the Internet" (or "the Agent Net").
Part of the user model will relate to the process by which agents learn about the consumer's interests. Part will relate to how quickly the agents accomplish the consumer's goals. If software agents acting on the behalf of a consumer are taking an extended period of time to accomplish a goal, then the consumer will need to be aware that the software agents are "working on it".
Ultimately, it may simply be old-fashioned human folklore that determines the nature of the user model for software agents, but it would be wise to seed the consumer consciousness with some useful facts.
All too often, someone comes up with an interesting innovation, but the implementation is far too primitive and toy-like to be very deeply satisfying for a broad range of users. The implicit power of software agent technology makes it all too easy to produce tools that seem powerful but are in practice toy-like. The issue is not that a tool might appear to be toy-like, but that it actually is too shallow and limited to be very useful.
Subject to the admonition to avoid toy-like tools, there is much merit to tools that are as friendly and easy to use as toys and games which engage the user's desire to have fun while pursuing interests. Activities with significant elements of play to stimulate the user's interest, motivation, and mental processes are to be highly valued.
Sure, we could come up with quite a long list of potential consumer applications of software agent technology, but the real point is that quite literally every known and conceivable aspect of consumer behavior is a potential target for application of software agent technology, and then some.
One of the potentially more fruitful avenues of pursuit is to use software agent technology to automate autonomic tasks and goals, things that consumers want and need done on their behalf but don't want to have to consciously consider every moment of every day.
The main point I would make here is that the more interesting consumer applications of software agent technology are those in which each agent is taking advantage of a rich, deep knowledgebase of information about the consumer's background, beliefs, desires, and intentions, as well as generic knowledge models for consumers in general and various subclasses of consumers. Each software agent that comes along and interacts with the consumer will be able to tap into this knowledge and add to it as well, subject to privacy constraints that are ultimately controlled by the consumers themselves.
Obviously we need robust storage and access control for such knowledgebases so that consumers can feel comfortable that their personal information is both kept confidential and is not at risk of being lost. We need much better storage systems than are presently available for even the most security-conscious organizations.
At this stage it would be pure speculation to visualize what future consumer-oriented knowledge-based applications will turn out to be killer apps that help consumer-oriented knowledge-based computing really take off. Markets evolve, so the profile of future consumers and their interests has yet to evolve. Besides, the focus of this vision is the platform rather than specific applications. Nonetheless, it is important to contemplate the characteristics that such applications might have since they, rather than the platform nature of this vision, will be what actually draw in real, live consumers.
Software agent technology is especially appropriate for facilitating direct consumer-to-consumer (C2C) applications such as barter of goods and services. More than simply directly matching consumers, agents can greatly assist in integrating the long chains of demand that may be needed to successfully complete barter transactions where the two originating consumers don't have a direct matching interest. And all of this without complex, centralized servers.
There is nothing terribly new about geographic information systems (GIS), but lately more mapping capabilities have been made available on the Web, including Google. There have even been rudimentary efforts to add some consumer-oriented application features, but to-date the efforts remain quite primitive. What is needed is a much richer infrastructure that is capable of supporting very rich consumer GIS (Geographic Information System) applications. The basic capabilities may seem obvious, but without a rich infrastructure, building of rich applications remains tedious, error-prone, beyond the skills of the average developer or consumer, and frequently outright impossible. Once again, software agent technology can facilitate the development and deployment of rich consumer applications, such as those that integrate the knowledge and interests of multiple consumers.
The heart of efforts to support consumers should be an architecture of participation (a term used by Tim O'Reilly) which empowers consumers to interact and collaborate and organically build their own sense of community. The consumer-centric knowledge web is too complex to be built purely by centralized effort, so it depends on the unlimited growth potential inherent in an architecture of participation.
Many existing agent-like applications for consumers require a centralized server to facilitate interactions among consumers (e.g., auctions in eBay). In contrast, the real power of software agent technology is to enable consumer-to-consumer interactions which enable consumers to directly interact without the need of server-based centralized authorities. In essence, this is a form of Peer-to-Peer (P2P) computing, the difference being that decentralized software agents operate as intermediaries between consumers, under the control and authority of the consumers themselves.
The core concept is that consumers can interact directly (actually indirectly, through the software agents that the consumers themselves control and authorize), rather than requiring some vendor or third-party intermediary who controls the interactions.
In any case, consumer-to-consumer electronic commerce is clearly a fertile domain for application of software agent technology.
Social networking has gained a fair amount of popularity, but simply hasn't gained the traction to be a general consumer phenomenon. Current social networking tools and applications and web sites appeal to certain types of personalities (e.g., the elite, the pundits, the leading edge, the lunatic fringe), but not to the average consumer's sense of community and socializing.
Consumer networking and social networking merely set the stage for a more powerful category: consumer collaboration, where consumers are not simply communicating, but actually engaging in projects together. Software agent technology can not only facilitate such projects, but also instigate and initiate them based on the knowledge and interests of the consumers that is available to their software agents.
Software agents can both sift through the vast amounts of networked knowledge to find information of interest to the consumer, and can actually reach out and make contact with the software agents of other consumers who might have a common interest.
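One way two agents might discover a common interest without either side disclosing its full interest list is to exchange only salted hashes of interests. This is a deliberately simplified sketch (a small interest vocabulary could be brute-forced, so real privacy-preserving matching would need proper private set intersection cryptography); all names and data are illustrative:

```python
import hashlib

def _blind(interest, salt):
    """Hash an interest so agents can compare without revealing raw terms."""
    return hashlib.sha256((salt + interest.lower()).encode()).hexdigest()

def common_interests(mine, theirs_hashes, salt):
    """Return my interests whose hashes appear in the other agent's list.

    Only hashes cross the wire; my agent learns the overlap, and the
    other agent never sees my non-matching interests in the clear.
    """
    return sorted(i for i in mine if _blind(i, salt) in theirs_hashes)

SALT = "shared-session-salt"            # agreed per conversation
their_agent_sends = {_blind(i, SALT) for i in {"jazz", "cooking", "chess"}}
overlap = common_interests({"hiking", "jazz", "chess"}, their_agent_sends, SALT)
```

The overlap ("chess" and "jazz" here) is exactly the kind of spark an agent could use to propose contact between two consumers, while everything outside the overlap stays private.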
Software agents can simultaneously pursue the interests of the consumer, while protecting the privacy of all consumers. By protecting consumer privacy, consumers can feel more confident in giving their software agents freer rein and a wider reach.
The concept of active software agents enables a consumer's software agents to constantly be seeking out and possibly even pursuing collaboration opportunities that the consumer may not yet be consciously aware of. The potential for such implicit collaboration boggles the mind.
Yes, the consumer still maintains control over the extent to which such opportunities might be pursued, but the consumer is freed from needing to do all the dog work to uncover the opportunities.
Analogous to the concept of business intelligence (BI), which aims to work with knowledge about the processes within a business, consumer intelligence (CI) aims to allow the consumer to work with knowledge about their own lives.
One of the most obvious and richest applications of software agent technology is to have software agents which have been programmed with knowledge about your career and life plans and can offer guidance along the way. The life mentors can offer advice and assistance with the many forms of planning that occur in our lives, including nutrition, health, education, housing, financial affairs, career, family, etc.
The concepts of life mentor and life agent are closely related, but the key difference is that a life mentor is more of an assistant that gives you feedback and suggestions and advice, but life agents can also simply do useful things for you that you may not even know or care about.
Put a different way, a life mentor would address tough, growth-oriented conscious decisions, whereas a life agent can also address subconscious details of the consumer's life.
Software agent technology enables a richer and deeper semantic modeling for the learning process which can provide a more robust level of support for consumers as they transition through the many stages of learning throughout their lives. Lifelong learning will become a concept recognized by software agent applications rather than a concept that is exterior to the world of computer software.
The role of the Consumer/Agent Web in traditional education is an open question. Certainly software agent technology can be of great assistance, but traditional education is such an emotionally and politically-charged area, that much more careful thought is needed.
If given the opportunity, software agents can assist individuals in learning by keying off the student's existing knowledgebase, especially when coaching and mentoring might be needed.
Software agents could greatly facilitate cooperation, collaboration, and project-oriented work by groups of students.
Software agent technology can be used to generally assist in empowering consumers, helping them to identify opportunities for pursuing their interests.
One important way that software agents can assist consumers is to facilitate leadership. Rather than being merely passive consumers or even pursuing a modest degree of activity, consumers can be empowered to take on leadership roles. Software agents can help to identify opportunities for leadership and facilitate consumers being able to exploit such opportunities. Software agents can assist consumers in gaining access to the knowledge needed to pursue leadership opportunities.
By its very open-ended nature, software agent technology is inherently oriented towards supporting creativity. By comprehending the consumer's interests and having access to the vast networked knowledgebases, including those of other consumers, software agents are uniquely positioned to offer support and suggestions for the consumer's creative pursuits.
Beyond support for creativity, software agent technology with knowledge of the consumer's interests and behavior can support that special portion of the consumer's mind known as their imagination, the source and driver for their creativity.
By providing support for organizing ideas, thoughts, and images, software agents can become an adjunct to the consumer's own imagination.
Going beyond mere organization, software agents can take a more active role and retrieve information from knowledgebases, interact with the agents of other consumers, and even facilitate the direct interaction of consumers, when appropriate and enabled by the consumers themselves, to enable them to enhance each other's imagination.
Everybody has dreams (the conscious kind) and aspirations, but pursuing them and achieving them is another matter. Software agent technology can help. First, consumer applications are needed to assist the consumer with expressing their thoughts about their dreams, hopes, and aspirations. With that knowledge in the consumer's personal knowledgebase, software agents can then seek out global knowledge and interact with software agents representing other consumers and even mentors to exploit knowledge that can be shared. Of course the consumer's privacy will be completely respected, but the software agents working on the consumer's behalf can alert the consumer to resources and contacts that can help the consumer pursue their dreams and aspirations. Software agents can also help if the consumer is unsure of their dreams and aspirations and seeks information, advice, coaching, and mentoring. Not that the software agents can necessarily act in that capacity themselves, but the global knowledgebase and global web of software agents representing other consumers is a vast resource that can be tapped. The precise modalities of support for dreaming and aspiring are far from clear, but what is clear is that it is an area which is deserving of significant research.
Natural language interfaces are notoriously tricky and extremely dependent on domain, context, and the users. The knowledgebases maintained by software agents contain a wealth of domain, context, and user knowledge which has the potential to provide a rich enough level of guidance to natural language interface software so that realistic natural language interfaces become much more practical.
By maintaining as much knowledge as possible in a language-neutral semantic format, consumers will be able to access a vast amount of knowledge that would not otherwise be easily accessible if it were stored as raw natural language text. Software agent technology can be used to facilitate the origination, translation, and management of knowledge in both semantic and natural language formats.
By enabling consumers to communicate in higher-level semantics, consumers who read and write and speak dissimilar natural languages will in fact be able to communicate, at least to some degree.
As a general rule, the relationship between consumers and vendors is most fruitful with an opt-in approach to communication and commitments. Consumers will benefit greatly by knowing that they are always being treated fairly by vendors.
Software agent technology does provide an interesting twist since the consumer has the ability to delegate some degree of opt-in authority to their own software agents. But, the key is that the consumer has that control and would need to opt-in to delegate any of that control. The consumer will also have the authority to rescind any of that delegated authority at any time and for any reason.
There are plenty of consumer applications that could benefit greatly from the use of software agent technology, but we need to focus first on who is most likely to use these new technologies and applications, trusting that interest will then gradually filter out into the broader demographic base (e.g., your average "dumb user"). It would appear that both ends of the demographic spectrum are more likely to quickly adopt the new technology and applications than the middle of the demographic curve. The high-end demographic is likely to be professionals who keenly sense high value from a focused use of the technology. The lower-end demographic is likely to be kids (say 15 to 25 years of age) who find the new possibilities of the technology and applications to be exciting, cool, challenging, and a great way to rebel against the odd, stodgy, entrenched traditional applications.
Professionals will appreciate the ways that software agent technology can adapt and be adapted to suit their specific needs, while kids will appreciate the creativity that software agent technology offers them.
Given the severely limited graphical user interface of handheld devices, including mobile phones, software agents would seem like a natural technology for assisting users of such devices.
Traditional user interfaces and even high-end graphical user interfaces are more procedure and task-oriented, so anything that shifts the balance towards the goal-oriented end of the spectrum has the potential of dramatically lightening the user interface burden for handheld devices.
Many common goals could be pre-programmed into the handheld or server software. Then, in conjunction with a knowledgebase about the user and context of the physical handheld device (e.g., physical location and accessible local devices), a far richer level of defaults can be made available to the user.
Mobile environments such as cell-phones, handheld-devices, and motor vehicles present a whole new level of application considerations that were not an issue for fixed computers. Once again, the added complexity is a great match for software agent technology. Software agents can be readily applied to every consideration that arises in mobile environments.
There are three forms of mobile agent applications: 1) applications that run in mobile, handheld devices, 2) applications that run on servers in support of mobile devices, and 3) applications composed of migratory software agents that are able to move or be moved between computer systems, including mobile and handheld devices. In all three cases, software agents can perform significant functions on behalf of the consumer, with a higher degree of robustness, scalability, flexibility, and user-friendliness.
The important common feature is that there will no longer be a one-to-one correspondence between a hardware device and the software that runs on it. Hardware will be distributed (e.g., mobile devices and accessible servers and ambient computing hardware) as will software (modular components and software agents), and the two will be combined in a dynamic manner as mobile devices move around.
As personal computers and other personal electronic devices begin to take on a larger and more central role in the lives of consumers, the management of the consumer's data becomes a larger and larger problem. This is yet another opportunity for software agent technology. Software agents can transparently assure that data is stored in a secure location and is readily available when and where it is needed.
Local storage in an electronic device such as a personal computer is convenient, but has some drawbacks. People struggle continuously with the issue of how best to "back up" their data, not to mention where to store backups and then how to access them. People also struggle with how to recover from mistakes and mangled data. The distributed virtual personal computer (DVPC) concept is designed to avoid all of these problems. First, the local storage is only a cache or copy of the "real" data, which would be stored on multiple, network-accessible storage systems (not simply one central server). Second, "smart versioning" will allow the user to navigate through all changes in the history of a file so that no data is ever lost. DVPC would automatically propagate changes to the consumer's data to all computers which have been designated to be part of the consumer's virtual personal computer.
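A minimal sketch of the "smart versioning" and propagation ideas just described, under the assumption that every write appends an immutable version and replicates it to several network stores. The class and field names are illustrative only; DVPC itself remains a concept, not an implementation.

```python
# Illustrative DVPC-style sketch: the local copy is only a cache, every change
# appends an immutable version, and each version is propagated to multiple
# designated stores, so no data is ever lost.

import hashlib

class VersionedFile:
    def __init__(self, replicas):
        self.history = []          # full change history, never truncated
        self.replicas = replicas   # multiple network-accessible stores

    def write(self, content: bytes):
        version = {"n": len(self.history),
                   "digest": hashlib.sha256(content).hexdigest(),
                   "content": content}
        self.history.append(version)
        for store in self.replicas:        # propagate to every designated store
            store.append(version)

    def read(self, n: int = -1) -> bytes:
        return self.history[n]["content"]  # navigate to any point in history

r1, r2 = [], []                            # stand-ins for network stores
f = VersionedFile([r1, r2])
f.write(b"draft one")
f.write(b"draft two")
assert f.read(0) == b"draft one"           # earlier version still recoverable
assert len(r1) == len(r2) == 2             # replicated, not merely local
```

A real DVPC would add conflict resolution, encryption, and store discovery; the sketch only shows why versioned, replicated writes make backups and undo fall out for free.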
DVPC would also enable the consumer to selectively make data sharable by other consumers and software agents. DVPC would be an ideal repository for a consumer's software agents to store and access data that belongs to the consumer.
At present, DVPC is only a concept, with no plans in place for its implementation.
The concept of virtual networked bits addresses the issue of having a robust method for storing user data that does not depend on the reliability of a local or master copy of the data, or on manually storing copies elsewhere. The intent here is that all consumer knowledge would by definition be stored as virtual networked bits so that consumers never need to worry about loss of their information and knowledge.
Sharing of consumer data, information, and knowledge is quite problematic with today's computers and networks. Specialized services continue to spring up like weeds to facilitate selective sharing of data, including media such as photos, audio (e.g., podcasts), and videos, but the extent of the underlying problems is amply illustrated by the never-ending emergence of new services. On the other end of the spectrum, there seems to be a never-ending stream of horror stories relating to identity theft, hacking, viruses, etc. demonstrating that keeping information private is as problematic as sharing it. A core issue is that it is at present too difficult for consumers to simply manage their information at all. This suggests the need for the knowledge-based software agent technology that can assist the consumer in managing their information, including the decisions about which information should be kept private, which information should be available to the world, and which information should be available to selected groups of consumers. Software agents can then assist in the dissemination of information to those to whom access is granted.
All of the problems with consumers managing their information point in the direction of a need for a radically different form of file system, a consumer-centric file system, one that may bear no resemblance to the computer file systems of today. More than just a system for organizing computer files, we really need a consumer-centric knowledge organizer, one that comes with an army of automated librarians, implemented using software agent technology, to automatically collect, organize, disseminate, and access the wide range of knowledge that confronts consumers throughout their lives.
Management of medical records remains an unsolved problem. Software agents in conjunction with distributed management of consumer data present an opportunity to both manage medical records better and to give the consumer more control.
Existing, proprietary approaches to automating and managing medical records simply don't have the critical mass to achieve success, and don't even come close to letting the consumer participate in the process.
Much research is needed into how computers and computer networks can be exploited to aid consumers in their health, nutritional, and medical needs. Software agent technology can mediate and facilitate consumer access to information and services. And in some cases, software agents can directly provide services, such as nutritional monitoring. Software agents can also mediate and facilitate interaction with other consumers, such as sharing experiences and support groups.
Although it's too big a leap to suggest that software agents might offer legal advice and eliminate the need for lawyers, there is still a lot of information about a consumer that can be managed more effectively by software agents. Software agents can also monitor the consumer's activity and advise them if there are any situations that might suggest a need for legal advice. This would all be under the control of the consumer. There would be no Big Brother watching over them. Software agents can also keep track of information about consumer transactions and interactions which might be of value in any future consultations with lawyers. And finally, software agents can be used to keep track of past legal proceedings and discussions for future use. Software agents can also track the consumer's current legal situation and make discreet inquiries of other consumers about their experiences in similar scenarios. Since personal details are kept completely private, consumers can effectively have safe conversations about sensitive legal matters with other consumers, knowing that their personal details are explicitly kept out of the discussions by the mediation of software agents.
Everybody encounters uncertainty in their lives on a frequent basis. Coping with that uncertainty is an ongoing problem and can even be paralyzing for some people. Software agent technology can offer consumers assistance with uncertainty, helping them organize their thoughts and consider options and choices. Agents can access common knowledgebases for information relating to decisions where uncertainty is an issue. Agents can make inquiries as to how other consumers with similar profiles have handled similar uncertainty. Agents can hook up the consumer with others trying to cope with the same or similar uncertainty. Finally, software agents can arrange for human mentoring related to the uncertainty.
In any case, keeping a detailed profile of the consumer enables the software agent to have a much more "intelligent" starting point for assisting the user.
Every person plays a number of roles in their life and may also have any number of personae that they express and are known by others. Software agent technology can facilitate the complex and confusing information, knowledge, and interactions that come with playing multiple roles and having multiple personae.
Computer-aided instruction (CAI) has been around for many years (decades), including the current popularity of eLearning, but much of this so-called "learning" is really training. Learning is a much more difficult proposition. In particular, we have the problem of learning how to learn. Once again, software agent technology can be applied. The goal here is not to drill the consumer in a bundle of pre-programmed knowledge, but to give them tools and support that empower them to actually learn on their own, especially in new and unexpected environments. Agents can help by having access to the consumer's profile and history, consulting generic knowledgebases, searching for other users who have had to cope with similar learning situations, and possibly even invoking the aid of a human mentor.
Even the most powerful search engines today are still fairly primitive. Much research is needed to advance the state of the art.
Auto-search means that software agents are continuously monitoring the consumer's interests and activities and automatically initiating search queries to collect information and then organize it in ways that align with the consumer's interests and activities. The goal is simply to give the consumer the knowledge they need, when they need it.
A variety of intelligent search alerts and notification schemes are available today, but in rather primitive forms. In truth, they simply don't work very well even when the consumer takes the trouble to learn how to use the tools. Software agents can be deployed to handle all of the bookkeeping, in conjunction with auto-search, to provide useful and user-friendly alerts and notifications.
Today's search engines focus primarily on searching based on simple keywords, but are clueless about the meaning of those keywords. Knowledge-based software agent technology can exploit the consumer's knowledgebase and context to do a true semantic search based on meaning rather than textual keyword matching.
Although Google and other search engines do have the concept of a search alert, it's rather simple-minded. Going far beyond a simple keyword orientation, software agent technology can support goal-oriented auto-search, which attempts to determine whether newly available information aids in meeting the goals of a consumer rather than merely matching some keywords. So, instead of going to Google to explicitly get information, the consumer can simply sit back as "my agents are on it."
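A toy sketch of the goal-oriented auto-search idea above: an agent holds the consumer's goals and decides whether newly available items advance any of them, instead of the consumer issuing keyword queries. Real semantic matching would rest on ontologies and the consumer's knowledgebase; here a crude term-overlap score stands in, and every name and threshold is illustrative.

```python
# Toy goal-oriented auto-search: score each newly published item against a
# consumer goal and alert only when the item appears to advance that goal.

def advances_goal(goal: dict, item: str, threshold: float = 0.5) -> bool:
    """Crude stand-in for semantic matching: fraction of goal indicators hit."""
    terms = set(item.lower().split())
    hits = sum(1 for t in goal["indicators"] if t in terms)
    return hits / len(goal["indicators"]) >= threshold

goal = {"name": "find a lower mortgage rate",
        "indicators": ["mortgage", "rate", "refinance"]}

feed = ["local bank cuts mortgage refinance rate to 5.9%",
        "new phone released this week"]

alerts = [item for item in feed if advances_goal(goal, item)]
print(alerts)      # only the item that advances the goal survives the filter
```

The consumer never initiates the search; the agent runs this continuously over incoming information and surfaces only what matters, which is the "my agents are on it" experience.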
The difficulty with existing, and even proposed, search engine capabilities is that they still perform simple search and depend on the consumer to initiate and pursue the refinement process. Instead, we need sleuthing, where the consumer simply supplies a few clues and intelligent software agents do the heavy lifting of sleuthing for answers, including reasoning based on real semantics of both the query and the data. Part of this will depend on sophisticated semantic webs, ontologies, and taxonomies; part depends on histories of similar searches (or sleuths); part depends on interacting with the software agents of other consumers. It is a hard problem, and worthy of significant research, but would be well worth the effort.
With all the talk about search engines, personalization, tracking, histories, etc., there is a little too much focus on trying to give the user results that their past history suggests that they would want. Maybe it's just me, but I have a different interest than merely wanting to see stuff similar or related to what I've seen in the past or what people similar to me are interested in. I'm always searching for new stuff, so what I would most like the computer to do is to "Give Me What I Might Want" or GMWIMW.
This is actually the opposite of using my past history to predict what I might be interested in. Rather than taking my history and moving some delta to similar topics that correlate well with my past interests (or even new results of people similar to me), I want to make a quantum leap in some unexpected direction and get results that will likely have the lowest possible correlation with my past interests (or the results selected by people similar to me).
This is what I want the computer to do. Whether this is feasible is another matter.
Actually, I do know for sure one technique that at least offers the possibility of showing me results that I might want: randomly select an item of information that I've never seen before. Now of course that will frequently (usually) give me all sorts of uninteresting stuff that I have absolutely no interest in. That's okay. Just give me a little button so that I can signal topics that should be semi-permanently crossed off my potential interest list. I say semi-permanently, because even then, the computer might periodically query me as to whether some of those topics should really stay on my "do not show" list. It could do this by displaying closely related results (to the results I've expressed an extreme disinterest in) on the off chance that there was simply some superficial detail that discouraged me. In any case, after a short while, the computer would have quite an impressive library of topics and sub-topics that can be weeded out of even a random GMWIMW process.
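The one technique just described can be sketched directly: randomly pick an item the user has never seen, while honoring a semi-permanent "do not show" list that grows one button-click at a time. The class and method names are illustrative, and the periodic re-querying of crossed-off topics is omitted for brevity.

```python
# Minimal GMWIMW sketch: random selection of never-seen items, filtered by a
# user-maintained "do not show" topic list.

import random

class GMWIMW:
    def __init__(self, corpus):
        self.corpus = corpus           # list of (topic, item) pairs
        self.seen = set()
        self.do_not_show = set()       # topics the user has crossed off

    def surprise_me(self):
        candidates = [(topic, item) for topic, item in self.corpus
                      if item not in self.seen and topic not in self.do_not_show]
        if not candidates:
            return None
        topic, item = random.choice(candidates)
        self.seen.add(item)
        return item

    def not_interested(self, topic):   # the "cross off this topic" button
        self.do_not_show.add(topic)

g = GMWIMW([("knitting", "cable stitch"), ("astronomy", "pulsars")])
g.not_interested("knitting")
print(g.surprise_me())                 # only astronomy items remain eligible
```

Over time the `do_not_show` set becomes the "impressive library of topics that can be weeded out" of even a random GMWIMW process.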
I'm not suggesting that GMWIMW should be a random process, but at least there is some hope that GMWIMW could conceivably be implemented.
To me, this is a "growth-oriented" search strategy. One that seeks new paths. One that seeks new horizons. One that seeks enlightenment. One that seeks inspiration. One that seeks innovation. One that almost makes the computer seem to have something like intuition.
On the other hand, I don't presume for one moment that my interests in GMWIMW coincide with those of the average search user.
Still, almost everyone has moments when all the traditional, methodical, and even heuristic strategies and techniques for making incremental forward progress are not getting you anywhere. Those are precisely the times when GMWIMW is the optimal search strategy.
People are instructed to think outside the box, but that's much easier said than done. Software agent technology can help in the sense that a rich context of software agents around the consumer can provide a clear indication of where the box really is, and then the agents can offer the discipline to seek out knowledge and opportunities that really are outside of the consumer's current "box". Software agents can offer the appropriate support for the consumer, whether to hold their hand through the process or to give them a not-so-gentle push to get out of the box. A very wide range of customizable support can be offered.
Software agent technology can also offer consumers "out of the blue" experiences when they wish to "get out of the rut". The rich knowledge context for the consumer, coupled with the ability to exchange information with the software agents for other consumers as well as the knowledgebase of global experiences enables software agents to suggest and even pursue experiences that can be "out of the blue" for the consumer.
Peer-to-peer (P2P) networking, as popularized by P2P file sharing, is a useful computing metaphor, but is made far more powerful when it is intelligent agents that are communicating and exchanging information. Agent-to-agent (A2A) interaction is a very powerful computing metaphor and dramatically reduces the level of consumer interaction required to achieve a consumer's goals.
The agent-to-agent metaphor requires a much more sophisticated level of infrastructure support, but is also capable of delivering a much higher level of intelligent support for both the interaction of consumers and the pursuit of consumer goals.
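A small sketch of the A2A metaphor: two consumers' agents settle a shared goal (picking a meeting slot) by exchanging proposals, with no direct consumer interaction required. The `CalendarAgent` class and its protocol are hypothetical, standing in for the much richer negotiation infrastructure the metaphor would actually require.

```python
# Hypothetical agent-to-agent exchange: each agent knows its owner's free
# slots; one proposes, the other accepts the earliest common slot.

class CalendarAgent:
    def __init__(self, owner, free_slots):
        self.owner = owner
        self.free = set(free_slots)

    def propose(self):
        """Send the owner's availability to the other agent."""
        return sorted(self.free)

    def accept_first_common(self, proposals):
        """Pick the earliest slot both owners can make, if any."""
        common = self.free & set(proposals)
        return min(common) if common else None

alice = CalendarAgent("alice", ["Mon 10:00", "Tue 14:00"])
bob = CalendarAgent("bob", ["Tue 14:00", "Wed 09:00"])

slot = bob.accept_first_common(alice.propose())
print(slot)            # the agents converge on "Tue 14:00"
```

Neither consumer typed a query or compared calendars; the agents carried the whole interaction, which is precisely the reduction in consumer effort the A2A metaphor promises.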
Consumer-oriented robots are a great opportunity for introducing software agent technology to the consumer market. To date, low-end robots have been quite primitive and hardly better than toys, but the potential is certainly there.
Mass customization is a business strategy that aims at producing goods and services for the needs of individual consumers, while achieving economies of scale in operations. Personalization is but one aspect of this customization. Software agent technology is the best-positioned technology to pursue both personalization and customization of services to meet the needs, goals, and desires of producers, distributors, and consumers of services.
Blogging is a fairly recent phenomenon, but shows a lot of promise for interaction among consumers. Unfortunately, blogging is a bit too tedious and uncomfortable for many people. Once again software agent technology can come to the rescue. Software agents can be pre-programmed with a deep enough knowledge of the blogosphere and the consumer's knowledgebase to greatly facilitate the consumer's experience with blogging.
Blogs are a fairly primitive, but semi-structured form of knowledge. Software agents can help to link the information in blogs back to the more structured consumer knowledge base.
Many blogging events are in fact fairly predictable and driven by the nature of the consumer's behavior patterns. Rather than the consumer needing to manually take the step to create a new blog post, software agent technology can be applied to automatically perform blog posts on behalf of the consumer. Such auto-blogging can dramatically simplify the consumer's online life. In some cases the consumer may wish to have full control, but other times it may be simpler, more convenient, and more comfortable for the consumer to put the auto-blogger agents on auto-pilot. In any case, the consumer is always in control.
Mobile-phone applications are an excellent area for the use of software agent technology. Given the device's limited user interface and the consumer's limited attention span, it makes perfect sense to have network-based software agents off pursuing goals for the consumer, especially while the consumer is not connected.
Consumers have great difficulty being precise and specific in expressing their needs. Traditional computer software has worked well to the extent that users provide precise input. Fuzzy logic is a concept from philosophy and artificial intelligence that explicitly addresses the inherent difficulties of insisting on precise specifications. Software agents have a great opportunity here to introduce the concept of fuzzy logic into the mainstream so that consumers can focus on expressing what they know, regardless of how imprecise their knowledge may be. Many applications can work best when organized as journeys of discovery rather than starting with a presumption of a single, direct path.
Put simply: if a piece of computer software does not support fuzzy logic, then it's not likely to be an intelligent software agent.
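A toy illustration of the fuzzy-logic point above: rather than forcing the consumer to state a precise price cutoff, a membership function grades how well each option satisfies a vague preference like "cheap". The function, thresholds, and hotel names are all invented for illustration; real fuzzy systems combine many such membership functions with fuzzy rules.

```python
# Fuzzy membership sketch: grade options against an imprecise preference
# instead of applying a hard cutoff.

def cheapness(price: float) -> float:
    """Degree (0..1) to which a hotel price counts as 'cheap'."""
    if price <= 50:
        return 1.0
    if price >= 150:
        return 0.0
    return (150 - price) / 100       # linear falloff between the two anchors

hotels = {"Hostel A": 40, "Inn B": 90, "Grand C": 200}
ranked = sorted(hotels, key=lambda h: cheapness(hotels[h]), reverse=True)
print(ranked)      # ordered by degree of membership, not excluded by a cutoff
```

Note that "Inn B" is neither cheap nor expensive; a crisp cutoff would arbitrarily include or exclude it, while the fuzzy grade keeps it in play at an intermediate rank.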
In traditional software each application needs to explicitly access any information which may have changed. An alternative is what is called constraint management, which allows applications that use information to declare their needs and then an intelligent infrastructure registers those needs so that the application will be automatically alerted when any of the needed information changes.
It can be very tedious and error-prone for applications to keep up with changing information. And that's for information sources that are known in advance in detail to the application developers. Constraint management can automate that process.
In addition, an application can register its interests in whole classes of information so that new streams of information can be readily accessed as they come into existence. Constraint management can empower application developers to focus on the functions they wish to perform, while the infrastructure takes care of managing information streams and automatically invokes application functions as declared by the developer.
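The constraint-management idea in the preceding paragraphs can be sketched as a simple registration-and-notification service: applications declare which information they depend on, and the infrastructure invokes their callbacks when that information changes. The names here are hypothetical, and registration for whole classes of information (rather than exact keys) is omitted for brevity.

```python
# Constraint-management sketch: declared needs replace polling; the
# infrastructure alerts each dependent application on change.

from collections import defaultdict

class ConstraintManager:
    def __init__(self):
        self.watchers = defaultdict(list)   # info key -> interested callbacks
        self.values = {}

    def declare_need(self, key, callback):
        """An application registers its need once; no polling required."""
        self.watchers[key].append(callback)

    def update(self, key, value):
        self.values[key] = value
        for cb in self.watchers[key]:       # alert every declared dependent
            cb(key, value)

mgr = ConstraintManager()
log = []
mgr.declare_need("flight.price", lambda k, v: log.append((k, v)))
mgr.update("flight.price", 315)
mgr.update("hotel.rate", 120)               # nobody declared this; no alert
print(log)                                  # [('flight.price', 315)]
```

The application developer wrote only the callback; keeping up with the changing information source became the infrastructure's job, which is exactly the tedium the paragraph says constraint management removes.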
As the interacting communities of software agents become larger in size and the interactions more complex and competitive, we will need to consider the psychological aspects of agent interactions. Software agents will need strategies for coping with complex social interactions, and will need to consider the social aspects of interacting with consumers themselves. And, software agents will need to consider the psychological impacts of their actions on the consumers for whom interacting agents are acting. Lots of fertile research ground here.
Knowledge and knowledge flows are just as susceptible to spam as is traditional email. Software agents can of course mediate and reduce the flow of knowledge spam. In addition to outright spam (e.g., unwanted commercial messages), users can also be bombarded with legitimate knowledge that merely happens to be either outright useless to the user or irrelevant to the task and goals at hand. Software agents, with their knowledge of the needs and interests of the consumer, can once again mediate to assure a useful flow of knowledge.
Deep knowledge of the consumer won't be permissible until we have a rich enough identity meta-model which will robustly prevent fraud and other mischief related to attempts by malicious parties to misrepresent their identities. On the other hand, software agents need to cope with consumers who wish to protect their anonymity.
We need a rich identity infrastructure, not as a monolithic, centralized system, but as a distributed system that protects all consumers as well as all vendors.
We need rich selective disclosure mechanisms so that applications can gain access to information needed to optimize personalization of services, but also that limits access so that privacy and anonymity are also protected.
Consumers need repositories or "banks" for their personal information, places where the information can be protected by third-parties that have no vested interest in applications that the consumer may wish to interact with. Consumers can then authorize their chosen "identity banks" to disclose only as much of their information as they want disclosed and only to those parties that they authorize. The identity bank also provides a mechanism for vendors to verify or access personal information as needed and as authorized by the consumer.
Having a rich identity mechanism is essential to this process.
The validity of a digital identity does not guarantee that this electronic identity really does match up with a specific real-world identity. Synchronizing the online digital world and the offline real world is an unsolved problem.
Identity theft has certainly gotten a lot of publicity and much work has been done to mitigate it, but it remains an unsolved problem.
There really are four discrete problems: 1) real-world identity theft within the real world, 2) online digital identity theft within the online digital world, 3) misuse of a real-world identity in the online digital world, and 4) misuse of an online digital identity in the real world. Any particular solution may address one or more of the four problems, but a successful solution to one problem does not guarantee a successful solution to the other three problems.
One approach to managing the personal information about a consumer that relates to their identity is the concept of an identity union. Previously, I've written about a related concept called a Data Union, which is essentially a "bank" where consumers can voluntarily "deposit" personal information that can then be selectively provided to vendors and other consumers with a high level of confidence on the part of all parties. The word "union" is used here in the sense of a consumer "credit union", a place where consumers feel comfortable placing and discussing their financial affairs.
So, the concept of an identity union is that the consumer can place any amount of personal information "on deposit" at one or more "identity unions" of their own choice (or subject to criteria of their own choice), and then the consumer and their agents (e.g., software agents) can grant access to selective amounts of information to vendors and other consumers as they see fit, with full confidence that nobody will be given information which they are not authorized by the consumer (or their agents) to receive.
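A minimal sketch of the identity union's selective-disclosure mechanics: the consumer deposits personal fields once, then grants each party access to only a named subset; any other request is refused. The `IdentityUnion` class and its fields are illustrative only; a real identity union would add verification, auditing, and revocation.

```python
# Identity-union sketch: per-party grants over deposited personal information;
# unauthorized requests are refused outright.

class IdentityUnion:
    def __init__(self, deposit: dict):
        self.deposit = deposit
        self.grants = {}                     # party -> allowed field names

    def grant(self, party, fields):
        """The consumer (or their agent) authorizes a specific disclosure."""
        self.grants[party] = set(fields)

    def request(self, party, field):
        if field in self.grants.get(party, set()):
            return self.deposit[field]
        raise PermissionError(f"{party} is not authorized for {field}")

u = IdentityUnion({"name": "Pat", "age": 34, "blood_type": "O+"})
u.grant("bookstore", ["name"])               # the vendor needs no more than this
print(u.request("bookstore", "name"))
# u.request("bookstore", "age") would raise PermissionError
```

The design choice worth noting is that the default is denial: a vendor sees nothing the consumer has not explicitly granted, which is the reverse of today's over-collection norm.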
Identity details can include real-world information about the individual, including photos, fingerprints, blood type, DNA details, etc.
An identity union would ideally have a real-world location where consumer information can be verified by people and equipment as opposed to being whatever anybody might upload on a public network.
An identity union would have a reputation, auditing procedures, training protocols, etc. so that both the consumer and authorized users of the identity union can have very high confidence in the validity of the consumer's identity.
Privacy is an ongoing struggle.
Although software agents need even more details about our personal lives, the real opportunity is that by shifting personal information into agents, we have a better chance of minimizing the amount of personal information that is needed or captured by businesses and governmental entities.
Much work is needed in this area.
Security is and will always be a problem, but more so as we broaden the scope of applications, broaden the audience of users, and add such wide-ranging infrastructure that there are an astronomical number of points of potential vulnerability. Much research is needed, but software agent technology can be of great assistance, both in monitoring and enforcing security constraints, and in facilitating interactions in a way that leads to severely narrowed opportunities for security breaches.
With visions of Big Brother from George Orwell's 1984, it will be essential to craft a computing infrastructure which minimizes the likelihood that an intrusive government would get any unnecessary access to the personal information of consumers. Decentralized computing as epitomized by autonomous software agent technology is a very appealing approach to deter Big Brother.
As much as we would like consumers to have absolute control of their lives and their data, there are legitimate law enforcement interests that may require gaining access to consumer data. How to do that in a way that doesn't give law enforcement authorities total, unfettered access is an open research question, but distributed, autonomous software agent technology coupled with robust access control mechanisms would seem to be an appropriate approach to pursue.
Terrorists will always seek to exploit technology which enables them to communicate in ways that are less-likely to be detected by law enforcement authorities. Nonetheless, it will be important to have sufficiently robust safeguard mechanisms so that terrorist activities can be detected and reported to the appropriate authorities. Software agents can at a minimum provide a robust monitoring mechanism.
Information infrastructure, both hardware and software, is a plausible target at times of war, including terrorist attacks. Therefore, it is critical that our computing infrastructure be robust enough to deter and mitigate any negative consequences of information warfare. Software agent technology can play a role, including monitoring and intervention. Further, the distributed nature of software agent technology tends to assure that applications, services, and data are less susceptible to attack, or at least that consequences are less likely to spread.
The flip side is that software agents could be utilized to engage in offensive information warfare. The good news is that the level of infrastructure needed to support advanced software agent technology will inherently make it likely that safeguard checks will detect attempted information warfare attacks.
Nonetheless, much research is needed in this area.
Autonomy is an extremely important quality for software agents, but it presents many difficulties and can be quite dangerous (like fire) unless managed properly.
I have identified a number of levels of autonomy:
Levels 6 through 8 are forms of autonomy not normally associated with agency.
Levels 4 and 5 are primitive forms of autonomy associated with agency.
Levels 1, 2, and 3 are the general target for the application of software agent technology.
Level 0 in fact may have the highest potential value, but is also the riskiest and most difficult to achieve.
Consumer-oriented software agents will need significant awareness of the social fabric of which the consumer is a part, including:
Knowledge of the consumer's relationships can dramatically enhance a software agent's ability to support the consumer.
Existing knowledge management tools are oriented towards professionals, rather than the needs of consumers. A significant level of skill, aptitude, training, and patience is needed to engineer knowledge in existing systems. Even then, the encoded knowledge is not up to the level of depth envisioned here. Beyond all of that, one key distinction is that consumers are not working on behalf of some organization which dictates a framework, but have their own open-ended interests at heart. Tools for the consumer-centric knowledge web must be consumer-centric and recognize that the user of the tools is the focus of the knowledge to be managed. The tools need to take into account the fact that the consumer lacks a feel for the underlying difficulties of knowledge management. More importantly, the tools need to be built based on the understanding that the user, the consumer, is not merely managing knowledge, but in fact is frequently creating new knowledge that may not even fit into any existing structure. Lots of research needed here.
The consumer-centric model espoused in this paper dictates that consumers are never required to accede to the demands of any entity that the consumer "trust us." Rather, the consumer and their software agents will always be in a position to say no to requests for trust and always be free to take steps to validate the trustworthiness of any entity before agreeing to interact with that entity. Key to ensuring that no consumer is ever placed in a position where trust is forced, the knowledge infrastructure of the consumer-centric knowledge web must be distributed in such a way that no vendors are in a position to act as "trust us" gatekeepers.
A lot of the thinking about software agent interfaces has focused on trying to make the interface human-like, such as synthetic characters. Although this approach makes sense in a lot of cases, the primary focus should be on eliminating the human-agent interface entirely and using an implicit interface or an inferred interface, where the software agents are interfacing with the knowledgebase of the consumer rather than the consumer themselves.
Even in cases where a software agent does need to communicate directly with a consumer, the interface should be one that makes sense and works effectively, regardless of whether it is human-like or not. For example, you might engage a software travel agent in an email conversation (much like the one I had with my real travel agent two weeks ago).
I am not arguing that consumers should be confronted with computer-like interfaces at all times, but simply that we should constantly be looking for interfaces that transcend both traditional computer and human interfaces, where it makes sense.
The really important concept is that software agents communicate in a rich but abstract messaging format that can be translated by a user interface layer into the preferred form of communication for the individual consumer.
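The abstract-messaging concept above can be sketched as a message structure plus per-consumer renderers: agents exchange the same rich message, and a user-interface layer translates it into each consumer's preferred medium. The message fields and renderer names are invented for illustration.

```python
# Sketch: one abstract agent message, rendered per the consumer's preference
# by a thin user-interface translation layer.

message = {"type": "flight_delay",
           "flight": "UA123",
           "new_departure": "18:40"}

def render_email(msg):
    return (f"Subject: Flight {msg['flight']} delayed\n"
            f"Your flight now departs at {msg['new_departure']}.")

def render_sms(msg):
    return f"{msg['flight']} delayed; departs {msg['new_departure']}"

RENDERERS = {"email": render_email, "sms": render_sms}

def deliver(msg, preference):
    return RENDERERS[preference](msg)        # same message, per-consumer form

print(deliver(message, "sms"))
```

Because the agents never commit to a presentation, adding a new medium (voice, a synthetic character, a dashboard) means adding a renderer, not changing any agent.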
The legal mine field of software patents is immensely significant to the emerging field of software agent technology.
Sometimes, patents are used to attempt to control a sub-sector of the economy and to preclude new entrants.
Other times, the economic power of patents can act as an economic incentive to spur innovation and investment in an area.
One of the keys is to seek to evolve the relevant markets in such a way that patents tend to apply to infrastructure vendors who can readily afford to license patents, but that application developers can freely innovate and develop applications without the burden of worrying about patent licensing or potential infringement. Essentially, we need to have open, "free enterprise" zones with regard to intellectual property so that innovation and business development can occur at a healthy and rapid pace.
If a computational entity such as a software agent truly is given a sense of agency related to a legal entity, such as a person or real-world organization, then in theory that software agent would become an entity of interest to the law, governments, and the courts.
There will be many different possibilities for specific architectures for consumer applications that use software agent technology, but here are some of the elements that are likely to be of high value:
As usual with science and reality, representations of theoretical science tend to pop up in science fiction before the science becomes a reality. This has already been proven to be true with software agent technology.
All of these synthetic characters have captivated readers and viewers, but there are some problems:
Much more research is needed.
Much more lab-bench trial and error experimentation is needed.
People need to identify key tasks or goals that they desperately want and need to have automated.
Users need to be provided with preliminary software which allows them to begin to get comfortable with building up a personal knowledgebase that can be used by software agents.
Dream on! Both literally and figuratively. Given the vast amount of research and infrastructure development needed for this ambitious vision of exploiting the power of software agent technology for consumer applications, it's way too early to be thinking about a concrete "plan" for implementing the full vision.
By all means, the research agenda should be pushed as hard as possible. There's lots of dreaming to do there.
Occasionally, some dreamers will in fact attempt to implement pieces of their dreams, and on occasion they will even succeed. Over time, we will slowly creep up the side of the mountain, but rarely will any single innovation or even collection of innovations take us more than a small distance towards the summit. Only over extended periods of time will we see macro-level progress, which is the sum-total of the many efforts of many individuals and many teams.
There is no fixed plan and there cannot be. We need to be opportunistic and exploit possibilities as rapidly as we become aware of them, while simultaneously always dreaming of the next big quantum leap.
So, dream on!
TBD: A real roadmap with milestones.
Seriously, there is a lot of work to do and it cannot be done all at once in parallel. Much additional attention needs to be given to deciding which corners or niches of consumer applications of software agent technology will do the best job of getting the ball rolling.
A lot of infrastructure is needed. On the other hand, research on many of the higher-level capabilities can be performed with far less than a complete implementation of the lower-level infrastructure.
The distributed knowledge infrastructure deserves a lot of early attention. How does a user create new knowledge and put it out on the Consumer-Centric Knowledge Web? Shifting away from vendor-controlled servers is an interesting problem.
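One way to escape vendor-controlled servers is to make each knowledge item self-describing and content-addressed, so any peer can replicate and verify it without trusting a central host. The sketch below is purely illustrative; the item structure and function names are invented assumptions, not a defined format for the Consumer-Centric Knowledge Web.

```python
import hashlib
import json

# Hypothetical sketch: a user-created knowledge item is canonical JSON
# addressed by the SHA-256 hash of its content. Any peer holding a copy
# can verify its integrity with no vendor-controlled server involved.

def make_knowledge_item(author: str, statement: str, tags: list) -> dict:
    """Create a content-addressed knowledge item."""
    body = {"author": author, "statement": statement, "tags": sorted(tags)}
    canonical = json.dumps(body, sort_keys=True).encode("utf-8")
    return {"id": hashlib.sha256(canonical).hexdigest(), **body}

def verify(item: dict) -> bool:
    """Recompute the content hash and check it matches the item's id."""
    body = {k: v for k, v in item.items() if k != "id"}
    canonical = json.dumps(body, sort_keys=True).encode("utf-8")
    return item["id"] == hashlib.sha256(canonical).hexdigest()

item = make_knowledge_item("alice", "Flight BA-212 departs at 18:05",
                           ["travel", "schedule"])
print(verify(item))  # any peer can check the item was not altered
```

Because the identifier is derived from the content itself, a tampered copy fails verification immediately, which is the basic property a decentralized knowledge store would need before adding richer layers such as signatures or provenance.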
A robust implementation of the Distributed Virtual Personal Computer (DVPC) would get a lot of balls rolling.
There have been any number of conference papers, project descriptions, trade media articles, and even general media articles pontificating on the wonderful future of intelligent agents that's always "just around the corner", but somehow those corners are far more difficult to negotiate than we can ever seem to grasp. Each of these articles should have a variation of the standard passenger-side car mirror warning: "Objects are further than they appear."
I've tried to find books related to the use of software agent technology for consumer applications, but they're limited to the primitive existing applications I've listed at the beginning. There are plenty of books relating to industrial applications (see my list). Sad to say, the only books espousing an advanced vision as envisioned here are the works of fiction that I've listed.
aire (Agent-based Intelligent Reactive Environments) - An MIT CSAIL project dedicated to examining how to design pervasive computing systems and applications for people. To study this, aire designs and constructs Intelligent Environments (IEs), which are spaces augmented with basic perceptual sensing, speech recognition, and distributed agent logic. aire's IEs have encompassed a large range of form factors and sizes, from a pocket-sized computer up to networks of conference rooms. Each of these serves as an individual platform, or airespace, on which pervasive computing applications can be layered. Examples of aire applications currently under development include a meeting manager and capture application, contextual and natural language information retrieval, and a sketch interpretation system (developed by the Design Rationale Group).
Project Oxygen - MIT's pervasive computing project.
FRODO ("A Framework for Distributed Organizational Memories") - a project focused on methods and tools for building and maintaining Distributed Organizational Memories (DOMs) in a real-world enterprise environment. It is a successor project of the DFKI KnowMore and VirtualOffice projects. The technical approach is based upon an application-driven combination of techniques from: agents for workflow enactment and information access, ontology acquisition from texts and user interaction, and document analysis and understanding.
Updated: May 04, 2006 02:32:28 AM -0400
Copyright © 2006 John W. Krupansky d/b/a Base Technology