By Jack Krupansky
These are simply random thoughts that popped into my head and have not yet been thought out in enough detail to be integrated into more formal sections.
February 19, 2005
Three-Level Agent Interaction Negotiation and Connection
In order to maximize the flexibility and robustness of agent-to-agent negotiation and binding, I propose a three-level scheme.
Level One is the level of the agents themselves.
Level Two is the level of intermediaries that are able to work with agents as their clients. Each agent would have some number of Level Two intermediary agents with whom it has established a level of trust and with whom it is willing to work.
Level Three is the level of intermediaries for the intermediaries. This is the level at which "first contact" occurs between two agents. Each Level Two intermediary agent has some number of intermediary-to-intermediary agents (I2I agents) with whom it has established a level of trust and with whom it is willing to work.
Level One agents offering services would "advertise" to their Level Two intermediary agents who in turn advertise to their Level Three intermediary-to-intermediary agents who keep track of those advertised services.
Level One agents seeking services would notify their trusted Level Two intermediary agents of their interest. Those trusted Level Two intermediary agents would in turn notify their trusted Level Three intermediary-to-intermediary agents of the services that their Level One client agent seeks. Each Level Three agent would query its catalog of advertised services and proceed to competitively negotiate a "connection" (interaction contract). One or more I2I agents would become "primary contractors" and others might become "backup contractors".
Two Level One agents would never interact purely in a direct manner, but rather through their respective Level Two intermediaries. If a connection is disrupted, the Level Two intermediaries would then seek to "fix" the disruption. The fix may in fact require negotiating a new agent-to-agent connection. The Level One agents would be notified of all disruptions using an object-oriented event notification and given the opportunity to continue over a fail-safe new connection or to abort the connection if appropriate. The Level One agents could be configured to blindly accept all re-negotiated connections. In other words, the developer of a Level One agent would never need to "worry" about the robustness of any connection. In fact, the whole point of the three-level arrangement is to maximize the odds of a successful connection and to maximize the odds that a connection can be renegotiated if disrupted.
Level Two intermediary agents may also seek to re-negotiate a connection based on performance. In fact, a host system might signal intermediary agents to downgrade or upgrade connectivity based on load measurements.
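As a rough sketch of how the three levels might hang together, the following Python fragment wires up a Level One provider and consumer through Level Two intermediaries and a shared Level Three I2I broker. All of the class and method names (I2IBroker, Intermediary, Agent, advertise, seek, and so on) are invented for illustration, not drawn from any existing framework, and the "competitive negotiation" is reduced to simply picking the first offer.

# Minimal sketch of the three-level advertise/seek flow described above.
# All names (Agent, Intermediary, I2IBroker, etc.) are hypothetical.

class I2IBroker:                          # Level Three: intermediary-to-intermediary agent
    def __init__(self):
        self.catalog = {}                 # service name -> list of offering intermediaries

    def advertise(self, service, intermediary):
        self.catalog.setdefault(service, []).append(intermediary)

    def negotiate(self, service):
        # "Competitive negotiation" reduced to picking the first offer; the
        # remaining offers become backup contractors.
        offers = self.catalog.get(service, [])
        return (offers[0], offers[1:]) if offers else (None, [])

class Intermediary:                       # Level Two: trusted intermediary agent
    def __init__(self, brokers):
        self.brokers = brokers            # trusted Level Three I2I agents

    def advertise(self, service):
        for broker in self.brokers:
            broker.advertise(service, self)

    def seek(self, service):
        for broker in self.brokers:
            primary, backups = broker.negotiate(service)
            if primary is not None:
                return primary, backups
        return None, []

class Agent:                              # Level One: the agents themselves
    def __init__(self, intermediaries):
        self.intermediaries = intermediaries   # trusted Level Two agents

    def offer(self, service):
        for mid in self.intermediaries:
            mid.advertise(service)

    def request(self, service):
        for mid in self.intermediaries:
            primary, backups = mid.seek(service)
            if primary is not None:
                return primary, backups
        return None, []

# A provider advertises a service; a consumer's request is resolved through its
# Level Two intermediary and the shared Level Three broker.
broker = I2IBroker()
provider_mid, consumer_mid = Intermediary([broker]), Intermediary([broker])
provider, consumer = Agent([provider_mid]), Agent([consumer_mid])
provider.offer("currency-conversion")
primary, backups = consumer.request("currency-conversion")
print(primary is provider_mid, len(backups))   # True 0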
February 2, 2005
Negotiation and Coupling and Contracts
There needs to be a better set of principles for evaluating the relationship between the concepts of negotiation and coupling. Nominally, the two have been distinct concepts, but with autonomous agents in a dynamic environment, the distinction blurs or disappears entirely. Given the ephemeral nature of any coupling of entities in a networked environment, identifying and connecting two entities will by nature require some amount of negotiation. The complexity of the environment, the connection, and the negotiations strongly suggests the importance of contracts between the entities to establish the basis for discussions about any proposed, actual, or prior coupling.
One element of such a contract is "recourse", what happens when one party can't fulfill their side of the bargain. Any number of strategies can be considered. One that will hopefully see wide application is that of substitution, assigning a substitute or "successor" entity that can transparently fill in, with automatic redirection of the coupling, so that there is as little interruption of service as possible.
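As an illustrative sketch, a coupling contract with a "recourse" clause might look something like the following, assuming a simple successor-substitution strategy; the Contract class and its fulfil method are hypothetical names, not any established API.

# Sketch of a coupling contract with a "recourse" clause, assuming a simple
# successor-substitution strategy. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Contract:
    provider: Callable[[], str]                     # the entity obligated to deliver
    successor: Optional[Callable[[], str]] = None   # recourse: who fills in on failure

    def fulfil(self) -> str:
        try:
            return self.provider()
        except Exception:
            if self.successor is not None:
                # Transparent redirection of the coupling to the successor,
                # so there is as little interruption of service as possible.
                return self.successor()
            raise                                    # no recourse: surface the failure

def flaky_provider() -> str:
    raise RuntimeError("provider cannot fulfill its side of the bargain")

def backup_provider() -> str:
    return "result from substitute entity"

contract = Contract(provider=flaky_provider, successor=backup_provider)
print(contract.fulfil())    # result from substitute entity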
Binding: Coupling, Decoupling, and Re-coupling
In traditional software we classically seek to establish a binding between two entities as early as possible and as robustly as possible in order to guarantee the success of the task as quickly and durably as possible.
But in the world of autonomous agents, binding takes on an entirely different character. The task is less well-defined, more dynamic, more uncertain, and more open to emerging opportunities. A local optimization (early binding) may not be optimal compared to a continually dynamic optimization process. Even the loose or late binding of object-oriented programming and type-less languages (e.g., LISP) is too fragile for autonomous agency.
Call it "binding on demand". Binding, or coupling, can happen at any time and in a manner that is more of a global optimization, with probability and statistics more at play than a simple mechanical demand. Agents should be prepared for coupling to be intermittent, with periodic decoupling and potential re-coupling (with either the original entity or a surrogate or replacement) considered the norm rather than an exception.
Of course, the agent infrastructure should handle much of the housekeeping for such "dynamic coupling", so that developers can focus on the computation rather than the logistics of the computation's components.
In some cases, the infrastructure may pause the computation to await availability of a target entity. In other cases, an alternate, replacement entity will be requisitioned. And in some rare cases an exception will occur that the developer must handle, but that should be considered an undesirable and "fragile" development technique. In some architectures, the handling of coupling events may in fact be handled by other entities that perform application-specific actions.
In some cases, an external entity or the agent infrastructure may actively intervene and force a substitute entity coupling to satisfy an optimization at a higher level. For example, the agent may be migrated to be closer to resources that it is accessing frequently. In any case, the net result is that the original agent can continue working in an optimal manner without excessive attention to error-prone optimization within that agent itself. The external overhead can in fact be amortized across an entire population of agents.
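A rough sketch of "binding on demand" might look like the following, where the coupling is resolved lazily at each use and quietly re-resolved against a surrogate if the bound entity has gone away. The Registry and Binding classes and their methods are purely illustrative assumptions.

# Sketch of "binding on demand": the coupling is resolved lazily at each call,
# and re-resolved if the bound entity has gone away. Names are illustrative.

class Registry:
    """Stand-in for the agent infrastructure's directory of live entities."""
    def __init__(self):
        self.entities = {}                      # role -> currently available entity

    def register(self, role, entity):
        self.entities[role] = entity

    def lookup(self, role):
        entity = self.entities.get(role)
        return entity if entity is not None and entity.alive else None

class Binding:
    def __init__(self, registry, role):
        self.registry, self.role, self.target = registry, role, None

    def call(self, request):
        # Decoupling is the norm, not an exception: if the target is gone,
        # re-couple on demand rather than failing the whole computation.
        if self.target is None or not self.target.alive:
            self.target = self.registry.lookup(self.role)
            if self.target is None:
                raise RuntimeError("no entity available; infrastructure might pause us here")
        return self.target.handle(request)

class Converter:
    def __init__(self, name):
        self.name, self.alive = name, True
    def handle(self, request):
        return f"{self.name} handled {request!r}"

registry = Registry()
primary = Converter("primary")
registry.register("converter", primary)
binding = Binding(registry, "converter")
print(binding.call("convert 42 USD"))         # handled by the original entity
primary.alive = False                         # the original entity decouples...
registry.register("converter", Converter("surrogate"))
print(binding.call("convert 43 USD"))         # ...and the binding re-couples to a surrogate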
January 1, 2005
Interaction of software agents raises the question of the extent to which the state of the interacting agents becomes intertwined.
Cloning a software agent also raises entanglement issues since the cloned agents initially share all or at least some part of their state.
When large numbers of software agents are interacting or working as a partially-coordinated system of agents, it is no longer practical to deal with the totality of the agents in a discrete manner. Statistical methods have some potential. In fact, statements about the state of the entire agent system may be possible only on a statistical basis.
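As a toy illustration of that statistical view, one might sample a large agent population and report aggregate state rather than enumerating every agent; the field names here are arbitrary.

# Toy illustration: characterize a large agent population statistically by
# sampling, rather than enumerating every agent discretely. Names are arbitrary.
import random
from statistics import mean

population = [{"id": i, "busy": random.random() < 0.4} for i in range(100_000)]

sample = random.sample(population, 1_000)        # sample instead of walking all agents
busy_fraction = mean(1.0 if agent["busy"] else 0.0 for agent in sample)
print(f"estimated fraction of busy agents: {busy_fraction:.2f}")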
Agent System Infrastructure Architecture
The following infrastructure capabilities may be needed to properly support the deployment of an agent system:
September 21, 2004
Advisors and Counselors for Software Agents
Maybe software agents need to be designed on the assumption that they must continually respond to one or more "advisor" or "counselor" input channels from advisory entities which monitor the agents and the environment and offer high-level advice that advised agents can then factor into decisions. This is not "control" of the agent, or at least not "hard control", but simply emphasizes that agents are part of a loose confederacy with complex rules; that complexity, coupled with the complexity of the dynamic environment, makes it difficult for an agent to be "all-knowing" without "advice". This is similar to the real-world role of lawyers and consultants.
An agent may also commission its own sub-agents to go out and gather information and in turn offer advice, but the goal is to have at least some advice which can be depended on even if the agent does not have its own advisory "staff" of sufficiently high quality.
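A minimal sketch of an "advised" agent might look like the following, where advisors can tilt a decision but never dictate it. The Advisor and AdvisedAgent classes, and the simple weighted-score scheme, are invented purely for illustration.

# Sketch of an agent consuming "advice" channels: advisors can tilt a decision
# but never dictate it. The weighting scheme and all names are illustrative.

class Advisor:
    def __init__(self, name, bias):
        self.name, self.bias = name, bias
    def advise(self, option):
        # Returns a score adjustment for the option; this is advice, not control.
        return self.bias.get(option, 0.0)

class AdvisedAgent:
    def __init__(self, advisors, advice_weight=0.3):
        self.advisors = advisors
        self.advice_weight = advice_weight   # how much weight advice carries

    def decide(self, options, own_scores):
        best, best_score = None, float("-inf")
        for option in options:
            advice = sum(a.advise(option) for a in self.advisors)
            score = own_scores[option] + self.advice_weight * advice
            if score > best_score:
                best, best_score = option, score
        return best

legal = Advisor("legal-counsel", {"risky-shortcut": -5.0})
market = Advisor("market-watcher", {"negotiate-now": +2.0})
agent = AdvisedAgent([legal, market])
choice = agent.decide(["risky-shortcut", "negotiate-now"],
                      {"risky-shortcut": 1.2, "negotiate-now": 1.0})
print(choice)   # negotiate-now: advice tipped the decision without controlling it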
September 18, 2004
Evolution to Evolutionary Programming
I am not convinced that there is any pure form of evolutionary programming. Rather, there are a number of characteristics and levels through which we may evolve towards higher levels or forms of evolutionary programming. Here are some of the levels that we have used traditionally and may evolve through in the future.
September 6, 2004
Agent Interaction Languages and Agent Interaction Machines
In automata theory we have the paired concepts of machines and the languages that those machines recognize. If an agent is a 'machine', then we ask what 'language' that machine recognizes. As a starting point, we have a choice of defining 'agent machines' (or agent interaction machine) and then proceeding to determine what languages an agent machine would recognize (agent interaction language), or we can define agent interaction languages and then proceed to determine the characteristics of the machines (automata) needed to recognize those languages.
This is an open problem. The central issue is the concept of 'agency' and what it means to 'compute' or support that concept. In other words, we do need to be able to construct 'tests' for agency and possibly have a spectrum of levels of agency, or at least the two concepts of 'strong agency' and 'weak agency'.
September 6, 2004
Program vs. Process
I had been thinking of a software agent as a specialized form of computer program, but then I realized that there is a distinction between a computer program and a process. The program is a static definition, but the process is the program in action. The fact that a given program is a software agent can only really be determined by observing the program in action (the process) and testing and analyzing that behavior to determine that the process does in fact exhibit a significant and sufficient subset of the capabilities of software agency.
Presumably a program could have the logic needed for a software agent, but have bugs or options that result in the suppression of the behavior necessary for a software agent. So if the process does not "look" like an agent and quack like an agent, then how could it be an agent?
Still, I'm led back to concluding that a software agent is still a computer program and we distinguish between the definition of the software agent (the program) and the running software agent (the process).
March 22, 2004
Is a Web Service a Software Agent?
With the advent of Microsoft's push into "web services" (not intending to slight efforts by Sun, Netscape, and others), the question is whether software with a "web service" interface constitutes a software agent. My short answer: usually not, because there is usually not a user who has initiated the web service and on whose behalf (and under whose control) the web service operates. In fact, programs that communicate with a web service are more likely to be software agents (although not necessarily since a lot of web services will be directly communicating with other web services). My thought is that many web services are simply another layer of front end on top of what would otherwise simply be a server-based application server. Just because a program runs in the background unattended by a user does not make it an agent. Classically we called them daemons. They are a valuable capability, but are typically so controlled and situated on a server (under the eagle eye of a "system administrator" or operator) that it is difficult to suggest that they have autonomy.
March 22, 2004
Is it an Agent or merely Agent-like?
My core definition of a software agent is that it is a computer program that embodies the concepts or characteristics of software agency. But, there is the matter of degree. A computer program might use agent-like features in a very minimal manner (such as an email alert), but otherwise not look like an agent at all. I would call that an agent-like program, but not an agent. A computer program whose primary interface to the outside world consists of agent-like features would clearly be justified as being called a full-blown software agent. In between is a vast gray area. I would counsel that if there is any doubt at all or if the program primarily has a non-agent interface, then it probably shouldn't be classified as a software agent.
There is a related question of whether a particular "feature" of an application is a software agent. For example, if a server-based application has a separate background task whose sole responsibility is to monitor data conditions and then send out email alerts to users who have registered for these alerts, isn't that specific task much more of a software agent? My short answer: No, because that program is running under the direction and control of the application server, not any specific user. The marketing guys may refer to the alerting features as an "agent", but that's a loose usage, not proper technical terminology.
March 22, 2004
Alerts versus Agents
The concept of a software alert has become quite popular these days, especially with the advent of the web. The question is whether an alert constitutes an agent. My short answer: No, but it is a form of agency and a computer program that supports a form of agency can at least claim to be agent-like.
That said, the addition of agent-like capabilities to traditional software systems may in fact be an interesting back-door approach to enabling software agent technology to filter into the mainstream.
But I would surmise that at some point people will realize that software architectures that are based on retrofitting of software agent technology are distinctly sub-optimal and at that stage people will become more open to looking at how to architect software based more on a pure agent paradigm (e.g., replace the monolithic app with an "armada" of agents which work independently but collectively accomplish the intended function of the original app). But I don't expect that to happen any time soon (i.e., not in the next two or three years). Meanwhile, people will focus on .NET, Java, and ad-hoc architectures.
February 15, 2004
Agents that Learn vs. Agents for Learning
It is widely accepted that advanced software agents should have the capability of learning or self-training so that they can deal with the ever-changing dynamics of the real world without cumbersome manual re-programming. There is also a growing interest in eLearning or what in the old days was called Computer-Aided Instruction (CAI), basically using computer programs to assist real people in learning and training. The question is whether these two sub-fields are mutually exclusive or whether there is some potential synergism and even some software that could be shared.
To be sure, agent learning and eLearning have different goals (discovering knowledge and converting it into software versus imparting existing knowledge to a person). It would seem that knowledge is knowledge and that there should be some commonality, but that conjecture remains to be validated. Maybe there is a kind of knowledge that only agents can utilize or that only people can utilize, but that's another conjecture that remains to be validated.
There are three distinct forms of agent learning. First, an agent can observe its environment and deduce knowledge which it can then apply to future situations. Second, a knowledge base could be produced (either by people, non-agent software, or other agents, or some combination of the above) and the learning agent is simply searching and filtering that existing knowledge to build up a new mental model which can be incorporated into the agent. Third, agents can share knowledge, directly or indirectly, because the agents may have common code or a common knowledge interface that would enable a "knowledge implant". It would seem that the process of producing that knowledge base would be somewhat similar to producing a knowledge base suitable for an automated eLearning system. Of course, we do need to recognize that much of the substance of an eLearning system concerns itself with how the knowledge is presented.
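A toy sketch of the third form, a common knowledge interface permitting a "knowledge implant" (with an optional transform for compatibility), might look like the following; the KnowledgeBase class and its methods are hypothetical names, not an existing library.

# Sketch of a common knowledge interface that would allow a "knowledge implant"
# between agents, per the third form of learning above. Names are illustrative.

class KnowledgeBase:
    def __init__(self):
        self.facts = {}                        # concept -> value

    def learn(self, concept, value):
        self.facts[concept] = value

    def export(self):
        return dict(self.facts)

    def implant(self, facts, transform=None):
        # Optionally transform/filter incoming knowledge for compatibility
        # with what the learning agent already "knows".
        for concept, value in facts.items():
            if transform:
                concept, value = transform(concept, value)
            self.facts.setdefault(concept, value)

teacher, learner = KnowledgeBase(), KnowledgeBase()
teacher.learn("price(widget)", 3.50)
learner.implant(teacher.export())
print(learner.facts)   # {'price(widget)': 3.5}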
I would presume that an agent-oriented knowledge base could well be radically different than an eLearning-oriented knowledge base, but that's an untested conjecture as well.
What I do know with certainty is that it would be advantageous to investigate the potential for sharing of software and techniques between agent learning (aLearning?) and eLearning.
Ultimately, any synergy may break down over a crucial distinction: aLearning is primarily concerned with discovering wholly new knowledge and the actual "imparting" of the knowledge is a trivial technical detail, whereas eLearning deals with a body of existing knowledge and focuses on the difficult issues of how individual "learners" cope with and adapt to lessons that are presented to them. Nevertheless, it would be potentially valuable for eLearning to focus more attention on automation of the knowledge base (especially updating), and for aLearning to focus more attention on how "raw" knowledge is best transformed into specialized knowledge formats that are more easily digested by simpler software agents.
There is another thorny issue for aLearning: unless a set of agents are going to have identical code and knowledge, there is always the possibility that the knowledge from one agent cannot simply be implanted into another agent, but may need to be transformed and filtered so that it is compatible with knowledge that may be pre-existing within the learning agent. Thus, we may come around to the eLearning problem of focusing on how to present the accepted knowledge in a form that is acceptable to the learning entity. Knowledge is based on concepts and different agents may already "know" different or incompatible variations of the required concepts.
That leads me to conclude that eLearning can learn from aLearning and aLearning can learn from eLearning.
February 13, 2004
Code Considered Harmful
Probably the thorniest issue for autonomous software agents is the difficulty of proving that the code for the agent both meets its specification and that the specification is "correct". The main problem is that hand-coded agent logic is inherently error-prone. In the early stages of the development of the software agent industry that may not be so much of a problem given the high-powered talent being thrown at the problem, but as the industry matures and grows to the stage where the great "unwashed masses" of "professionals" begin to cobble together agent logic, then the inability to guarantee that agent logic is flawless becomes a real problem.
The solution is twofold: move towards a declarative style of logic that is both less error-prone and also more easily proven, and develop a much richer software agent infrastructure so that most of the grunt (error-prone) logic is pushed down into the common, shared infrastructure where more high-powered technologists can apply the necessary intellectual power to assure that the "write one, utilize many times" code really is "correct".
A third element is that the specification of the agent must be externally available so that the software agent infrastructure can monitor the execution of the agent and validate that it adheres to the specification.
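A minimal sketch of such an externally available, declarative specification, checked by an infrastructure-level monitor, might look like the following; the spec fields and the SpecMonitor class are invented for illustration.

# Sketch of a declarative agent specification that the infrastructure can check
# against the agent's observed behavior at run time. All names are illustrative.

spec = {
    "max_requests_per_minute": 60,       # declarative limits instead of hand-coded checks
    "allowed_actions": {"query", "notify"},
}

class SpecMonitor:
    def __init__(self, spec):
        self.spec, self.request_count = spec, 0

    def observe(self, action):
        self.request_count += 1
        if action not in self.spec["allowed_actions"]:
            raise RuntimeError(f"agent violated its specification: {action!r}")
        if self.request_count > self.spec["max_requests_per_minute"]:
            raise RuntimeError("agent exceeded its declared request rate")

monitor = SpecMonitor(spec)
monitor.observe("query")        # fine
monitor.observe("notify")       # fine
try:
    monitor.observe("delete")   # not in the declared specification
except RuntimeError as e:
    print(e)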
February 12, 2004
Is it a Buggy Agent or a Malicious Virus?
It may not be technically feasible to automatically detect whether a dysfunctional software agent is simply one with some careless bugs or whether the code was dysfunctional by intent, with that dysfunction being malicious. Of course, most software professionals can probably tell from observation of the effects of an agent whether it is merely doing "dumb" things due to bugs versus doing clearly harmful actions, but trying to codify that common sense into automated software and trying to predict the behavior of an agent is a completely different story.
Also, a deceitful rogue agent might behave normally most of the time and only occasionally engage in its malevolent behavior.
Also, as software systems get much larger and much more complex (e.g., with emergent behavior) and the behavior of a single agent becomes much more atomic, there is the distinct possibility that a virus "system" could be constructed where each sub-agent is clearly benign, but the malevolent behavior emerges from the combined behavior of the agents.
February 11, 2004
Would You Trust a Software Agent?
Why would you be willing to trust a software agent? How could you trust a software agent? Why would anybody trust a software agent? Would you trust a software agent to carry out a financial transaction? Would you trust a software agent to do something important? Superficially, these are great and profound questions. But at a deeper level they are truly nonsensical.
First, the questions beg the question of what a software agent really is. People asking those profound questions are probably merely assuming that a software agent is just a marketing/jargon buzzword for just another packaging of computer software.
But substitute "person" for "software agent" and ask the profound questions again. Now, clearly, we immediately leap to the conclusion that we can only trust a person if that person inspires trust and can offer some proof that they are trustworthy. Or maybe, some trusted third party vouches for them.
Now, it becomes clear that we can trust a software agent if and only if either someone we do trust recommends that we trust the software agent or the software agent itself is able to directly (or indirectly through some sort of verification service) offer up some credentials that we can both comprehend and trust. If we trust the recommendation or the credentials, then we can trust the software agent, and if we don't have a trustworthy recommendation or don't have faith in the credentials, then we can't (and shouldn't) trust the software agent.
Clearly, the implied requirements for viable software agent technology are that the software agent must be able to directly (or indirectly through a trusted verification service) present believable credentials, the software agent must adhere to the details of those credentials, and that a robust software agent technology infrastructure must be in place to assure that each software agent adheres to its credentials. An implied requirement is that a "network of trust and reputation" be built up over time so that a prospective user of a software agent can query that network both for the basic credentials as well as reports (positive or negative) of previous users.
In some sense there is at least one exception to the above: you may be working within a business or other organization and be told to use (and hence implicitly trust) a software agent that is either proprietary to the organization or specifically selected by the organization. Although this could be similar to a third-party recommendation, the user may essentially have no true choice and must use the software agent regardless of whether there is any sense of trust established. This is a dangerous situation, but will be all too common until software agent technology matures sufficiently.
Cynics would argue that our track record with software over the past fifty years strongly suggests that you cannot and should not trust any software. Part of the truth of that objection rests on the fact that we have never had a strong enough incentive to carry out the necessary tasks to construct a system of trustable software. The functional demands and autonomous operation of software agents do indeed now offer such incentives. But only with a culture of trust and a robust software agent infrastructure that enforces trust can true trust come into existence.
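As a toy sketch of consulting such a network of trust and reputation before trusting an agent, something like the following check might be made; the trust_network structure and should_trust function are purely illustrative assumptions.

# Sketch of consulting a "network of trust and reputation" before trusting an
# agent: check its credentials and prior user reports. Names are illustrative.

trust_network = {
    "shopping-agent-17": {
        "credentials_verified_by": "some-verification-service",
        "reports": [+1, +1, -1, +1],     # prior users' positive/negative reports
    },
}

def should_trust(agent_id, min_reputation=0.5):
    record = trust_network.get(agent_id)
    if record is None or not record.get("credentials_verified_by"):
        return False                      # no credentials we can comprehend and trust
    reports = record.get("reports", [])
    reputation = sum(reports) / len(reports) if reports else 0.0
    return reputation >= min_reputation

print(should_trust("shopping-agent-17"))   # True: verified credentials, decent reputation
print(should_trust("unknown-agent"))       # False: no recommendation, no credentials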
February 11, 2004
Agents Among the Madding Non-Agent Crowds
With the heightened interest in e-commerce, web services, and XML, the field of distributed computing will be quite crowded indeed, even before the first agent arrives. Even legacy applications will be talking XML and negotiating and contracting for web services. So, software agents per se are not going to be introducing many of those concepts to the computing world. Sure, agents can work with the new web services, but as far as the web service is concerned, the agent could just as easily have been a legacy program or a "new" program situated on a server. Software agent technology still has great potential, but a fair amount of its thunder has been stolen by many of the features of the newer web services. But that's okay. A lot of the web services work was influenced by the early agent research.
Software agents and their researchers will simply have to learn to compete in a crowded market.
It may also be that this is a call for software agent researchers to focus anew on further and higher objectives.
January 31, 2004
Agents at All Levels of the System
If software agent technology is really up to snuff, then the agent paradigm should be seamlessly applicable to all levels of system design, from hardware and operating systems through middleware, applications, and web services, up to user interfaces.
The degree to which a software agent concept seems applicable to only one domain of software design is the degree to which the concept is not very useful.
The agent paradigm should be applicable at the hardware instruction set level, with individual instructions being parceled out to hardware agents, with each agent executing independently, possibly communicating with other agents, and then returning results to be integrated into the process state.
The traditional operating system concept of a "process" needs to be upgraded to fully support the capabilities of software agents.
January 31, 2004
Using Human Assistance to Supplement Software Agents
Although we would certainly like to completely automate many tasks, some are simply beyond the capabilities of current software technology, so it would seem that a partnership between humans and agents would be advantageous. The idea is that a software agent is given a goal, breaks the goal down into tasks or sub-goals, and then contracts out a sub-goal to a pool of human agents who can then work on the sub-goal or task and deliver results back to the agent.
This could work in the opposite direction as well, where a human agent contracts out to a software agent to pursue some goal, all as part of a larger agent-based system. Another variation is the interface between two active agent systems, where people might negotiate the interface or at least one person negotiates with an agent.
Be careful not to confuse a human agent with the "user". Each human agent can be thought of as a user in their own right, especially if you think of a recursive sub-dividing of the problem to be solved. Maybe the important thing is to understand when you are dealing with agents (human or software) in a hierarchy with delegation of responsibilities and when you are dealing with agents in a network or community where there aren't clear hierarchical roles.
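A minimal sketch of the delegation scheme described above might look like the following, with simple queues standing in for the human work pool; all names are invented for illustration.

# Sketch of a software agent contracting a sub-goal out to a pool of human
# workers and folding their results back in. All names here are illustrative.
import queue

human_task_queue = queue.Queue()       # stand-in for the human work pool's inbox
human_result_queue = queue.Queue()     # stand-in for results coming back

def delegate_subgoals(goal):
    # The agent breaks its goal into sub-goals and contracts them out to humans.
    subgoals = [f"{goal}: part {i}" for i in (1, 2)]
    for sg in subgoals:
        human_task_queue.put(sg)
    return len(subgoals)

def human_worker():
    # A person (or a front end for one) picks up tasks and returns judgments.
    while not human_task_queue.empty():
        task = human_task_queue.get()
        human_result_queue.put(f"human judgment on '{task}'")

n = delegate_subgoals("label ambiguous photos")
human_worker()                                       # humans do the parts software can't
results = [human_result_queue.get() for _ in range(n)]
print(results)                                       # the agent folds results back into its goal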
January 31, 2004
Duality of a Software Agent
The single, most critical problem with a software agent is assuring that it operates correctly and is proceeding towards its goal(s). With non-agent software a human being can intervene and basically say "Hey, something doesn't look right". So, we need an automated version of that human monitoring and intervention. To me, that implies that there always has to be a "second" or "other" piece of software that knows of the agent and what it is trying to accomplish, to a sufficient level of detail that the "agent monitor" can decide when the agent has become dysfunctional.
It seems to me that this "duality" is the heart and soul of a successful software agent design and implementation. Without it, you are truly flying blind.
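A toy sketch of such an agent/monitor pair might look like the following, where the monitor knows the agent's goal and flags the agent as dysfunctional once it stops making progress; the WorkerAgent and AgentMonitor classes are purely illustrative.

# Sketch of the "duality" described above: every agent is paired with a monitor
# that knows its goal and flags dysfunction. All names here are illustrative.

class WorkerAgent:
    def __init__(self, goal_count):
        self.goal_count, self.done = goal_count, 0
    def step(self):
        if self.done < 2:            # a deliberate bug: the agent stalls after 2 items
            self.done += 1

class AgentMonitor:
    """The 'other' piece of software that knows what the agent should achieve."""
    def __init__(self, agent):
        self.agent, self.last_done = agent, -1
    def check(self):
        if self.agent.done == self.last_done and self.agent.done < self.agent.goal_count:
            return "dysfunctional: no progress toward goal"
        self.last_done = self.agent.done
        return "ok"

agent = WorkerAgent(goal_count=5)
monitor = AgentMonitor(agent)
for _ in range(4):
    agent.step()
    print(monitor.check())   # ok, ok, then dysfunction reported once the agent stalls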
In traditional software quality assurance (SQA), we write test software to statically "put the software through its paces". Some advanced systems may have diagnostic or tuning software to "check up on" running, production software, but it seems to me that the state of the art for such software simply isn't up to the task of handling the complexity inherent in software agent technology, especially when non-deterministic and heuristic techniques and even full-blown "artificial intelligence" technology is utilized.
I'm tempted to refer to this as a dichotomy between a software agent and its alter-ego.
Updated: February 08, 2006 09:13:41 PM -0500
Copyright © 2004 John W. Krupansky d/b/a Base Technology