TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
I spent two years writing and editing troubleshooters for Microsoft. These
troubleshooters use back-end technology similar to the Office Assistant's to
make them "intelligent," but they didn't use the much-maligned Agent
technology.
The troubleshooters (and the Office Assistant/Answer Wizard) use
sophisticated Bayesian modeling to "guess" the probability that each given
help topic is the one you're looking for, based on what you're doing, or to
estimate the most likely cause of your problem, based on the symptoms
you've already given.
Microsoft is quite proud of the technology, though it hasn't gotten much
press. You can read about it at http://research.microsoft.com/dtas. But it's
not as simple as throwing together a help file. The Answer Wizard takes one
or two people working full time over the course of the project just to
generate all the queries and make it all work. Troubleshooters are extremely
labor intensive, and their organization is backwards compared to every SME's
way of thinking: you build them by enumerating all of the possible causes
and working forward through the symptoms. The troubleshooter, when running,
then reverses this, checking symptoms first and then evaluating causes.
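To make that reversal concrete, here's a rough sketch (not Microsoft's actual engine; the causes, symptoms, and probabilities are all invented for illustration) of authoring a model forward, from causes to symptoms, and then inverting it at runtime with Bayes' rule to rank causes given observed symptoms:

```python
# Hypothetical sketch: the author specifies, for each cause, a prior and
# P(symptom | cause). At runtime, Bayes' rule reverses the direction,
# scoring each cause by P(cause) * product of P(symptom | cause) and
# normalizing. All names and numbers below are made up.
model = {
    "bad_driver":   {"prior": 0.3, "symptoms": {"no_print": 0.9, "garbled": 0.6}},
    "cable_loose":  {"prior": 0.5, "symptoms": {"no_print": 0.7, "garbled": 0.1}},
    "out_of_toner": {"prior": 0.2, "symptoms": {"no_print": 0.2, "garbled": 0.8}},
}

def rank_causes(observed):
    """Score each cause given the observed symptoms, most likely first."""
    scores = {}
    for cause, info in model.items():
        p = info["prior"]
        for symptom in observed:
            p *= info["symptoms"].get(symptom, 0.01)  # small default likelihood
        scores[cause] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for cause, prob in rank_causes(["no_print"]):
    print(f"{cause}: {prob:.2f}")
```

The authoring pain described above comes from filling in that forward table exhaustively; the runtime inversion is the easy part.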
All of us working on the troubleshooters had at least a two-month learning
curve before we could make the tool do what we wanted and generate
anything productive. I think the end result, however, can be very helpful
for users diagnosing problems.
I don't know whether Microsoft has made its Answer Wizard technology
available, and I haven't worked with it personally, but you might search the
MSDN site to find out. It sounds like a few other HATs provide similar
functionality (I have a demo CD for DocToHelp that claims to do this), but
I don't know how the systems compare.
Early versions of the troubleshooter tools are available at the Research
site listed above, though they're currently licensed only for academic use,
and the tools to compile and execute troubleshooters are not available--yet.
One of my current projects is creating online help for a new troubleshooter
tool that greatly streamlines the process.
So, to answer your specific questions:
> 1) How your programmers implemented the system.
They basically built an ActiveX control that serves up pages on the local
machine, based on decisions made by the troubleshooter engine, which was
written by the Research group. Very code-intensive: many person-years, lots
of testing, and it still has problems.
If it's available, I believe the Answer Wizard technology is substantially
more mature, with no coding necessary. Instead, a help author must generate
Answer Wizard files that index the help system appropriately. Testing
queries and adjusting those files is the bulk of the work.
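I don't know the actual Answer Wizard file format, but the authoring loop, testing queries and tuning the index until the right topics surface, resembles something like this toy keyword index (topic names and keywords are invented):

```python
# Toy sketch of query-to-topic matching. The real Answer Wizard files and
# ranking are different; this just illustrates the test-and-adjust loop.
index = {
    "print a document": {"print", "printing", "printer"},
    "insert a table":   {"table", "insert", "rows", "columns"},
}

def match_topics(query):
    """Return topics sharing at least one keyword with the query, best first."""
    words = set(query.lower().split())
    scored = [(topic, len(words & keywords))
              for topic, keywords in index.items()]
    return [t for t, s in sorted(scored, key=lambda p: p[1], reverse=True)
            if s > 0]

print(match_topics("how do I print"))
```

When a test query surfaces the wrong topic (or nothing), the author adjusts the index entries and reruns, which is where the bulk of the time goes.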
> 2) How you revised your writing so that the help topics work within this
> type of system. (I am already planning a very granular, structured writing
> approach that facilitates single-sourcing in Frame with WebWorks).
Structured information is essential for these help systems, so you're on the
right track here. For troubleshooters, we broke information down into
causes, symptoms, configurations, and problems, and then combined these in
different ways on different pages.
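As a rough sketch of that breakdown (the structure and field names are my own for illustration, not the actual troubleshooter schema), the reusable chunks might be modeled like this:

```python
# Hypothetical sketch of chunking help content into the four categories
# mentioned above; the same chunk can be reused on different pages.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    kind: str   # "cause", "symptom", "configuration", or "problem"
    title: str
    body: str

@dataclass
class Page:
    title: str
    chunks: list = field(default_factory=list)

# Invented example content:
no_print = Chunk("symptom", "Nothing prints", "The print job never appears.")
cable = Chunk("cause", "Loose cable", "Check that the printer cable is seated.")

page = Page("Printer does not print", [no_print, cable])
print([c.kind for c in page.chunks])
```

Writing each chunk so it stands alone is what makes the recombination work, which is also why a granular single-sourcing approach fits well.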
> 3) How you feel about assigning personality to agents (from what I gather
> they are universally hated, but maybe providing one as an option wouldn't
> hurt . . .)
I used to hate 'em, until I learned about the Bayesian stuff behind the
scenes. Did you know that the Office Assistant, whenever it's showing, is
actively trying to guess what task you're working on? It's also trying to
gauge whether or not you're getting frustrated. When the frustration gauge
reaches a certain value, the assistant interrupts, asks if you would like
help, and then lists what it thinks are the most relevant topics, based on
your mouse clicks and actions. Cool! Too bad it only does simple stuff, and
most of its suggestions are worthless. But now and then I learn something
from it, and I don't mind Einstein or Will Shakespeare sitting there
anymore... I'm sure it will improve over time. But I'm in the minority, I
know...
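Conceptually, that frustration gauge amounts to something like the sketch below. The real Office Assistant's heuristics are far more elaborate; these event names, weights, and the threshold are all invented:

```python
# Hypothetical sketch of a frustration gauge: weight user events,
# accumulate a score (never below zero), and offer help once it
# crosses a threshold. Numbers are made up for illustration.
EVENT_WEIGHTS = {
    "undo": 2.0,
    "repeated_click": 1.5,
    "dialog_cancel": 1.0,
    "task_completed": -3.0,  # success lowers the gauge
}
THRESHOLD = 5.0

def should_offer_help(events):
    score = 0.0
    for event in events:
        score = max(0.0, score + EVENT_WEIGHTS.get(event, 0.0))
        if score >= THRESHOLD:
            return True
    return False

print(should_offer_help(["undo", "repeated_click", "undo"]))
```

Tuning those weights so the assistant interrupts at useful moments, rather than constantly, is presumably the hard part.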
> 4) Any other insights or comments that you may have are also welcome!
In short, implementing a "smart" help system is a major development effort.
If any of the existing technology is available, use it--unless you have a
PhD in Artificial Intelligence on staff and a dozen person-years available
to code, author, and test...
Ellen Kelly wrote:
>
> Now to my question . . .I am going to propose a "smart help"
> system - along
> the lines of Microsoft help agents, but without the personality. I just
> want the users to be able to type a question, and then link to our help
> system.
>
> I am curious to see if anyone has worked on developing a smart help,
> agent-based type of system, and if so, would you be willing to share your
> experiences? We are looking at this as part of the program,
> actually, so in
> the spec. it will be assigned development person hours and QA time - the
> full works.