The Sunday Times Plus

11th May 1997


Vegi on the World Wide Web

By Prof. Mahinda Palihawardane

If you are a vegetarian and want to know the latest about vegetarianism world-wide, break into the World Wide Web at http://www.veg.org/Docs/vegindex.html. You are in for quite a surprise. The page was last updated on March 29, 1997 (it may have been updated again by the time this appears in print!).

This "Mega Index to Vegetarian Information" is described as ‘tool’ for finding information on issues relating to vegetarianism. Among others the Index lists the following:

Beginners: The Good Veg Guide (Viva!); Beginners Start Here (VegSocUK)

Cholesterol: Fats and Cholesterol (VegSocUK); Controlling Cholesterol (PCRM)

Definitions: Glossary of Vegetarian Terms (VegPages)

Events: Vegetarian Events (VegPages)

FAQs: Frequently Asked Questions (VegPages)

Health: Vegetarian Pages Health References

History: History of Vegetarianism (ARRS)

Leather: Alternatives to Leather (VegSocUK)

News: Vegetarian News (VegPages)

Nutrition: Vegetarian Pages Nutrition References

Organisations: Vegetarian Organisations Around the World (WGTV)

People: Famous Vegetarians; Vegetarians on the Net (VegPages)

Publications: Vegetarian Books and Software (VegPages)

Religion: Religion and Vegetarianism (VegPages)

Sports: The Food of Champions (Viva!)

Travel: World Guide to Vegetarianism (WGTV)

Vitamin B12: Vitamin B12 facts for Vegetarians (PCRM).

In the above list, ARRS stands for Animal Rights Resource Site; PCRM for Physicians' Committee for Responsible Medicine (USA); Viva! for Vegetarians' International Voice for Animals; and WGTV for World Guide to Vegetarianism. The other abbreviations are self-explanatory.

http://www.veg.org/veg/Guide/Internet/www.html: This World Guide to Vegetarianism provides some very exciting news.

For example, it lists http://www.vegweb.com/, which is described as "The most popular vegetarian web site. Web-based message board; over 2000 vegan recipes, composting guide, events lists, glossary, book reviews...".

The web site of the Vegetarian Society UK (http://www.veg.org/orgs/VegSocUK/) supplies some of the most interesting data. It lists "lots of useful information. Contains the entire collection of VegSocUK Infosheets, reports, articles, campaigns, local resources, information for youths, new vegetarians, recipes, information on ingredients".

The web site http://www.veg.org/orgs/IVU/ is that of the International Vegetarian Union, the umbrella group for vegetarian organisations.

A support network for vegetarian teens world-wide, "run entirely by and for vegetarian and vegan youth", is the Vegetarian Youth Network. Its web site is http://www.geocities.com/RodeoDrive/1154.


Now I can make my computer do anything I want

For certain young people, computers are more than a way to play the latest video games, even more than a way into today's competitive job market. They are the machines that give them a voice.

These youngsters are as keen to communicate and make their way in the world as any others. But they have severe disabilities, are confined to wheelchairs and, not so long ago, would have been written off by potential employers.

Jo Jones is one of these young people. She is 20, suffers from multiple disabilities and has spent her life in a wheelchair. Advances in voice-activated software have changed her life and those of many like her.

Companies such as Dragon Systems and PS Training have made it possible for those with restricted movement and impaired vision to use a computer with the same freedom as an able-bodied person.

"I couldn’t type and 1 couldn’t see properly, so I couldn’t work or use a computer- and now I can, Jo says. "With Dragon Voice, I can make the computer do anything I want".

This sort of technology is helping severely disabled people learn skills that can take them into the jobs marketplace. One school determined to give its pupils - including Jo - a chance of future employment is Chailey Heritage in East Sussex. Founded in 1903, Chailey is a Non-Maintained Day Special School for children and young people aged from three to 20 or more, all with severe and multiple physical and learning disabilities.

From an early age pupils are encouraged to use computers as part of their everyday life. For many it is an opportunity to develop their creative side.

Severely restricted mobility has not stopped Louisa Makolski, 14, from getting to grips with technology - her father spent a lot of time developing a system of chin-operated controls specially for her. Now the IT industry is also helping.

This week the Dell Computer Corporation gave the staff and children at Chailey a new Dell Latitude XPi P133 notebook computer capable of running Dragon Voice. Nathan Wakeford, 13, was delighted. "I am sure this will make schoolwork a lot easier," he said. "The notebook looks really good fun." Peter Watts, Dell Latitude product manager, says the notebook is perfect for the children.

"It has five hours of battery life and a further five hours of hot swap time, which will be ideal if it is being used from a wheelchair which might get tangled up with a mains lead," he said.

"We at Dell are looking forward to seeing mobile computing making a real difference to the Heritage."

PS Training, the London-based supplier of Dragon Systems software, has donated a copy of the latest version of Dragon Voice, coupled with its own EziSpeech software, designed to help anyone learn to use a computer quickly using only voice commands.

Janet Duchesne, managing director of PS Training, said: "It really does allow users to operate any Windows-based software simply by using their voice. Dragon Voice and EziSpeech learn the way a user speaks, without the user having to speak clearly or even loudly."

Regional accents, and even speech impediments, are not a problem as the software associates individual words with specific sounds. Dragon translates those sounds into words on the screen. Where a word has a number of different meanings, Dragon looks at the words around it for clues on the context.
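That last idea - picking the right sound-alike word from its neighbours - can be pictured with a toy sketch. The Python fragment below uses invented bigram counts purely for illustration; Dragon's actual models are proprietary and far more sophisticated.

    # Choosing between sound-alike words by context. The counts are
    # invented; a real recogniser would learn them from large text corpora.
    BIGRAM_COUNTS = {
        ("to", "their"): 60,
        ("to", "there"): 50,
        ("their", "house"): 95,
        ("there", "house"): 2,
    }

    def score(prev_word, candidate, next_word):
        # How often the candidate follows its left neighbour and
        # precedes its right neighbour in the training text.
        return (BIGRAM_COUNTS.get((prev_word, candidate), 0)
                + BIGRAM_COUNTS.get((candidate, next_word), 0))

    def pick(prev_word, candidates, next_word):
        # Return the candidate that best fits both neighbours.
        return max(candidates, key=lambda w: score(prev_word, w, next_word))

    print(pick("to", ["their", "there"], "house"))   # -> "their"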

Headmaster Alistair Bruce is excited about the possibilities of the software. "This will be wonderful for some of the children who have no mobility and also have difficulty with language," he says. "It will help them realise their full potential."

At Chailey, all children capable of interacting with a computer are encouraged to use one, although some need special aids. Volunteers who staff the special needs workshop have produced some remarkably ingenious solutions to mobility challenges posed by individual children.

A series of infra-red sensitive tracks forms the basis for an automated wheelchair guidance system which does not require a helper to push the wheelchair user. The tram lines run between the school buildings, linking therapy rooms, classrooms and communal areas.

Chailey pupils are lucky enough to have computer-aware people in command - the Head's wife, Patricia Bruce, is in charge of IT - and the support of a highly active parent/teacher group. And an army of volunteers helps the children with swimming, horse riding and other activities.

The support does not stop when a pupil leaves the education system: the Chailey Heritage Enterprise Centre was opened recently to provide a place where people with disabilities, up to the age of 25, can meet, get extensive training in hi-tech skills and gain confidence.

"We’re here to give them hope for the future", says headmaster Alistair Bruce. "And technology is at the root of that hope."-Interface/Times


Searching the Internet

Combining the skills of the librarian and the computer scientist may help organise the anarchy of the Internet

One sometimes hears the Internet characterized as the world's library for the digital age. This description does not stand up under even casual examination. The Internet - and particularly its collection of multimedia resources known as the World Wide Web - was not designed to support the organized publication and retrieval of information as libraries are. It has evolved into what might be thought of as a chaotic repository for the collective output of the world's digital "printing presses." This storehouse of information contains not only books and papers but raw scientific data, menus, meeting minutes, advertisements, video and audio recordings, and transcripts of interactive conversations. The ephemeral mixes everywhere with works of lasting importance.

In short, the Net is not a digital library. But if it is to continue to grow and thrive as a new means of communication, something very much like traditional library services will be needed to organize, access and preserve networked information.

Even then, the Net will not resemble a traditional library, because its contents are more widely dispersed than a standard collection. Consequently, the librarian’s classification and selection skills must be complemented by the computer scientist’s ability to automate the task of indexing and storing information. Only a synthesis of the differing perspectives brought by both professions will allow this new medium to remain viable.

At the moment, computer technology bears most of the responsibility for organizing information on the Internet.

In theory, software that automatically classifies and indexes collections of digital data can address the glut of information on the Net - and the inability of human indexers and bibliographers to cope with it.

Automating information access has the advantage of directly exploiting the rapidly dropping costs of computers and avoiding the high expense and delays of human indexing.

But, as anyone who has ever sought information on the Web knows, these automated tools categorize information differently than people do.

In one sense, the job performed by the various indexing and cataloguing tools known as search engines is highly democratic.

Machine-based approaches provide uniform and equal access to all the information on the Net. In practice, this electronic egalitarianism can prove a mixed blessing.

Web "surfers" who type in a search request are often overwhelmed by thousands of responses. The search results frequently contain references to irrelevant Web sites while leaving out others that hold important material.

Crawling the Web

The nature of electronic indexing can be understood by examining the way Web search engines, such as Lycos or Digital Equipment Corporation’s AltaVista, construct indexes and find information requested by a user. Periodically, they dispatch programs (sometimes referred to as Web crawlers, spiders or indexing robots) to every site they can identify on the Web - each site being a set of documents, called pages, that can be accessed over the network. The Web crawlers download and then examine these pages and extract indexing information that can be used to describe them.

This process - details of which vary among search engines - may include simply locating most of the words that appear in Web pages or performing sophisticated analyses to identify key words and phrases. These data are then stored in the search engine's database, along with an address, termed a uniform resource locator (URL), that represents where the file resides. A user then deploys a browser, such as the familiar Netscape, to submit queries to the search engine's database. The query produces a list of Web resources - the URLs that can be clicked on to connect to the sites identified by the search.
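As a rough sketch of that whole loop - crawl, index, query - the Python program below strings the three steps together. The starting URL is a placeholder, and everything a production engine needs (politeness rules, ranking, scale) is omitted.

    # A toy search engine: crawl pages, build an inverted index from the
    # words they contain, then answer queries against that index.
    import re
    import urllib.request
    from collections import deque

    def crawl(start_url, max_pages=10):
        # Breadth-first crawl from start_url; return {url: page_text}.
        seen, queue, pages = set(), deque([start_url]), {}
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            except (OSError, ValueError):
                continue                                 # unreachable page: skip it
            pages[url] = re.sub(r"<[^>]+>", " ", html)   # strip HTML tags
            for link in re.findall(r'href="(http[^"]+)"', html):
                queue.append(link)                       # follow absolute links
        return pages

    def build_index(pages):
        # Map each word to the set of URLs whose pages contain it.
        index = {}
        for url, text in pages.items():
            for word in re.findall(r"[a-z]+", text.lower()):
                index.setdefault(word, set()).add(url)
        return index

    def search(index, query):
        # Return the URLs containing every word of the query.
        hits = [index.get(w, set()) for w in query.lower().split()]
        return set.intersection(*hits) if hits else set()

    if __name__ == "__main__":
        index = build_index(crawl("http://www.example.com/"))  # placeholder URL
        print(search(index, "vegetarian recipes"))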

Existing search engines service millions of queries a day. Yet it has become clear that they are less than ideal for retrieving an ever-growing body of information on the Web. In contrast to human indexers, automated programs have difficulty identifying characteristics of a document such as its overall theme or its genre - whether it is a poem or a play, or even an advertisement.

The Web, moreover, still lacks standards that would facilitate automated indexing. As a result, documents on the Web are not structured so that programs can reliably extract the routine information that a human indexer might find through a cursory inspection: author, date of publication, length of text and subject matter. (This information is known as metadata.) A Web crawler might turn up the desired article authored by Jane Doe. But it might also find thousands of other articles in which such a common name is mentioned in the text or in a bibliographic reference.
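The Jane Doe problem can be made concrete: matching on raw text cannot tell the author of a document from a passing mention of her, whereas a dedicated metadata field can. A toy Python comparison, with invented documents:

    # Full-text matching versus an explicit author field (invented data).
    DOCS = [
        {"author": "Jane Doe",   "text": "Libraries in the digital age..."},
        {"author": "John Smith", "text": "...as Jane Doe argued in 1995..."},
    ]

    name = "Jane Doe"
    full_text_hits = [d for d in DOCS
                      if name in d["text"] or name in d["author"]]
    by_author = [d for d in DOCS if d["author"] == name]

    print(len(full_text_hits))   # 2 - includes the mere citation
    print(len(by_author))        # 1 - only the article she wrote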

Publishers sometimes abuse the indiscriminate character of automated indexing. A Web site can bias the selection process to attract attention to itself by repeating within a document a word, such as "sex," that is known to be queried often. The reason: a search engine will display first the URLs for the documents that mention a search term most frequently. In contrast, humans can easily see around simpleminded tricks.
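A toy illustration of why this trick works, assuming the simple rank-by-repetition rule just described (the documents are invented):

    # Ranking by raw term frequency, and how keyword stuffing abuses it.
    DOCS = {
        "honest-page":  "a thoughtful essay that mentions sex once, in passing",
        "stuffed-page": "sex sex sex sex sex sex buy our product sex sex sex",
    }

    def rank(term):
        # Order documents by how many times they repeat the term.
        return sorted(DOCS, key=lambda d: DOCS[d].split().count(term),
                      reverse=True)

    print(rank("sex"))   # the stuffed page comes first, usefulness aside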

The professional indexer can describe the components of individual pages of all sorts (from text to video) and can clarify how those parts fit together into a database of information. Civil War photographs, for example, might form part of a collection that also includes period music and soldier diaries. A human indexer can describe a site’s rules for the collection and retention of programs in, say, an archive that stores Macintosh software. Analyses of a site’s purpose, history and policies are beyond the capabilities of a crawler program.

Another drawback of automated indexing is that most search engines recognize text only. The intense interest in the Web, though, has come about because of the medium’s ability to display images, whether graphics or video clips. Some research has moved forward toward finding colours or patterns within images. But no program can deduce the underlying meaning and cultural significance of an image (for example, that a group of men dining represents the Last Supper).
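One common approach in that research reduces each image to a histogram of its colours and compares the histograms. The toy Python sketch below (invented "images" made of named pixels) shows the idea - and also why it says nothing about what a scene depicts:

    # Comparing images by colour histogram. Each "image" here is just a
    # list of named pixel colours; real systems bin actual RGB values.
    from collections import Counter

    SUNSET_A = ["orange"] * 6 + ["red"] * 3 + ["blue"]
    SUNSET_B = ["orange"] * 5 + ["red"] * 4 + ["blue"]
    FOREST   = ["green"] * 8 + ["brown"] * 2

    def similarity(img1, img2):
        # Fraction of pixels whose colour distributions overlap.
        h1, h2 = Counter(img1), Counter(img2)
        overlap = sum(min(h1[c], h2[c]) for c in h1.keys() | h2.keys())
        return overlap / max(len(img1), len(img2))

    print(similarity(SUNSET_A, SUNSET_B))   # 0.9 - similar palettes
    print(similarity(SUNSET_A, FOREST))     # 0.0 - nothing in common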

At the same time, the way information is structured on the Web is changing so that it often cannot be examined by Web crawlers. Many Web pages are no longer static files that can be analyzed and indexed by such programs. In many cases, the information displayed in a document is computed by the Web site during a search in response to the user's request. The site might assemble a map, a table and a text document from different areas of its database - a disparate collection of information that conforms to the user's query. A newspaper's Web site, for instance, might allow a reader to specify that only stories on the oil-equipment business be displayed in a personalized version of the paper. The database of stories from which this document is put together could not be searched by a Web crawler that visits the site.

A growing body of research has attempted to address some of the problems involved with automated classification methods. One approach seeks to attach metadata to files so that indexing systems can collect this information.
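What "attaching metadata" might look like in practice: a few name-value tags in a page's header that a crawler can read directly, without guessing from the body text. The page below is invented; the element names follow the "DC." style of the Dublin Core effort described next.

    # Embedded metadata in an (invented) Web page, and a crawler-side
    # routine that extracts it without analysing the body text.
    import re

    PAGE = """
    <html><head>
    <meta name="DC.Title"   content="Searching the Internet">
    <meta name="DC.Creator" content="Jane Doe">
    <meta name="DC.Date"    content="1997-03-29">
    </head><body>...article text...</body></html>
    """

    def metadata(html):
        # Return {element: value} for every DC.* meta tag in the page.
        pattern = r'<meta name="DC\.(\w+)"\s+content="([^"]*)">'
        return dict(re.findall(pattern, html))

    print(metadata(PAGE))
    # {'Title': 'Searching the Internet', 'Creator': 'Jane Doe',
    #  'Date': '1997-03-29'}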

The most advanced effort is the Dublin Core Metadata program and an affiliated endeavor, the Warwick Framework - the first named after a workshop in Dublin, Ohio, the other for a colloquy in Warwick, England. The workshops have defined a set of metadata elements that are simpler than those in traditional library cataloguing and have also created methods for incorporating them within pages on the Web. -Scientific American

